query_id (string, length 32) | query (string, length 5–5.38k) | positive_passages (list, length 1–23) | negative_passages (list, length 4–100) | subset (string, 7 classes) |
---|---|---|---|---|
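Each row pairs a free-text query with annotated positive and negative passages (each passage a record with `docid`, `text`, and `title` fields) and a `subset` label naming the source benchmark. As a rough sketch of how rows with this schema could be consumed, the snippet below assumes the Hugging Face `datasets` library and a placeholder dataset path; neither is specified by this dump.

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual dataset path.
ds = load_dataset("org/retrieval-dataset", split="train")

for row in ds:
    query = row["query"]                  # free-text query, often a paper title
    positives = row["positive_passages"]  # list of {"docid", "text", "title"} records
    negatives = row["negative_passages"]  # list of {"docid", "text", "title"} records
    subset = row["subset"]                # source benchmark, e.g. "scidocsrr"
    # Example use: flatten into (query, passage, label) pairs for training a retriever.
    pairs = [(query, p["text"], 1) for p in positives] + \
            [(query, n["text"], 0) for n in negatives]
```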
5502ac6afe49f9787638cc9271788479 | Hateful Symbols or Hateful People? Predictive Features for Hate Speech Detection on Twitter | [
{
"docid": "843968fe4adff16e160c75105505db66",
"text": "As user-generated Web content increases, the amount of inappropriate and/or objectionable content also grows. Several scholarly communities are addressing how to detect and manage such content: research in computer vision focuses on detection of inappropriate images, natural language processing technology has advanced to recognize insults. However, profanity detection systems remain flawed. Current list-based profanity detection systems have two limitations. First, they are easy to circumvent and easily become stale - that is, they cannot adapt to misspellings, abbreviations, and the fast pace of profane slang evolution. Secondly, they offer a one-size fits all solution; they typically do not accommodate domain, community and context specific needs. However, social settings have their own normative behaviors - what is deemed acceptable in one community may not be in another. In this paper, through analysis of comments from a social news site, we provide evidence that current systems are performing poorly and evaluate the cases on which they fail. We then address community differences regarding creation/tolerance of profanity and suggest a shift to more contextually nuanced profanity detection systems.",
"title": ""
},
{
"docid": "726d0b31638e945b2620eca6824b84dd",
"text": "Profanity detection is often thought to be an easy task. However, past work has shown that current, list-based systems are performing poorly. They fail to adapt to evolving profane slang, identify profane terms that have been disguised or only partially censored (e.g., @ss, f$#%) or intentionally or unintentionally misspelled (e.g., biatch, shiiiit). For these reasons, they are easy to circumvent and have very poor recall. Secondly, they are a one-size fits all solution – making assumptions that the definition, use and perceptions of profane or inappropriate holds across all contexts. In this article, we present work that attempts to move beyond list-based profanity detection systems by identifying the context in which profanity occurs. The proposed system uses a set of comments from a social news site labeled by Amazon Mechanical Turk workers for the presence of profanity. This system far surpasses the performance of listbased profanity detection techniques. The use of crowdsourcing in this task suggests an opportunity to build profanity detection systems tailored to sites and communities.",
"title": ""
}
] | [
{
"docid": "c63d32013627d0bcea22e1ad62419e62",
"text": "According to its proponents, open source style software development has the capacity to compete successfully, and perhaps in many cases displace, traditional commercial development methods. In order to begin investigating such claims, we examine the development process of a major open source application, the Apache web server. By using email archives of source code change history and problem reports we quantify aspects of developer participation, core team size, code ownership, productivity, defect density, and problem resolution interval for this OSS project. This analysis reveals a unique process, which performs well on important measures. We conclude that hybrid forms of development that borrow the most effective techniques from both the OSS and commercial worlds may lead to high performance software processes.",
"title": ""
},
{
"docid": "c6d84be944630cec1b19d84db2ace2ee",
"text": "This paper describes an effort to model a student’s changing knowledge state during skill acquisition. Dynamic Bayes Nets (DBNs) provide a powerful way to represent and reason about uncertainty in time series data, and are therefore well-suited to model student knowledge. Many general-purpose Bayes net packages have been implemented and distributed; however, constructing DBNs often involves complicated coding effort. To address this problem, we introduce a tool called BNTSM. BNT-SM inputs a data set and a compact XML specification of a Bayes net model hypothesized by a researcher to describe causal relationships among student knowledge and observed behavior. BNT-SM generates and executes the code to train and test the model using the Bayes Net Toolbox [1]. Compared to the BNT code it outputs, BNT-SM reduces the number of lines of code required to use a DBN by a factor of 5. In addition to supporting more flexible models, we illustrate how to use BNT-SM to simulate Knowledge Tracing (KT) [2], an established technique for student modeling. The trained DBN does a better job of modeling and predicting student performance than the original KT code (Area Under Curve = 0.610 > 0.568), due to differences in how it estimates parameters.",
"title": ""
},
{
"docid": "590a44ab149b88e536e67622515fdd08",
"text": "Chitosan is considered to be one of the most promising and applicable materials in adsorption applications. The existence of amino and hydroxyl groups in its molecules contributes to many possible adsorption interactions between chitosan and pollutants (dyes, metals, ions, phenols, pharmaceuticals/drugs, pesticides, herbicides, etc.). These functional groups can help in establishing positions for modification. Based on the learning from previously published works in literature, researchers have achieved a modification of chitosan with a number of different functional groups. This work summarizes the published works of the last three years (2012-2014) regarding the modification reactions of chitosans (grafting, cross-linking, etc.) and their application to adsorption of different environmental pollutants (in liquid-phase).",
"title": ""
},
{
"docid": "8d7b5be74cb66d3f8e639fc96ba58692",
"text": "The aim of this paper is to discuss the significance and potential of a mixed methods approach in technology acceptance research. After critically reviewing the dominance of the quantitative survey method in TAMbased research, this paper reports a mixed methods study of user acceptance of emergency alert technology in order to illustrate the benefits of combining qualitative and quantitative techniques in a single study. The main conclusion is that a mixed methods approach provides opportunities to move beyond the vague conceptualizations of “usefulness” and “ease of use” and to advance our understanding of user acceptance of technology in context.",
"title": ""
},
{
"docid": "cbcf4ca356682ee9c09b87fa1cd26ba2",
"text": "The field of data analytics is currently going through a renaissance as a result of ever-increasing dataset sizes, the value of the models that can be trained from those datasets, and a surge in flexible, distributed programming models. In particular, the Apache Hadoop and Spark programming systems, as well as their supporting projects (e.g. HDFS, SparkSQL), have greatly simplified the analysis and transformation of datasets whose size exceeds the capacity of a single machine. While these programming models facilitate the use of distributed systems to analyze large datasets, they have been plagued by performance issues. The I/O performance bottlenecks of Hadoop are partially responsible for the creation of Spark. Performance bottlenecks in Spark due to the JVM object model, garbage collection, interpreted/managed execution, and other abstraction layers are responsible for the creation of additional optimization layers, such as Project Tungsten. Indeed, the Project Tungsten issue tracker states that the \"majority of Spark workloads are not bottlenecked by I/O or network, but rather CPU and memory\".\n In this work, we address the CPU and memory performance bottlenecks that exist in Apache Spark by accelerating user-written computational kernels using accelerators. We refer to our approach as Spark With Accelerated Tasks (SWAT). SWAT is an accelerated data analytics (ADA) framework that enables programmers to natively execute Spark applications on high performance hardware platforms with co-processors, while continuing to write their applications in a JVM-based language like Java or Scala. Runtime code generation creates OpenCL kernels from JVM bytecode, which are then executed on OpenCL accelerators. In our work we emphasize 1) full compatibility with a modern, existing, and accepted data analytics platform, 2) an asynchronous, event-driven, and resource-aware runtime, 3) multi-GPU memory management and caching, and 4) ease-of-use and programmability. Our performance evaluation demonstrates up to 3.24x overall application speedup relative to Spark across six machine learning benchmarks, with a detailed investigation of these performance improvements.",
"title": ""
},
{
"docid": "371d9b5fea9b1b70311cf12a3280ab66",
"text": "Cereal products fermented by lactic acid bacteria are documented first in Egypt and Iraq during 2000-3000 B.C. These are one of the oldest fermented foods. In 1907, Elie Metcnikoff was the first scientist who not only observes but also put forward the scientific basics of fermentation. Then to explore gut bacteria intensive researches were made in late 1940s. In 2006 FAO and WHO give the complete definition of probiotics, living microbes beneficial for health provided in feed. For treatment of Coccidiosis probiotic combinations of different microbes such as Lactobacillus, Bifidibacterium and Streptococcus are used now days. Coccidiosis, a parasitic disease mainly of poultry sector, caused by Eimeria specie’s. Coccidiosis causes serious damage to the intestinal epithelium resulting in diarrhea. This problem can be effectively controlled by the use of feed probiotics.",
"title": ""
},
{
"docid": "9888a7723089d2f1218e6e1a186a5e91",
"text": "This classic text offers you the key to understanding short circuits, open conductors and other problems relating to electric power systems that are subject to unbalanced conditions. Using the method of symmetrical components, acknowledged expert Paul M. Anderson provides comprehensive guidance for both finding solutions for faulted power systems and maintaining protective system applications. You'll learn to solve advanced problems, while gaining a thorough background in elementary configurations. Features you'll put to immediate use: Numerous examples and problems Clear, concise notation Analytical simplifications Matrix methods applicable to digital computer technology Extensive appendices",
"title": ""
},
{
"docid": "893f3d5ab013a9c156139ef2626b7053",
"text": "Intelligent systems capable of automatically understanding natural language text are important for many artificial intelligence applications including mobile phone voice assistants, computer vision, and robotics. Understanding language often constitutes fitting new information into a previously acquired view of the world. However, many machine reading systems rely on the text alone to infer its meaning. In this paper, we pursue a different approach; machine reading methods that make use of background knowledge to facilitate language understanding. To this end, we have developed two methods: The first method addresses prepositional phrase attachment ambiguity. It uses background knowledge within a semi-supervised machine learning algorithm that learns from both labeled and unlabeled data. This approach yields state-of-the-art results on two datasets against strong baselines; The second method extracts relationships from compound nouns. Our knowledge-aware method for compound noun analysis accurately extracts relationships and significantly outperforms a baseline that does not make use of background knowledge.",
"title": ""
},
{
"docid": "a395993ce7fb6fa144b79364724cd7dc",
"text": "High cesarean birth rates are an issue of international public health concern.1 Worries over such increases have led the World Health Organization to advise that Cesarean Section (CS) rates should not be more than 15%,2 with some evidence that CS rates above 15% are not associated with additional reduction in maternal and neonatal mortality and morbidity.3 Analyzing CS rates in different countries, including primary vs. repeat CS and potential reasons of these, provide important insights into the solution for reducing the overall CS rate. Robson,4 proposed a new classification system, the Robson Ten-Group Classification System to allow critical analysis according to characteristics of pregnancy (Table 1). The characteristics used are: (i) single or multiple pregnancy (ii) nulliparous, multiparous, or multiparous with a previous CS (iii) cephalic, breech presentation or other malpresentation (iv) spontaneous or induced labor (v) term or preterm births.",
"title": ""
},
{
"docid": "ed414502134a7423af6b54f17db72e8e",
"text": "Chatbots have been used in different scenarios for getting people interested in CS for decades. However, their potential for teaching basic concepts and their engaging effect has not been measured. In this paper we present a software platform called Chatbot designed to foster engagement while teaching basic CS concepts such as variables, conditionals and finite state automata, among others. We carried out two experiences using Chatbot and the well known platform Alice: 1) an online nation-wide competition, and 2) an in-class 15-lesson pilot course in 2 high schools. Data shows that retention and girl interest are higher with Chatbot than with Alice, indicating student engagement.",
"title": ""
},
{
"docid": "eabd58fbd89cba84d3d4fa117bcd84b5",
"text": "Today, peer-to-peer (p2p) networks have risen to the top echelon of information sharing on the Internet. It is a daunting task to prevent sharing of both legitimate and illegitimate information such as – music, movies, software, and child pornography – on p2p overt channels. Considering that, preventing covert channel information sharing is inconceivable given even its detection is near impossible. In this paper, we describe SURREAL – a technique for covert communication over the very popular p2p BitTorrent protocol. Standard BitTorrent protocol uses a 3-step handshake process and as such does not provide peer authentication service. In SURREAL, we have extended the standard handshake to a 6-step authenticated covert handshake to provide peer authentication service and robust peer anonymity with one way functions. After authenticating a potential covert partner, participating peers send data over an encrypted covert channel using one a standard BitTorrent message type. We have also SURREL’s security robustness to potential attacks. Finally, we have validated SURREAL’s performance and presented results comparing it with [4] and [5]. Keywords—Authentication, BitTorrent, covert channel, handshake, information hiding, p2p networks, security, steganography.",
"title": ""
},
{
"docid": "2e4c32f6a6322280f1d961cc90515fed",
"text": "The role of CMOS Image Sensors since their birth around the 1960s, has been changing a lot. Unlike the past, current CMOS Image Sensors are becoming competitive with regard to Charged Couple Device (CCD) technology. They offer many advantages with respect to CCD, such as lower power consumption, lower voltage operation, on-chip functionality and lower cost. Nevertheless, they are still too noisy and less sensitive than CCDs. Noise and sensitivity are the key-factors to compete with industrial and scientific CCDs. It must be pointed out also that there are several kinds of CMOS Image sensors, each of them to satisfy the huge demand in different areas, such as Digital photography, industrial vision, medical and space applications, electrostatic sensing, automotive, instrumentation and 3D vision systems. In the wake of that, a lot of research has been carried out, focusing on problems to be solved such as sensitivity, noise, power consumption, voltage operation, speed imaging and dynamic range. In this paper, CMOS Image Sensors are reviewed, providing information on the latest advances achieved, their applications, the new challenges and their limitations. In conclusion, the State-of-the-art of CMOS Image Sensors. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0ba036ae72811c02179842f1949974b6",
"text": "The authors propose a new climatic drought index: the standardized precipitation evapotranspiration index (SPEI). The SPEI is based on precipitation and temperature data, and it has the advantage of combining multiscalar character with the capacity to include the effects of temperature variability on drought assessment. The procedure to calculate the index is detailed and involves a climatic water balance, the accumulation of deficit/surplus at different time scales, and adjustment to a log-logistic probability distribution. Mathematically, the SPEI is similar to the standardized precipitation index (SPI), but it includes the role of temperature. Because the SPEI is based on a water balance, it can be compared to the self-calibrated Palmer drought severity index (sc-PDSI). Time series of the three indices were compared for a set of observatories with different climate characteristics, located in different parts of the world. Under global warming conditions, only the sc-PDSI and SPEI identified an increase in drought severity associated with higher water demand as a result of evapotranspiration. Relative to the sc-PDSI, the SPEI has the advantage of being multiscalar, which is crucial for drought analysis and monitoring.",
"title": ""
},
{
"docid": "14b0f4542d34a114fd84f14d1f0b53e8",
"text": "Selection the ideal mate is the most confusing process in the life of most people. To explore these issues to examine differences under graduates socio-economic status have on their preference of marriage partner selection in terms of their personality traits, socio-economic status and physical attractiveness. A total of 770 respondents participated in this study. The respondents were mainly college students studying in final year degree in professional and non professional courses. The result revealed that the respondents socio-economic status significantly influence preferences in marriage partners selection in terms of personality traits, socio-economic status and physical attractiveness.",
"title": ""
},
{
"docid": "3a301b11b704e34af05c9072d8353696",
"text": "Attention-deficit hyperactivity disorder (ADHD) is typically characterized as a disorder of inattention and hyperactivity/impulsivity but there is increasing evidence of deficits in motivation. Using positron emission tomography (PET), we showed decreased function in the brain dopamine reward pathway in adults with ADHD, which, we hypothesized, could underlie the motivation deficits in this disorder. To evaluate this hypothesis, we performed secondary analyses to assess the correlation between the PET measures of dopamine D2/D3 receptor and dopamine transporter availability (obtained with [11C]raclopride and [11C]cocaine, respectively) in the dopamine reward pathway (midbrain and nucleus accumbens) and a surrogate measure of trait motivation (assessed using the Achievement scale on the Multidimensional Personality Questionnaire or MPQ) in 45 ADHD participants and 41 controls. The Achievement scale was lower in ADHD participants than in controls (11±5 vs 14±3, P<0.001) and was significantly correlated with D2/D3 receptors (accumbens: r=0.39, P<0.008; midbrain: r=0.41, P<0.005) and transporters (accumbens: r=0.35, P<0.02) in ADHD participants, but not in controls. ADHD participants also had lower values in the Constraint factor and higher values in the Negative Emotionality factor of the MPQ but did not differ in the Positive Emotionality factor—and none of these were correlated with the dopamine measures. In ADHD participants, scores in the Achievement scale were also negatively correlated with symptoms of inattention (CAARS A, E and SWAN I). These findings provide evidence that disruption of the dopamine reward pathway is associated with motivation deficits in ADHD adults, which may contribute to attention deficits and supports the use of therapeutic interventions to enhance motivation in ADHD.",
"title": ""
},
{
"docid": "7f23d9ff9be0ee2c9a3eea7db44331db",
"text": "1. General orientation of the volume: towards an empirical revolution The collective volume Cognitive Linguistics: Current Applications and Future Perspectives brings together specific case studies and critical overviews of work in a variety of CL strands. Written by prominent researchers, the chapters of the volume thus provide the scientific community with an updated survey of recent research in Cognitive Linguistics. Most authors furthermore go beyond the more immediate scope of describing or exemplifying state-of-the-art research (e.g. by providing the reader with a generally accessible synthesis or a specialized case study) and explicitly address a number of pressing questions pertaining to future perspectives and future research agendas. Together with its companion volume Cognitive Linguistics: Basic Readings (Cognitive Linguistics Research 34, edited by Dirk Geeraerts), it constitutes a highly informative resource for linguists and scholars in neighbouring disciplines, and in general for any scholar wishing to become familiar with what Cognitive Linguistics is all about. Whereas Cognitive Linguistics: Basic Readings offers an introductory survey of the foun-dational concepts of Cognitive Linguistics, the present volume focuses on more recent theoretical developments, illustrates the many fields of application that CL already covers (both within linguistics and in an interdis-ciplinary environment), and identifies the future research trends that CL is now heading for. At the same time, the present volume is the very first issue in the new book series Applications of Cognitive Linguistics (ACL). In collaboration with its sister series Cognitive Linguistics Research, ACL offers a platform for high quality work which applies the rich framework developed in Cog-nitive Linguistics to a wide range of different fields of application. These 2 and still many other fields, often within an interdisciplinary framework. The goals of ACL will be summarised in section 3 of this introduction. First and foremost, however, as the subtitle suggests, the volume overviews and explores the major avenues of the cognitive linguistic enterprise at present and towards the future. Over the last two, perhaps even three, decades, Cognitive Linguistics has gradually but firmly established itself as a complete and innovating discipline, but certainly not one which for these reasons has ceased to evolve, nor to expand. The contributions in this volume testify to the existence of a number of different strands, the most important of which may be summarised as follows. First, a number of basic concepts (cf. the twelve cornerstones of Cognitive Linguistics described and exemplified in Cognitive Linguistics: Basic Readings) …",
"title": ""
},
{
"docid": "e92ae764c4ce9f7f7f1103d903bb53ec",
"text": "Applications and systems are constantly faced with decisions that require picking from a set of actions based on contextual information. Reinforcement-based learning algorithms such as contextual bandits can be very effective in these settings, but applying them in practice is fraught with technical debt, and no general system exists that supports them completely. We address this and create the first general system for contextual learning, called the Decision Service. Existing systems often suffer from technical debt that arises from issues like incorrect data collection and weak debuggability, issues we systematically address through our ML methodology and system abstractions. The Decision Service enables all aspects of contextual bandit learning using four system abstractions which connect together in a loop: explore (the decision space), log, learn, and deploy. Notably, our new explore and log abstractions ensure the system produces correct, unbiased data, which our learner uses for online learning and to enable real-time safeguards, all in a fully reproducible manner. The Decision Service has a simple user interface and works with a variety of applications: we present two live production deployments for content recommendation that achieved click-through improvements of 25-30%, another with 18% revenue lift in the landing page, and ongoing applications in tech support and machine failure handling. The service makes real-time decisions and learns continuously and scalably, while significantly lowering technical debt.",
"title": ""
},
{
"docid": "d771693809e966adc3656f58855fdda0",
"text": "A wide variety of crystalline nanowires (NWs) with outstanding mechanical properties have recently emerged. Measuring their mechanical properties and understanding their deformation mechanisms are of important relevance to many of their device applications. On the other hand, such crystalline NWs can provide an unprecedented platform for probing mechanics at the nanoscale. While challenging, the field of experimental mechanics of crystalline nanowires has emerged and seen exciting progress in the past decade. This review summarizes recent advances in this field, focusing on major experimental methods using atomic force microscope (AFM) and electron microscopes and key results on mechanics of crystalline nanowires learned from such experimental studies. Advances in several selected topics are discussed including elasticity, fracture, plasticity, and anelasticity. Finally, this review surveys some applications of crystalline nanowires such as flexible and stretchable electronics, nanocomposites, nanoelectromechanical systems (NEMS), energy harvesting and storage, and strain engineering, where mechanics plays a key role. [DOI: 10.1115/1.4035511]",
"title": ""
},
{
"docid": "84d39e615b8b674cee53741f87a733da",
"text": "Cyber Bullying, which often has a deeply negative impact on the victim, has grown as a serious issue among adolescents. To understand the phenomenon of cyber bullying, experts in social science have focused on personality, social relationships and psychological factors involving both the bully and the victim. Recently computer science researchers have also come up with automated methods to identify cyber bullying messages by identifying bullying-related keywords in cyber conversations. However, the accuracy of these textual feature based methods remains limited. In this work, we investigate whether analyzing social network features can improve the accuracy of cyber bullying detection. By analyzing the social network structure between users and deriving features such as number of friends, network embeddedness, and relationship centrality, we find that the detection of cyber bullying can be significantly improved by integrating the textual features with social network features.",
"title": ""
},
{
"docid": "81537ba56a8f0b3beb29a03ed3c74425",
"text": "About ten years ago, soon after the Web’s birth, Web “search engines” were first by word of mouth. Soon, however, automated search engines became a world wide phenomenon, especially AltaVista at the beginning. I was pleasantly surprised by the amount and diversity of information made accessible by the Web search engines even in the mid 1990’s. The growth of the available Web pages is beyond most, if not all, people’s imagination. The search engines enabled people to find information, facts, and references among these Web pages.",
"title": ""
}
] | scidocsrr |
403412e381e1a55ab2b9f5adad799608 | Near-Duplicate Video Retrieval with Deep Metric Learning | [
{
"docid": "affc663476dc4d5299de5f89f67e5f5a",
"text": "Many machine learning algorithms, such as K Nearest Neighbor (KNN), heavily rely on the distance metric for the input data patterns. Distance Metric learning is to learn a distance metric for the input space of data from a given collection of pair of similar/dissimilar points that preserves the distance relation among the training data. In recent years, many studies have demonstrated, both empirically and theoretically, that a learned metric can significantly improve the performance in classification, clustering and retrieval tasks. This paper surveys the field of distance metric learning from a principle perspective, and includes a broad selection of recent work. In particular, distance metric learning is reviewed under different learning conditions: supervised learning versus unsupervised learning, learning in a global sense versus in a local sense; and the distance matrix based on linear kernel versus nonlinear kernel. In addition, this paper discusses a number of techniques that is central to distance metric learning, including convex programming, positive semi-definite programming, kernel learning, dimension reduction, K Nearest Neighbor, large margin classification, and graph-based approaches.",
"title": ""
},
{
"docid": "34a6fe0c5183f19d4f25a99b3bcd205e",
"text": "In this paper, we first offer an overview of advances in the field of distance metric learning. Then, we empirically compare selected methods using a common experimental protocol. The number of distance metric learning algorithms proposed keeps growing due to their effectiveness and wide application. However, existing surveys are either outdated or they focus only on a few methods. As a result, there is an increasing need to summarize the obtained knowledge in a concise, yet informative manner. Moreover, existing surveys do not conduct comprehensive experimental comparisons. On the other hand, individual distance metric learning papers compare the performance of the proposed approach with only a few related methods and under different settings. This highlights the need for an experimental evaluation using a common and challenging protocol. To this end, we conduct face verification experiments, as this task poses significant challenges due to varying conditions during data acquisition. In addition, face verification is a natural application for distance metric learning because the encountered challenge is to define a distance function that: 1) accurately expresses the notion of similarity for verification; 2) is robust to noisy data; 3) generalizes well to unseen subjects; and 4) scales well with the dimensionality and number of training samples. In particular, we utilize well-tested features to assess the performance of selected methods following the experimental protocol of the state-of-the-art database labeled faces in the wild. A summary of the results is presented along with a discussion of the insights obtained and lessons learned by employing the corresponding algorithms.",
"title": ""
}
] | [
{
"docid": "c491e39bbfb38f256e770d730a50b2e1",
"text": "Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.",
"title": ""
},
{
"docid": "d7c0d9e43f8f894fbe21154c2a26c3fd",
"text": "Decision tree classification (DTC) is a widely used technique in data mining algorithms known for its high accuracy in forecasting. As technology has progressed and available storage capacity in modern computers increased, the amount of data available to be processed has also increased substantially, resulting in much slower induction and classification times. Many parallel implementations of DTC algorithms have already addressed the issues of reliability and accuracy in the induction process. In the classification process, larger amounts of data require proportionately more execution time, thus hindering the performance of legacy systems. We have devised a pipelined architecture for the implementation of axis parallel binary DTC that dramatically improves the execution time of the algorithm while consuming minimal resources in terms of area. Scalability is achieved when connected to a high-speed communication unit capable of performing data transfers at a rate similar to that of the DTC engine. We propose a hardware accelerated solution composed of parallel processing nodes capable of independently processing data from a streaming source. Each engine processes the data in a pipelined fashion to use resources more efficiently and increase the achievable throughput. The results show that this system is 3.5 times faster than the existing hardware implementation of classification.",
"title": ""
},
{
"docid": "1eea81ad47613c7cd436af451aea904d",
"text": "The Internet of Things (IoT) brings together a large variety of devices of different platforms, computational capacities and functionalities. The network heterogeneity and the ubiquity of IoT devices introduce increased demands on both security and privacy protection. Therefore, the cryptographic mechanisms must be strong enough to meet these increased requirements but, at the same time, they must be efficient enough for the implementation on constrained devices. In this paper, we present a detailed assessment of the performance of the most used cryptographic algorithms on constrained devices that often appear in IoT networks. We evaluate the performance of symmetric primitives, such as block ciphers, hash functions, random number generators, asymmetric primitives, such as digital signature schemes, and privacyenhancing schemes on various microcontrollers, smart-cards and mobile devices. Furthermore, we provide the analysis of the usability of upcoming schemes, such as the homomorphic encryption schemes, group signatures and attribute-based schemes. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "87222f419605df6e1d63d60bd26c5343",
"text": "Video Games are boring when they are too easy and frustrating when they are too hard. While most singleplayer games allow players to adjust basic difficulty (easy, medium, hard, insane), their overall level of challenge is often static in the face of individual player input. This lack of flexibility can lead to mismatches between player ability and overall game difficulty. In this paper, we explore the computational and design requirements for a dynamic difficulty adjustment system. We present a probabilistic method (drawn predominantly from Inventory Theory) for representing and reasoning about uncertainty in games. We describe the implementation of these techniques, and discuss how the resulting system can be applied to create flexible interactive experiences that adjust on the fly. Introduction Video games are designed to generate engaging experiences: suspenseful horrors, whimsical amusements, fantastic adventures. But unlike films, books, or televised media which often have similar experiential goals video games are interactive. Players create meaning by interacting with the games internal systems. One such system is inventory the stock of items that a player collects and carries throughout the game world. The relative abundance or scarcity of inventory items has a direct impact on the players experience. As such, games are explicitly designed to manipulate the exchange of resources between world and player. [Simpson, 2001] This network of producer-consumer relationships can be viewed as an economy or more broadly, as a dynamic system [Castronova, 2000, Luenberger, 79]. 1 Inventory items for first-person shooters include health, weapons, ammunition and power-ups like shielding or temporary invincibility. 2 A surplus of ammunition affords experimentation and shoot first tactics, while limited access to recovery items (like health packs) will promote a more cautious approach to threatening situations. Game developers iteratively refine these systems based on play testing feedback tweaking behaviors and settings until the game is balanced. While balancing, they often analyze systems intuitively by tracking specific identifiable patterns or types of dynamic activity. It is a difficult and time consuming process [Rollings and Adams, 2003]. While game balancing and tuning cant be automated, directed mathematical analysis can reveal deeper structures and relationships within a game system. With the right tools, researchers and developers can calculate relationships in less time, with greater accuracy. In this paper, we describe a first step towards such tools. Hamlet is a Dynamic Difficulty Adjustment (DDA) system built using Valves Half Life game engine. Using techniques drawn from Inventory Theory and Operations Research, Hamlet analyzes and adjust the supply and demand of game inventory in order to control overall game difficulty.",
"title": ""
},
{
"docid": "f3e5941be4543d5900d56c1a7d93d0ea",
"text": "These working notes summarize the different approaches we have explored in order to classify a corpus of tweets related to the 2015 Spanish General Election (COSET 2017 task from IberEval 2017). Two approaches were tested during the COSET 2017 evaluations: Neural Networks with Sentence Embeddings (based on TensorFlow) and N-gram Language Models (based on SRILM). Our results with these approaches were modest: both ranked above the “Most frequent baseline”, but below the “Bag-of-words + SVM” baseline. A third approach was tried after the COSET 2017 evaluation phase was over: Advanced Linear Models (based on fastText). Results measured over the COSET 2017 Dev and Test show that this approach is well above the “TF-IDF+RF” baseline.",
"title": ""
},
{
"docid": "80425b563740c048d2126b849b23498f",
"text": "Automatic determination of synonyms and/or semantically related words has various applications in Natural Language Processing. Two mainstream paradigms to date, lexicon-based and distributional approaches, both exhibit pros and cons with regard to coverage, complexity, and quality. In this paper, we propose three novel methods—two rule-based methods and one machine learning approach—to identify synonyms from definition texts in a machinereadable dictionary. Extracted synonyms are evaluated in two extrinsic experiments and one intrinsic experiment. Evaluation results show that our pattern-based approach achieves best performance in one of the experiments and satisfactory results in the other, comparable to corpus-based state-of-the-art results.",
"title": ""
},
{
"docid": "05d8383eb6b1c6434f75849859c35fd0",
"text": "This paper proposes a robust approach for image based floor detection and segmentation from sequence of images or video. In contrast to many previous approaches, which uses a priori knowledge of the surroundings, our method uses combination of modified sparse optical flow and planar homography for ground plane detection which is then combined with graph based segmentation for extraction of floor from images. We also propose a probabilistic framework which makes our method adaptive to the changes in the surroundings. We tested our algorithm on several common indoor environment scenarios and were able to extract floor even under challenging circumstances. We obtained extremely satisfactory results in various practical scenarios such as where the floor and non floor areas are of same color, in presence of textured flooring, and where illumination changes are steep.",
"title": ""
},
{
"docid": "81919bc432dd70ed3e48a0122d91b9e4",
"text": "Artemisinin resistance in Plasmodium falciparum has emerged as a major threat for malaria control and elimination worldwide. Mutations in the Kelch propeller domain of PfK13 are the only known molecular markers for artemisinin resistance in this parasite. Over 100 non-synonymous mutations have been identified in PfK13 from various malaria endemic regions. This study aimed to investigate the genetic diversity of PvK12, the Plasmodium vivax ortholog of PfK13, in parasite populations from Southeast Asia, where artemisinin resistance in P. falciparum has emerged. The PvK12 sequences in 120 P. vivax isolates collected from Thailand (22), Myanmar (32) and China (66) between 2004 and 2008 were obtained and 353 PvK12 sequences from worldwide populations were retrieved for further analysis. These PvK12 sequences revealed a very low level of genetic diversity (π = 0.00003) with only three single nucleotide polymorphisms (SNPs). Of these three SNPs, only G581R is nonsynonymous. The synonymous mutation S88S is present in 3% (1/32) of the Myanmar samples, while G704G and G581R are present in 1.5% (1/66) and 3% (2/66) of the samples from China, respectively. None of the mutations observed in the P. vivax samples were associated with artemisinin resistance in P. falciparum. Furthermore, analysis of 473 PvK12 sequences from twelve worldwide P. vivax populations confirmed the very limited polymorphism in this gene and detected only five distinct haplotypes. The PvK12 sequences from global P. vivax populations displayed very limited genetic diversity indicating low levels of baseline polymorphisms of PvK12 in these areas.",
"title": ""
},
{
"docid": "62a8548527371acb657d9552ab41d699",
"text": "This paper proposes a novel dynamic gait of locomotion for hexapedal robots which enables them to crawl forward, backward, and rotate using a single actuator. The gait exploits the compliance difference between the two sides of the tripods, to generate clockwise or counter clockwise rotation by controlling the acceleration of the robot. The direction of turning depends on the configuration of the legs -tripod left of right- and the direction of the acceleration. Alternating acceleration in successive steps allows for continuous rotation in the desired direction. An analysis of the locomotion is presented as a function of the mechanical properties of the robot and the contact with the surface. A numerical simulation was performed for various conditions of locomotion. The results of the simulation and analysis were compared and found to be in excellent match.",
"title": ""
},
{
"docid": "9bda77af84249bdf4600b8f8617fced9",
"text": "Globalization is a key challenge to public health, especially in developing countries, but the linkages between globalization and health are complex. Although a growing amount of literature has appeared on the subject, it is piecemeal, and suffers from a lack of an agreed framework for assessing the direct and indirect health effects of different aspects of globalization. This paper presents a conceptual framework for the linkages between economic globalization and health, with the intention that it will serve as a basis for synthesizing existing relevant literature, identifying gaps in knowledge, and ultimately developing national and international policies more favourable to health. The framework encompasses both the indirect effects on health, operating through the national economy, household economies and health-related sectors such as water, sanitation and education, as well as more direct effects on population-level and individual risk factors for health and on the health care system. Proposed also is a set of broad objectives for a programme of action to optimize the health effects of economic globalization. The paper concludes by identifying priorities for research corresponding with the five linkages identified as critical to the effects of globalization on health.",
"title": ""
},
{
"docid": "57d6b4c717ce071c17a55c12a52bf53f",
"text": "College of Information Science and Engineering, Ritsumeikan University, 1-1-1 Noji Higashi, Kusatsu, Shiga 525-8577, Japan Department of Mechanical Engineering and Intelligent Systems, The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu-shi, Tokyo 182-8585, Japan Faculty of Computer Science and Systems Engineering, Okayama Prefectural University, 111 Kubogi, Soja-shi, Okayama 719-1197, Japan Department of Intermedia Art and Science, School of Fundamental Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo 169-8555, Japan Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology (AIST), AIST Tsukuba Central 1, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8560, Japan",
"title": ""
},
{
"docid": "558f66ec89c03f99b1516ccde6791566",
"text": "In recent years, there has been a better understanding of the aging process. In addition to changes occurring in the skin envelope, significant changes occur in the subcutaneous fat and craniofacial skeleton. This has led to a paradigm shift in the therapeutic approach to facial rejuvenation. Along with soft tissue repositioning, volumizing the aging face has been found to optimize the result and achieve a more natural appearance. Early in the aging process, when there has not been a significant change to the face requiring surgical intervention, fillers alone can provide minimally invasive facial rejuvenation through volumizing. Multiple injectable soft tissue fillers and biostimulators are currently available to provide facial volume such as hyaluronic acid, calcium hydroxylapatite, poly-L-lactic acid, polymethyl methacrylate, and silicone. A discussion of the morphological changes seen in the aging face, the properties of these products, and key technical concepts will be highlighted to permit optimum results when performing facial volumizing of the upper, middle, and lower thirds of the face. These fillers can act as a dress rehearsal for these patients considering structural fat grafting.",
"title": ""
},
{
"docid": "c61470e2c1310a9c6fa09dc96659d4ab",
"text": "Selenium IDE Locating Elements There is a great responsibility for developers and testers to ensure that web software exhibits high reliability and speed. Somewhat recently, the software community has seen a rise in the usage of AJAX in web software development to achieve this goal. The advantage of AJAX applications is that they are typically very responsive. The vEOC is an Emergency Management Training application which requires this level of interactivity. Selenium is great in that it is an open source testing tool that can handle the amount of JavaScript present in AJAX applications, and even gives the tester the freedom to add their own features. Since web software is so frequently modified, the main goal for any test developer is to create sustainable tests. How can Selenium tests be made more maintainable?",
"title": ""
},
{
"docid": "134d2671fa44793c8969acb50c71c5c0",
"text": "OBJECTIVES\nTransferrin is a glycosylated protein responsible for transporting iron, an essential metal responsible for proper fetal development. Tobacco is a heavily used xenobiotic having a negative impact on the human body and pregnancy outcomes. Aims of this study was to examine the influence of tobacco smoking on transferrin sialic acid residues and their connection with fetal biometric parameters in women with iron-deficiency.\n\n\nMETHODS\nThe study involved 173 samples from pregnant women, smokers and non-smokers, iron deficient and not. Transferrin sialylation was determined by capillary electrophoresis. The cadmium (Cd) level was measured by atomic absorption and the sialic acid concentration by the resorcinol method.\n\n\nRESULTS\nWomen with iron deficiencies who smoked gave birth earlier than non-smoking, non-iron-deficient women. The Cd level, but not the cotinine level, was positively correlated with transferrin sialylation in the blood of iron-deficient women who smoked; 3-, 4-, 5- and 6-sialoTf correlated negatively with fetal biometric parameters in the same group.\n\n\nCONCLUSION\nIt has been shown the relationship between Cd from tobacco smoking and fetal biometric parameters observed only in the iron deficient group suggests an additive effect of these two factors, and indicate that mothers with anemia may be more susceptible to Cd toxicity and disturbed fetal development.",
"title": ""
},
{
"docid": "2b0ec7b7b80c3b2653a04e17432c3180",
"text": "Traditional approaches to data mining are based on an assumption that the process that generated or is generating a data stream is static. Although this assumption holds for many applications, it does not hold for many others. Consider systems that build models for identifying important e-mail. Through interaction with and feedback from a user, such a system might determine that particular e-mail addresses and certain words of the subject are useful for predicting the importance of email. However, when the user or the persons sending email start other projects or take on additional responsibilities, what constitutes important e-mail will change. That is, the concept of important e-mail will change or drift. Such a system must be able to adapt its model or concept description in response to this change. Coping with or tracking concept drift is important for other applications, such as market-basket analysis, intrusion detection, and intelligent user interfaces, to name a few.",
"title": ""
},
{
"docid": "129a85f7e611459cf98dc7635b44fc56",
"text": "Pain in the oral and craniofacial system represents a major medical and social problem. Indeed, a U.S. Surgeon General’s report on orofacial health concludes that, ‘‘. . .oral health means much more than healthy teeth. It means being free of chronic oral-facial pain conditions. . .’’ [172]. Community-based surveys indicate that many subjects commonly report pain in the orofacial region, with estimates of >39 million, or 22% of Americans older than 18 years of age, in the United States alone [108]. Other population-based surveys conducted in the United Kingdom [111,112], Germany [91], or regional pain care centers in the United States [54] report similar occurrence rates [135]. Importantly, chronic widespread body pain, patient sex and age, and psychosocial factors appear to serve as risk factors for chronic orofacial pain [1,2,92,99,138]. In addition to its high degree of prevalence, the reported intensities of various orofacial pain conditions are similar to that observed with many spinal pain disorders (Fig. 1). Moreover, orofacial pain is derived from many unique target tissues, such as the meninges, cornea, tooth pulp, oral/ nasal mucosa, and temporomandibular joint (Fig. 2), and thus has several unique physiologic characteristics compared with the spinal nociceptive system [23]. Given these considerations, it is not surprising that accurate diagnosis and effective management of orofacial pain conditions represents a significant health care problem. Publications in the field of orofacial pain demonstrate a steady increase over the last several decades (Fig. 3). This is a complex literature; a recent bibliometric analysis of orofacial pain articles published in 2004–2005 indicated that 975 articles on orofacial pain were published in 275 journals from authors representing 54 countries [142]. Thus, orofacial pain disorders represent a complex constellation of conditions with an equally diverse literature base. Accordingly, this review will focus on a summary of major research foci on orofacial pain without attempting to provide a comprehensive review of the entire literature.",
"title": ""
},
{
"docid": "8204e456dfb8d0f8dc39a20166df9798",
"text": "A sketch combination system is introduced and tested: a crowd of 1047 participated in an iterative process of design, evaluation and combination. Specifically, participants in a crowdsourcing marketplace sketched chairs for children. One crowd created a first generation of chairs, and then successive crowds created new generations by combining the chairs made by previous crowds. Other participants evaluated the chairs. The crowd judged the chairs from the third generation more creative than those from the first generation. An analysis of the design evolution shows that participants inherited and modified presented features, and also added new features. These findings suggest that crowd based design processes may be effective, and point the way toward computer-human interactions that might further encourage crowd creativity.",
"title": ""
},
{
"docid": "05152e4d3b77ef7208f51f1820b6db29",
"text": "Current estimates of mobile data traffic in the years to come foresee a 1,000 increase of mobile data traffic in 2020 with respect to 2010, or, equivalently, a doubling of mobile data traffic every year. This unprecedented growth demands a significant increase of wireless network capacity. Even if the current evolution of fourth-generation (4G) systems and, in particular, the advancements of the long-term evolution (LTE) standardization process foresees a significant capacity improvement with respect to third-generation (3G) systems, the European Telecommunications Standards Institute (ETSI) has established a roadmap toward the fifth-generation (5G) system, with the aim of deploying a commercial system by the year 2020 [1]. The European Project named ?Mobile and Wireless Communications Enablers for the 2020 Information Society? (METIS), launched in 2012, represents one of the first international and large-scale research projects on fifth generation (5G) [2]. In parallel with this unparalleled growth of data traffic, our everyday life experience shows an increasing habit to run a plethora of applications specifically devised for mobile devices, (smartphones, tablets, laptops)for entertainment, health care, business, social networking, traveling, news, etc. However, the spectacular growth in wireless traffic generated by this lifestyle is not matched with a parallel improvement on mobile handsets? batteries, whose lifetime is not improving at the same pace [3]. This determines a widening gap between the energy required to run sophisticated applications and the energy available on the mobile handset. A possible way to overcome this obstacle is to enable the mobile devices, whenever possible and convenient, to offload their most energy-consuming tasks to nearby fixed servers. This strategy has been studied for a long time and is reported in the literature under different names, such as cyberforaging [4] or computation offloading [5], [6]. In recent years, a strong impulse to computation offloading has come through cloud computing (CC), which enables the users to utilize resources on demand. The resources made available by a cloud service provider are: 1) infrastructures, such as network devices, storage, servers, etc., 2) platforms, such as operating systems, offering an integrated environment for developing and testing custom applications, and 3) software, in the form of application programs. These three kinds of services are labeled, respectively, as infrastructure as a service, platform as a service, and software as a service. In particular, one of the key features of CC is virtualization, which makes it possible to run multiple operating systems and multiple applications over the same machine (or set of machines), while guaranteeing isolation and protection of the programs and their data. Through virtualization, the number of virtual machines (VMs) can scale on ?demand, thus improving the overall system computational efficiency. Mobile CC (MCC) is a specific case of CC where the user accesses the cloud services through a mobile handset [5]. The major limitations of today?s MCC are the energy consumption associated to the radio access and the latency experienced in reaching the cloud provider through a wide area network (WAN). Mobile users located at the edge of macrocellular networks are particularly disadvantaged in terms of power consumption and, furthermore, it is very difficult to control latency over a WAN. 
As pointed out in [7]?[9], humans are acutely sensitive to delay and jitter: as latency increases, interactive response suffers. Since the interaction times foreseen in 5G systems, in particular in the so-called tactile Internet [10], are quite small (in the order of milliseconds), a strict latency control must be somehow incorporated in near future MCC. Meeting this constraint requires a deep ?rethinking of the overall service chain, from the physical layer up to virtualization.",
"title": ""
},
{
"docid": "947fdb3233e57b5df8ce92df31f2a0be",
"text": "Recent work by Cohen et al. [1] has achieved state-of-the-art results for learning spherical images in a rotation invariant way by using ideas from group representation theory and noncommutative harmonic analysis. In this paper we propose a generalization of this work that generally exhibits improved performace, but from an implementation point of view is actually simpler. An unusual feature of the proposed architecture is that it uses the Clebsch–Gordan transform as its only source of nonlinearity, thus avoiding repeated forward and backward Fourier transforms. The underlying ideas of the paper generalize to constructing neural networks that are invariant to the action of other compact groups.",
"title": ""
}
] | scidocsrr |
62987e20e97911c7286ff5be3aae3f28 | Learning to Train a Binary Neural Network | [
{
"docid": "40a87654ac33c46f948204fd5c7ef4c1",
"text": "We introduce a novel scheme to train binary convolutional neural networks (CNNs) – CNNs with weights and activations constrained to {-1,+1} at run-time. It has been known that using binary weights and activations drastically reduce memory size and accesses, and can replace arithmetic operations with more efficient bitwise operations, leading to much faster test-time inference and lower power consumption. However, previous works on binarizing CNNs usually result in severe prediction accuracy degradation. In this paper, we address this issue with two major innovations: (1) approximating full-precision weights with the linear combination of multiple binary weight bases; (2) employing multiple binary activations to alleviate information loss. The implementation of the resulting binary CNN, denoted as ABC-Net, is shown to achieve much closer performance to its full-precision counterpart, and even reach the comparable prediction accuracy on ImageNet and forest trail datasets, given adequate binary weight bases and activations.",
"title": ""
},
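The ABC-Net passage above hinges on approximating full-precision weights with a linear combination of binary bases. Below is a minimal sketch of that idea using a simple greedy residual scheme rather than ABC-Net's exact base construction; the function name, base count, and use of NumPy are illustrative assumptions.

```python
import numpy as np

def binary_bases(w, n_bases=3):
    """Greedily approximate w with sum_i alpha_i * B_i, where B_i has entries in {-1,+1}.

    Note: a simplified residual scheme for illustration; ABC-Net itself derives
    its bases from shifted/scaled versions of the weights.
    """
    residual = w.astype(np.float64).copy()
    alphas, bases = [], []
    for _ in range(n_bases):
        b = np.where(residual >= 0, 1.0, -1.0)   # binary base in {-1,+1}
        a = np.abs(residual).mean()              # least-squares scale for a sign base
        alphas.append(a)
        bases.append(b)
        residual -= a * b                        # next base fits the remaining error
    return np.array(alphas), np.stack(bases)

# Reconstruction error drops as more binary bases are added, which is the effect
# the passage relies on to close the gap to the full-precision network.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
for k in (1, 2, 3, 5):
    alphas, bases = binary_bases(w, n_bases=k)
    approx = np.tensordot(alphas, bases, axes=1)
    print(k, np.linalg.norm(w - approx) / np.linalg.norm(w))
```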
{
"docid": "b9aa1b23ee957f61337e731611a6301a",
"text": "We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFatNet opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 4-bit gradients to get 47% top-1 accuracy on ImageNet validation set.1 The DoReFa-Net AlexNet model is released publicly.",
"title": ""
}
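DoReFa-Net's forward quantizers have a compact closed form. Below is a minimal NumPy sketch of the k-bit uniform quantizer and the weight/activation transforms as summarized in the abstract; the stochastic gradient quantization and the straight-through estimator used during backpropagation are only noted in comments, since they require autograd support from a deep-learning framework.

```python
import numpy as np

def quantize_k(x, k):
    """Uniform k-bit quantization of x in [0, 1] to the grid {0, 1/(2^k-1), ..., 1}."""
    n = 2 ** k - 1
    return np.round(x * n) / n   # the backward pass would use a straight-through estimator

def quantize_weights(w, k):
    """DoReFa-style k-bit weights: tanh-normalize to [0, 1], quantize, map back to [-1, 1]."""
    t = np.tanh(w)
    x = t / (2 * np.max(np.abs(t))) + 0.5
    return 2 * quantize_k(x, k) - 1

def quantize_activations(a, k):
    """k-bit activations: clip to [0, 1] first, then quantize."""
    return quantize_k(np.clip(a, 0.0, 1.0), k)

w = np.random.randn(5)
print(quantize_weights(w, k=1))       # 1-bit weights land in {-1, +1}
print(quantize_activations(w, k=2))   # 2-bit activations in {0, 1/3, 2/3, 1}
```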
] | [
{
"docid": "571009136d227f8df3b8caa125322b61",
"text": "Need an excellent electronic book? fuzzy graphs and fuzzy hypergraphs by , the most effective one! Wan na get it? Discover this outstanding e-book by here currently. Download and install or review online is readily available. Why we are the most effective website for downloading this fuzzy graphs and fuzzy hypergraphs Of course, you could select guide in numerous file types and media. Look for ppt, txt, pdf, word, rar, zip, and also kindle? Why not? Get them below, currently!",
"title": ""
},
{
"docid": "f7a6cc4ebc1d2657175301dc05c86a7b",
"text": "Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results.",
"title": ""
},
{
"docid": "3fc3ea7bb6c5342bcbc9d046b0a2537f",
"text": "We need to make substantial changes to how we conduct research. First, in response to heightened concern that our published research literature is incomplete and untrustworthy, we need new requirements to ensure research integrity. These include prespecification of studies whenever possible, avoidance of selection and other inappropriate data-analytic practices, complete reporting, and encouragement of replication. Second, in response to renewed recognition of the severe flaws of null-hypothesis significance testing (NHST), we need to shift from reliance on NHST to estimation and other preferred techniques. The new statistics refers to recommended practices, including estimation based on effect sizes, confidence intervals, and meta-analysis. The techniques are not new, but adopting them widely would be new for many researchers, as well as highly beneficial. This article explains why the new statistics are important and offers guidance for their use. It describes an eight-step new-statistics strategy for research with integrity, which starts with formulation of research questions in estimation terms, has no place for NHST, and is aimed at building a cumulative quantitative discipline.",
"title": ""
},
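As a concrete counterpart to the estimation-oriented recommendations above, the snippet below computes a mean difference, Cohen's d, and a Welch-style confidence interval. It is a generic illustration of "the new statistics" workflow rather than the article's own procedure; the function name and toy data are assumptions.

```python
import numpy as np
from scipy import stats

def mean_diff_ci(a, b, alpha=0.05):
    """Mean difference, Cohen's d, and a Welch-style (1-alpha) CI for the difference."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a.mean() - b.mean()
    va, vb, na, nb = a.var(ddof=1), b.var(ddof=1), len(a), len(b)
    se = np.sqrt(va / na + vb / nb)
    # Welch-Satterthwaite degrees of freedom
    df = se**4 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    pooled_sd = np.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    d = diff / pooled_sd                                  # standardized effect size
    return diff, d, (diff - t_crit * se, diff + t_crit * se)

rng = np.random.default_rng(1)
print(mean_diff_ci(rng.normal(0.5, 1, 40), rng.normal(0.0, 1, 40)))
```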
{
"docid": "e13b4b92c639a5b697356466e00e05c3",
"text": "In fashion retailing, the display of product inventory at the store is important to capture consumers’ attention. Higher inventory levels might allow more attractive displays and thus increase sales, in addition to avoiding stock-outs. We develop a choice model where product demand is indeed affected by inventory, and controls for product and store heterogeneity, seasonality, promotions and potential unobservable shocks in each market. We empirically test the model with daily traffic, inventory and sales data from a large retailer, at the store-day-product level. We find that the impact of inventory level on sales is positive and highly significant, even in situations of extremely high service level. The magnitude of this effect is large: each 1% increase in product-level inventory at the store increases sales of 0.58% on average. This supports the idea that inventory has a strong role in helping customers choose a particular product within the assortment. We finally describe how a retailer should optimally decide its inventory levels within a category and describe the properties of the optimal solution. Applying such optimization to our data set yields consistent and significant revenue improvements, of more than 10% for any date and store compared to current practices. Submitted: April 6, 2016. Revised: May 17, 2017",
"title": ""
},
{
"docid": "838a79ec0376a23ac24a462a00d140dc",
"text": "Bounding the generalization error of learning algorithms has a long history, which yet falls short in explaining various generalization successes including those of deep learning. Two important difficulties are (i) exploiting the dependencies between the hypotheses, (ii) exploiting the dependence between the algorithm’s input and output. Progress on the first point was made with the chaining method, originating from the work of Kolmogorov, and used in the VC-dimension bound. More recently, progress on the second point was made with the mutual information method by Russo and Zou ’15. Yet, these two methods are currently disjoint. In this paper, we introduce a technique to combine chaining and mutual information methods, to obtain a generalization bound that is both algorithm-dependent and that exploits the dependencies between the hypotheses. We provide an example in which our bound significantly outperforms both the chaining and the mutual information bounds. As a corollary, we tighten Dudley’s inequality when the learning algorithm chooses its output from a small subset of hypotheses with high probability.",
"title": ""
},
{
"docid": "aaf81989a3d1081baff7aea34b0b97f1",
"text": "Two-dimensional contingency or co-occurrence tables arise frequently in important applications such as text, web-log and market-basket data analysis. A basic problem in contingency table analysis is co-clustering: simultaneous clustering of the rows and columns. A novel theoretical formulation views the contingency table as an empirical joint probability distribution of two discrete random variables and poses the co-clustering problem as an optimization problem in information theory---the optimal co-clustering maximizes the mutual information between the clustered random variables subject to constraints on the number of row and column clusters. We present an innovative co-clustering algorithm that monotonically increases the preserved mutual information by intertwining both the row and column clusterings at all stages. Using the practical example of simultaneous word-document clustering, we demonstrate that our algorithm works well in practice, especially in the presence of sparsity and high-dimensionality.",
"title": ""
},
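The co-clustering objective described above, preserving the mutual information between the clustered row and column variables, can be evaluated directly from a contingency table. The sketch below aggregates a joint distribution according to row and column cluster assignments and reports the mutual information before and after clustering; the alternating reassignment algorithm itself is not reproduced, and the toy table is an assumption.

```python
import numpy as np

def mutual_information(p):
    """I(X;Y) in bits for a joint probability table p (rows: X, columns: Y)."""
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / (px @ py)[mask])))

def coclustered_mi(p, row_labels, col_labels, k, l):
    """Mutual information of the table aggregated into k row clusters and l column clusters."""
    q = np.zeros((k, l))
    for i, r in enumerate(row_labels):
        for j, c in enumerate(col_labels):
            q[r, c] += p[i, j]
    return mutual_information(q)

counts = np.array([[5., 5., 0., 0.],
                   [4., 6., 0., 0.],
                   [0., 0., 7., 3.],
                   [0., 0., 2., 8.]])
p = counts / counts.sum()
rows, cols = [0, 0, 1, 1], [0, 0, 1, 1]
# Co-clustering seeks assignments that keep the clustered MI as close as possible to the original.
print(mutual_information(p), coclustered_mi(p, rows, cols, 2, 2))
```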
{
"docid": "2b30506690acbae9240ef867e961bc6c",
"text": "Background Breast milk can turn pink with Serratia marcescens colonization, this bacterium has been associated with several diseases and even death. It is seen most commonly in the intensive care settings. Discoloration of the breast milk can lead to premature termination of nursing. We describe two cases of pink-colored breast milk in which S. marsescens was isolated from both the expressed breast milk. Antimicrobial treatment was administered to the mothers. Return to breastfeeding was successful in both the cases. Conclusions Pink breast milk is caused by S. marsescens colonization. In such cases,early recognition and treatment before the development of infection is recommended to return to breastfeeding.",
"title": ""
},
{
"docid": "5ed1f4c5f554a29de926f6d4980cda89",
"text": "Capsule Networks (CapsNet) are recently proposed multi-stage computational models specialized for entity representation and discovery in image data. CapsNet employs iterative routing that shapes how the information cascades through different levels of interpretations. In this work, we investigate i) how the routing affects the CapsNet model fitting, ii) how the representation by capsules helps discover global structures in data distribution and iii) how learned data representation adapts and generalizes to new tasks. Our investigation shows: i) routing operation determines the certainty with which one layer of capsules pass information to the layer above, and the appropriate level of certainty is related to the model fitness, ii) in a designed experiment using data with a known 2D structure, capsule representations allow more meaningful 2D manifold embedding than neurons in a standard CNN do and iii) compared to neurons of standard CNN, capsules of successive layers are less coupled and more adaptive to new data distribution.",
"title": ""
},
{
"docid": "631cd44345606641454e9353e071f2c5",
"text": "Microblogs are rich sources of information because they provide platforms for users to share their thoughts, news, information, activities, and so on. Twitter is one of the most popular microblogs. Twitter users often use hashtags to mark specific topics and to link them with related tweets. In this study, we investigate the relationship between the music listening behaviors of Twitter users and a popular music ranking service by comparing information extracted from tweets with music-related hashtags and the Billboard chart. We collect users' music listening behavior from Twitter using music-related hashtags (e.g., #nowplaying). We then build a predictive model to forecast the Billboard rankings and hit music. The results show that the numbers of daily tweets about a specific song and artist can be effectively used to predict Billboard rankings and hits. This research suggests that users' music listening behavior on Twitter is highly correlated with general music trends and could play an important role in understanding consumers' music consumption patterns. In addition, we believe that Twitter users' music listening behavior can be applied in the field of Music Information Retrieval (MIR).",
"title": ""
},
{
"docid": "d3214d24911a5e42855fd1a53516d30b",
"text": "This paper extends the face detection framework proposed by Viola and Jones 2001 to handle profile views and rotated faces. As in the work of Rowley et al 1998. and Schneiderman et al. 2000, we build different detectors for different views of the face. A decision tree is then trained to determine the viewpoint class (such as right profile or rotated 60 degrees) for a given window of the image being examined. This is similar to the approach of Rowley et al. 1998. The appropriate detector for that viewpoint can then be run instead of running all detectors on all windows. This technique yields good results and maintains the speed advantage of the Viola-Jones detector. Shown as a demo at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 18, 2003 This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c ©Mitsubishi Electric Research Laboratories, Inc., 2003 201 Broadway, Cambridge, Massachusetts 02139 Publication History:– 1. First printing, TR2003-96, July 2003 Fast Multi-view Face Detection Michael J. Jones Paul Viola mjones@merl.com viola@microsoft.com Mitsubishi Electric Research Laboratory Microsoft Research 201 Broadway One Microsoft Way Cambridge, MA 02139 Redmond, WA 98052",
"title": ""
},
{
"docid": "c43785187ce3c4e7d1895b628f4a2df3",
"text": "In this paper we focus on the connection between age and language use, exploring age prediction of Twitter users based on their tweets. We discuss the construction of a fine-grained annotation effort to assign ages and life stages to Twitter users. Using this dataset, we explore age prediction in three different ways: classifying users into age categories, by life stages, and predicting their exact age. We find that an automatic system achieves better performance than humans on these tasks and that both humans and the automatic systems have difficulties predicting the age of older people. Moreover, we present a detailed analysis of variables that change with age. We find strong patterns of change, and that most changes occur at young ages.",
"title": ""
},
{
"docid": "1ec1fc8aabb8f7880bfa970ccbc45913",
"text": "Several isolates of Gram-positive, acidophilic, moderately thermophilic, ferrous-iron- and mineral-sulphide-oxidizing bacteria were examined to establish unequivocally the characteristics of Sulfobacillus-like bacteria. Two species were evident: Sulfobacillus thermosulfidooxidans with 48-50 mol% G+C and Sulfobacillus acidophilus sp. nov. with 55-57 mol% G+C. Both species grew autotrophically and mixotrophically on ferrous iron, on elemental sulphur in the presence of yeast extract, and heterotrophically on yeast extract. Autotrophic growth on sulphur was consistently obtained only with S. acidophilus.",
"title": ""
},
{
"docid": "a8aa8c24c794bc6187257d264e2586a0",
"text": "Bayesian optimization is a powerful framework for minimizing expensive objective functions while using very few function evaluations. It has been successfully applied to a variety of problems, including hyperparameter tuning and experimental design. However, this framework has not been extended to the inequality-constrained optimization setting, particularly the setting in which evaluating feasibility is just as expensive as evaluating the objective. Here we present constrained Bayesian optimization, which places a prior distribution on both the objective and the constraint functions. We evaluate our method on simulated and real data, demonstrating that constrained Bayesian optimization can quickly find optimal and feasible points, even when small feasible regions cause standard methods to fail.",
"title": ""
},
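A common realization of the constrained Bayesian optimization idea above is to weight the usual expected-improvement acquisition by the model's probability that the constraint is satisfied. The sketch below shows only that acquisition computation, assuming posterior means and standard deviations from the objective and constraint surrogates are already available; it illustrates the idea rather than reproducing the authors' implementation, and the toy numbers are assumptions.

```python
import numpy as np
from scipy.stats import norm

def constrained_ei(mu_f, sigma_f, best_feasible, mu_c, sigma_c):
    """Expected improvement (minimization) times probability of feasibility (c(x) <= 0).

    mu_f, sigma_f : posterior mean/std of the objective at candidate points
    mu_c, sigma_c : posterior mean/std of the constraint at the same points
    best_feasible : best (lowest) objective value observed at a feasible point
    """
    sigma_f = np.maximum(sigma_f, 1e-12)
    z = (best_feasible - mu_f) / sigma_f
    ei = sigma_f * (z * norm.cdf(z) + norm.pdf(z))       # standard expected improvement
    prob_feasible = norm.cdf((0.0 - mu_c) / np.maximum(sigma_c, 1e-12))
    return ei * prob_feasible                             # down-weight likely-infeasible points

# Pick the candidate maximizing the acquisition.
mu_f = np.array([0.2, -0.1, 0.05]); sigma_f = np.array([0.3, 0.4, 0.1])
mu_c = np.array([-1.0, 0.5, -0.2]); sigma_c = np.array([0.2, 0.2, 0.2])
print(np.argmax(constrained_ei(mu_f, sigma_f, best_feasible=0.0, mu_c=mu_c, sigma_c=sigma_c)))
```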
{
"docid": "dbcef163643232313207cd45402158de",
"text": "Every industry has significant data output as a product of their working process, and with the recent advent of big data mining and integrated data warehousing it is the case for a robust methodology for assessing the quality for sustainable and consistent processing. In this paper a review is conducted on Data Quality (DQ) in multiple domains in order to propose connections between their methodologies. This critical review suggests that within the process of DQ assessment of heterogeneous data sets, not often are they treated as separate types of data in need of an alternate data quality assessment framework. We discuss the need for such a directed DQ framework and the opportunities that are foreseen in this research area and propose to address it through degrees of heterogeneity.",
"title": ""
},
{
"docid": "03aec14861b2b1b4e6f091dc77913a5b",
"text": "Taxonomy is indispensable in understanding natural language. A variety of large scale, usage-based, data-driven lexical taxonomies have been constructed in recent years. Hypernym-hyponym relationship, which is considered as the backbone of lexical taxonomies can not only be used to categorize the data but also enables generalization. In particular, we focus on one of the most prominent properties of the hypernym-hyponym relationship, namely, transitivity, which has a significant implication for many applications. We show that, unlike human crafted ontologies and taxonomies, transitivity does not always hold in data-driven lexical taxonomies. We introduce a supervised approach to detect whether transitivity holds for any given pair of hypernym-hyponym relationships. Besides solving the inferencing problem, we also use the transitivity to derive new hypernym-hyponym relationships for data-driven lexical taxonomies. We conduct extensive experiments to show the effectiveness of our approach.",
"title": ""
},
{
"docid": "0bbfd07d0686fc563f156d75d3672c7b",
"text": "In this paper, we provide a comprehensive survey of the mixture of experts (ME). We discuss the fundamental models for regression and classification and also their training with the expectation-maximization algorithm. We follow the discussion with improvements to the ME model and focus particularly on the mixtures of Gaussian process experts. We provide a review of the literature for other training methods, such as the alternative localized ME training, and cover the variational learning of ME in detail. In addition, we describe the model selection literature which encompasses finding the optimum number of experts, as well as the depth of the tree. We present the advances in ME in the classification area and present some issues concerning the classification model. We list the statistical properties of ME, discuss how the model has been modified over the years, compare ME to some popular algorithms, and list several applications. We conclude our survey with future directions and provide a list of publicly available datasets and a list of publicly available software that implement ME. Finally, we provide examples for regression and classification. We believe that the study described in this paper will provide quick access to the relevant literature for researchers and practitioners who would like to improve or use ME, and that it will stimulate further studies in ME.",
"title": ""
},
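To make the basic ME model surveyed above concrete, here is a minimal NumPy forward pass for a mixture of linear experts with a softmax gate; training by EM or gradient descent, as discussed in the passage, is omitted, and all shapes and names are illustrative assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def moe_predict(x, gate_w, expert_w):
    """x: (n, d) inputs; gate_w: (d, m) gating weights; expert_w: (m, d) one linear expert per row."""
    gates = softmax(x @ gate_w)              # (n, m) mixing coefficients per input
    expert_out = x @ expert_w.T              # (n, m) each expert's prediction
    return (gates * expert_out).sum(axis=1)  # gate-weighted combination

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))
gate_w = rng.normal(size=(3, 4))      # 4 experts
expert_w = rng.normal(size=(4, 3))
print(moe_predict(x, gate_w, expert_w).shape)   # (8,)
```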
{
"docid": "719458301e92f1c5141971ea8a21342b",
"text": "In the 65 years since its formal specification, information theory has become an established statistical paradigm, providing powerful tools for quantifying probabilistic relationships. Behavior analysis has begun to adopt these tools as a novel means of measuring the interrelations between behavior, stimuli, and contingent outcomes. This approach holds great promise for making more precise determinations about the causes of behavior and the forms in which conditioning may be encoded by organisms. In addition to providing an introduction to the basics of information theory, we review some of the ways that information theory has informed the studies of Pavlovian conditioning, operant conditioning, and behavioral neuroscience. In addition to enriching each of these empirical domains, information theory has the potential to act as a common statistical framework by which results from different domains may be integrated, compared, and ultimately unified.",
"title": ""
},
{
"docid": "72d0731d0fc4f32b116afa207c9aefdd",
"text": "Internet of Things (IoT) is based on a wireless network that connects a huge number of smart objects, products, smart devices, and people. It has another name which is Web of Things (WoT). IoT uses standards and protocols that are proposed by different standardization organizations in message passing within session layer. Most of the IoT applications protocols use TCP or UDP for transport. XMPP, CoAP, DDS, MQTT, and AMQP are grouped of the widely used application protocols. Each one of these protocols have specific functions and are used in specific way to handle some issues. This paper provides an overview for one of the most popular application layer protocols that is MQTT, including its architecture, message format, MQTT scope, and Quality of Service (QoS) for the MQTT levels. MQTT works mainly as a pipe for binary data and provides a flexibility in communication patterns. It is designed to provide a publish-subscribe messaging protocol with most possible minimal bandwidth requirements. MQTT uses Transmission Control Protocol (TCP) for transport. MQTT is an open standard, giving a mechanisms to asynchronous communication, have a range of implementations, and it is working on IP.",
"title": ""
},
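The publish-subscribe pattern and QoS levels described above look roughly as follows with the Python paho-mqtt client (1.x-style callback API); the broker host and topic names are placeholders, not values from the paper.

```python
import paho.mqtt.client as mqtt

BROKER = "test.mosquitto.org"                      # placeholder public broker
TOPIC = "demo/sensors/temperature"                 # placeholder topic

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPIC, qos=1)                 # QoS 1: delivered at least once
    client.publish(TOPIC, payload="21.5", qos=1)   # publish once the connection is up

def on_message(client, userdata, msg):
    print("received", msg.topic, msg.payload.decode())

client = mqtt.Client()                             # paho-mqtt 1.x constructor
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)         # MQTT runs over TCP, default port 1883
client.loop_forever()                              # blocking network loop; Ctrl+C to stop
```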
{
"docid": "42d3f666325c3c9e2d61fcbad3c6659a",
"text": "Supernumerary or accessory nostrils are a very rare type of congenital nasal anomaly, with only a few cases reported in the literature. They can be associated with such malformations as facial clefts and they can be unilateral or bilateral, with most cases reported being unilateral. The accessory nostril may or may not communicate with the ipsilateral nasal cavity, probably depending on the degree of embryological progression of the anomaly. A case of simple supernumerary left nostril with no nasal cavity communication and with a normally developed nose is presented. The surgical treatment is described and the different speculative theories related to the embryogenesis of supernumerary nostrils are also reviewed.",
"title": ""
},
{
"docid": "8a73a42bed30751cbb6798398b81571d",
"text": "In this paper, we study the problem of learning image classification models with label noise. Existing approaches depending on human supervision are generally not scalable as manually identifying correct or incorrect labels is time-consuming, whereas approaches not relying on human supervision are scalable but less effective. To reduce the amount of human supervision for label noise cleaning, we introduce CleanNet, a joint neural embedding network, which only requires a fraction of the classes being manually verified to provide the knowledge of label noise that can be transferred to other classes. We further integrate CleanNet and conventional convolutional neural network classifier into one framework for image classification learning. We demonstrate the effectiveness of the proposed algorithm on both of the label noise detection task and the image classification on noisy data task on several large-scale datasets. Experimental results show that CleanNet can reduce label noise detection error rate on held-out classes where no human supervision available by 41.5% compared to current weakly supervised methods. It also achieves 47% of the performance gain of verifying all images with only 3.2% images verified on an image classification task. Source code and dataset will be available at kuanghuei.github.io/CleanNetProject.",
"title": ""
}
] | scidocsrr |
bf72d8872ff34c83d6d1694d0b2f705a | Anger Is More Influential than Joy: Sentiment Correlation in Weibo | [
{
"docid": "01f31507360e1a675a1a76d8a3dbf9f2",
"text": "Event detection from tweets is an important task to understand the current events/topics attracting a large number of common users. However, the unique characteristics of tweets (e.g. short and noisy content, diverse and fast changing topics, and large data volume) make event detection a challenging task. Most existing techniques proposed for well written documents (e.g. news articles) cannot be directly adopted. In this paper, we propose a segment-based event detection system for tweets, called Twevent. Twevent first detects bursty tweet segments as event segments and then clusters the event segments into events considering both their frequency distribution and content similarity. More specifically, each tweet is split into non-overlapping segments (i.e. phrases possibly refer to named entities or semantically meaningful information units). The bursty segments are identified within a fixed time window based on their frequency patterns, and each bursty segment is described by the set of tweets containing the segment published within that time window. The similarity between a pair of bursty segments is computed using their associated tweets. After clustering bursty segments into candidate events, Wikipedia is exploited to identify the realistic events and to derive the most newsworthy segments to describe the identified events. We evaluate Twevent and compare it with the state-of-the-art method using 4.3 million tweets published by Singapore-based users in June 2010. In our experiments, Twevent outperforms the state-of-the-art method by a large margin in terms of both precision and recall. More importantly, the events detected by Twevent can be easily interpreted with little background knowledge because of the newsworthy segments. We also show that Twevent is efficient and scalable, leading to a desirable solution for event detection from tweets.",
"title": ""
},
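A simple way to flag bursty segments, in the spirit of Twevent's first stage, is to compare each segment's frequency in the current time window against its expectation under a background model. The sketch below uses a binomial mean and standard deviation with a z-score threshold; Twevent's actual burstiness scoring, segment clustering, and Wikipedia filtering are not reproduced, and the threshold and toy numbers are assumptions.

```python
import math

def bursty_segments(window_counts, n_tweets_window, background_prob, z_threshold=3.0):
    """Return segments whose window frequency exceeds expectation by z_threshold sigmas.

    window_counts   : {segment: count in the current time window}
    n_tweets_window : number of tweets in the window
    background_prob : {segment: long-run probability that a tweet contains the segment}
    """
    bursty = {}
    for seg, count in window_counts.items():
        p = background_prob.get(seg, 1.0 / n_tweets_window)
        expected = n_tweets_window * p
        sigma = math.sqrt(n_tweets_window * p * (1 - p)) or 1.0
        z = (count - expected) / sigma
        if z > z_threshold:
            bursty[seg] = z
    return bursty

counts = {"world cup": 240, "good morning": 310}
prob = {"world cup": 0.001, "good morning": 0.03}
print(bursty_segments(counts, n_tweets_window=10000, background_prob=prob))
```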
{
"docid": "3c73a3a8783dcc20274ce36e60d6eb35",
"text": "Recent years have witnessed the explosive growth of online social media. Weibo, a Twitter-like online social network in China, has attracted more than 300 million users in less than three years, with more than 1000 tweets generated in every second. These tweets not only convey the factual information, but also reflect the emotional states of the authors, which are very important for understanding user behaviors. However, a tweet in Weibo is extremely short and the words it contains evolve extraordinarily fast. Moreover, the Chinese corpus of sentiments is still very small, which prevents the conventional keyword-based methods from being used. In light of this, we build a system called MoodLens, which to our best knowledge is the first system for sentiment analysis of Chinese tweets in Weibo. In MoodLens, 95 emoticons are mapped into four categories of sentiments, i.e. angry, disgusting, joyful, and sad, which serve as the class labels of tweets. We then collect over 3.5 million labeled tweets as the corpus and train a fast Naive Bayes classifier, with an empirical precision of 64.3%. MoodLens also implements an incremental learning method to tackle the problem of the sentiment shift and the generation of new words. Using MoodLens for real-time tweets obtained from Weibo, several interesting temporal and spatial patterns are observed. Also, sentiment variations are well captured by MoodLens to effectively detect abnormal events in China. Finally, by using the highly efficient Naive Bayes classifier, MoodLens is capable of online real-time sentiment monitoring. The demo of MoodLens can be found at http://goo.gl/8DQ65.",
"title": ""
}
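The core of a MoodLens-style pipeline, tweets labeled automatically by their emoticons and fed to a fast Naive Bayes classifier, can be sketched with scikit-learn as below. This is a generic English-language illustration only: MoodLens itself works on Chinese text with its own segmentation, emoticon-to-class mapping, and incremental updating, none of which appear here, and the toy corpus is an assumption.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy corpus: class labels would come from emoticons, which are stripped before training.
labeled = [
    ("great win today", "joyful"),
    ("so happy about the show", "joyful"),
    ("this traffic makes me furious", "angry"),
    ("stop lying to us", "angry"),
    ("what a gross smell", "disgusting"),
    ("i miss her so much", "sad"),
]
texts, labels = zip(*labeled)

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)          # fast to retrain as new emoticon-labeled tweets arrive
print(model.predict(["feeling really happy", "this is disgusting"]))
```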
] | [
{
"docid": "8c065f91d367b738c57c10d79f43618f",
"text": "Conversational agents aim to offer an alternative to traditional methods for humans to engage with technology. This can mean to reduce the effort to complete a task using reasoning capabilities and by exploiting context, or allow voice interaction when traditional methods are not available or inconvenient. This paper introduces Foodie Fooderson, a conversational kitchen assistant built using IBM Watson technology. The aim of Foodie is to minimize food wastage by optimizing the use of groceries and assist families in improving their eating habits through recipe recommendations taking into account personal context, such as allergies and dietary goals, while helping reduce food waste and managing grocery budgets. This paper discusses Foodie’s architecture, use and benefits. Foodie uses services from CAPRecipes—our context-aware personalized recipe recommender system, SmarterContext—our personal context management system, and selected publicly available nutrition databases. Foodie reasons using IBM Watson’s conversational services to recognize users’ intents and understand events related to the users and their context. We also discuss our experiences in building conversational agents with Watson, including desired features that may improve the development experience with Watson for creating rich conversations in this exciting era of cognitive computing.",
"title": ""
},
{
"docid": "5515d0471e3647f090985690d85a017c",
"text": "In this paper, we examine the state of the art in augmented reality (AR) for mobile learning. Previous work in the field of mobile learning has included AR as a component of a wider toolkit for mobile learning but, to date, little has been done that discusses the phenomenon in detail or that examines its potential for learning, in a balanced fashion that identifies both positive and negative aspects of AR. We seek to provide a working definition of AR and examine how it is embedded within situated learning in outdoor settings. We also attempt to classify AR according to several key aspects (device/technology; mode of interaction; type of media involved; personal or shared experiences; if the experience is portable or static; and the learning activities/outcomes). We discuss the technical and pedagogical challenges presented by AR before looking at ways in which AR can be used for learning. Lastly, the paper looks ahead to what AR technologies may be on the horizon in the near future.",
"title": ""
},
{
"docid": "46dc94fe4ba164ccf1cb37810112883f",
"text": "The purpose of the study was to test four predictions derived from evolutionary (sexual strategies) theory. The central hypothesis was that men and women possess different emotional mechanisms that motivate and evaluate sexual activities. Consequently, even when women express indifference to emotional involvement and commitment and voluntarily engage in casual sexual relations, their goals, their feelings about the experience, and the associations between their sexual behavior and prospects for long-term investment differ significantly from those of men. Women's sexual behavior is associated with their perception of investment potential: long-term, short-term, and partners' ability and willingness to invest. For men,these associations are weaker or inversed. Regression analyses of survey data from 333 male and 363 female college students revealed the following: Greater permissiveness of sexual attitudes was positively associated with number of sex partners; this association was not moderated by sex of subject (Prediction 1); even when women deliberately engaged in casual sexual relations, thoughts that expressed worry and vulnerability crossed their minds; for females, greater number of partners was associated with increased worry-vulnerability whereas for males the trend was the opposite (Prediction 2); with increasing numbers of sex partners, marital thoughts decreased; this finding was not moderated by sex of subject; this finding did not support Prediction 3; for both males and females, greater number of partners was related to larger numbers of one-night stands, partners foreseen in the next 5 years, and deliberately casual sexual relations. This trend was significantly stronger for males than for females (Prediction 4).",
"title": ""
},
{
"docid": "558533fe6149adc6b506153e657b0ba2",
"text": "Graphical modelling of various aspects of software and systems is a common part of software development. UML is the de-facto standard for various types of software models. To be able to research UML, academia needs to have a corpus of UML models. For building such a database, an automated system that has the ability to classify UML class diagram images would be very beneficial, since a large portion of UML class diagrams (UML CDs) is available as images on the Internet. In this study, we propose 23 image-features and investigate the use of these features for the purpose of classifying UML CD images. We analyse the performance of the features and assess their contribution based on their Information Gain Attribute Evaluation scores. We study specificity and sensitivity scores of six classification algorithms on a set of 1300 images. We found that 19 out of 23 introduced features can be considered as influential predictors for classifying UML CD images. Through the six algorithms, the prediction rate achieves nearly 96% correctness for UML-CD and 91% of correctness for non-UML CD.",
"title": ""
},
{
"docid": "1eb6514f825be9d6a4af9646b6a7a9e2",
"text": "Maritime tasks, such as surveillance and patrolling, aquaculture inspection, and wildlife monitoring, typically require large operational crews and expensive equipment. Only recently have unmanned vehicles started to be used for such missions. These vehicles, however, tend to be expensive and have limited coverage, which prevents large-scale deployment. In this paper, we propose a scalable robotics system based on swarms of small and inexpensive aquatic drones. We take advantage of bio-inspired artificial evolution techniques in order to synthesize scalable and robust collective behaviors for the drones. The behaviors are then combined hierarchically with preprogrammed control in an engineeredcentric approach, allowing the overall behavior for a particular mission to be quickly configured and tested in simulation before the aquatic drones are deployed. We demonstrate the scalability of our hybrid approach by successfully deploying up to 1,000 simulated drones to patrol a 20 km long strip for 24 hours.",
"title": ""
},
{
"docid": "8fb459173427fb0592b1c2d3d85cb092",
"text": "Establishing dense correspondences between multiple images is a fundamental task in many applications. However, finding a reliable correspondence between multi-modal or multi-spectral images still remains unsolved due to their challenging photometric and geometric variations. In this paper, we propose a novel dense descriptor, called dense adaptive self-correlation (DASC), to estimate dense multi-modal and multi-spectral correspondences. Based on an observation that self-similarity existing within images is robust to imaging modality variations, we define the descriptor with a series of an adaptive self-correlation similarity measure between patches sampled by a randomized receptive field pooling, in which a sampling pattern is obtained using a discriminative learning. The computational redundancy of dense descriptors is dramatically reduced by applying fast edge-aware filtering. Furthermore, in order to address geometric variations including scale and rotation, we propose a geometry-invariant DASC (GI-DASC) descriptor that effectively leverages the DASC through a superpixel-based representation. For a quantitative evaluation of the GI-DASC, we build a novel multi-modal benchmark as varying photometric and geometric conditions. Experimental results demonstrate the outstanding performance of the DASC and GI-DASC in many cases of dense multi-modal and multi-spectral correspondences.",
"title": ""
},
{
"docid": "63435412232daf75eebd8ed973cb5334",
"text": "With recent advances in devices, middleware, applications and networking infrastructure, the wireless Internet is becoming a reality. We believe that some of the major drivers of the wireless Internet will be emerging mobile applications such as mobile commerce. Although many of these are futuristic, some applications including user-and location-specific mobile advertising, location-based services, and mobile financial services are beginning to be commercialized. Mobile commerce applications present several interesting and complex challenges including location management of products, services, devices, and people. Further, these applications have fairly diverse requirements from the underlying wireless infrastructure in terms of location accuracy, response time, multicast support, transaction frequency and duration, and dependability. Therefore, research is necessary to address these important and complex challenges. In this article, we present an integrated location management architecture to support the diverse location requirements of m-commerce applications. The proposed architecture is capable of supporting a range of location accuracies, wider network coverage, wireless multicast, and infrastructure dependability for m-commerce applications. The proposed architecture can also support several emerging mobile applications. Additionally, several interesting research problems and directions in location management for wireless Internet applications are presented and discussed.",
"title": ""
},
{
"docid": "4d8b5461b6c6422e7436e86b16ca461c",
"text": "BACKGROUND\nThe loss of muscle mass is considered to be a major determinant of strength loss in aging. However, large-scale longitudinal studies examining the association between the loss of mass and strength in older adults are lacking.\n\n\nMETHODS\nThree-year changes in muscle mass and strength were determined in 1880 older adults in the Health, Aging and Body Composition Study. Knee extensor strength was measured by isokinetic dynamometry. Whole body and appendicular lean and fat mass were assessed by dual-energy x-ray absorptiometry and computed tomography.\n\n\nRESULTS\nBoth men and women lost strength, with men losing almost twice as much strength as women. Blacks lost about 28% more strength than did whites. Annualized rates of leg strength decline (3.4% in white men, 4.1% in black men, 2.6% in white women, and 3.0% in black women) were about three times greater than the rates of loss of leg lean mass ( approximately 1% per year). The loss of lean mass, as well as higher baseline strength, lower baseline leg lean mass, and older age, was independently associated with strength decline in both men and women. However, gain of lean mass was not accompanied by strength maintenance or gain (ss coefficients; men, -0.48 +/- 4.61, p =.92, women, -1.68 +/- 3.57, p =.64).\n\n\nCONCLUSIONS\nAlthough the loss of muscle mass is associated with the decline in strength in older adults, this strength decline is much more rapid than the concomitant loss of muscle mass, suggesting a decline in muscle quality. Moreover, maintaining or gaining muscle mass does not prevent aging-associated declines in muscle strength.",
"title": ""
},
{
"docid": "703174754f25cea5a9f1e1e7f1988b76",
"text": "This study proposes a data-driven approach to phone set construction for code-switching automatic speech recognition (ASR). Acoustic and context-dependent cross-lingual articulatory features (AFs) are incorporated into the estimation of the distance between triphone units for constructing a Chinese-English phone set. The acoustic features of each triphone in the training corpus are extracted for constructing an acoustic triphone HMM. Furthermore, the articulatory features of the \"last/first\" state of the corresponding preceding/succeeding triphone in the training corpus are used to construct an AF-based GMM. The AFs, extracted using a deep neural network (DNN), are used for code-switching articulation modeling to alleviate the data sparseness problem due to the diverse context-dependent phone combinations in intra-sentential code-switching. The triphones are then clustered to obtain a Chinese-English phone set based on the acoustic HMMs and the AF-based GMMs using a hierarchical triphone clustering algorithm. Experimental results on code-switching ASR show that the proposed method for phone set construction outperformed other traditional methods.",
"title": ""
},
{
"docid": "21870abb7943b1b26c844bff1685da1c",
"text": "Many robots capable of performing social behaviors have recently been developed for Human-Robot Interaction (HRI) studies. These social robots are applied in various domains such as education, entertainment, medicine, and collaboration. Besides the undisputed advantages, a major difficulty in HRI studies with social robots is that the robot platforms are typically expensive and/or not open-source. It burdens researchers to broaden experiments to a larger scale or apply study results in practice. This paper describes a method to modify My Keepon, a toy version of Keepon robot, to be a programmable platform for HRI studies, especially for robot-assisted therapies. With an Arduino microcontroller board and an open-source Microsoft Visual C# software, users are able to fully control the sounds and motions of My Keepon, and configure the robot to the needs of their research. Peripherals can be added for advanced studies (e.g., mouse, keyboard, buttons, PlayStation2 console, Emotiv neuroheadset, Kinect). Our psychological experiment results show that My Keepon modification is a useful and low-cost platform for several HRI studies.",
"title": ""
},
{
"docid": "e8b5d86ad69c34683f9c4a46a8a2d908",
"text": "This document compares global bundle adjustment via the ubiquitous Levenberg-Marquardt to a Kalman filter to estimate parameters of a homographic transformation between two or more images starting from bad initial conditions. We show that the filtering technique outperforms sparse bundle adjustment in terms of projection error and computational costs. The techniques are tested on real world images of an indoor and outdoor scene.",
"title": ""
},
{
"docid": "a31f26b4c937805a800e33e7986ee929",
"text": "In this paper, we propose a novel shape interpolation approach based on Poisson equation. We formulate the trajectory problem of shape interpolation as solving Poisson equations defined on a domain mesh. A non-linear gradient field interpolation method is proposed to take both vertex coordinates and surface orientation into account. With proper boundary conditions, the in-between shapes are reconstructed implicitly from the interpolated gradient fields, while traditional methods usually manipulate vertex coordinates directly. Besides of global shape interpolation, our method is also applicable to local shape interpolation, and can be further enhanced by incorporating with deformation. Our approach can generate visual pleasing and physical plausible morphing sequences with stable area and volume changes. Experimental results demonstrate that our technique can avoid the shrinkage problem appeared in linear shape interpolation.",
"title": ""
},
{
"docid": "fbcdb57ae0d42e9665bc95dbbca0d57b",
"text": "Data classification and tag recommendation are both important and challenging tasks in social media. These two tasks are often considered independently and most efforts have been made to tackle them separately. However, labels in data classification and tags in tag recommendation are inherently related. For example, a Youtube video annotated with NCAA, stadium, pac12 is likely to be labeled as football, while a video/image with the class label of coast is likely to be tagged with beach, sea, water and sand. The existence of relations between labels and tags motivates us to jointly perform classification and tag recommendation for social media data in this paper. In particular, we provide a principled way to capture the relations between labels and tags, and propose a novel framework CLARE, which fuses data CLAssification and tag REcommendation into a coherent model. With experiments on three social media datasets, we demonstrate that the proposed framework CLARE achieves superior performance on both tasks compared to the state-of-the-art methods.",
"title": ""
},
{
"docid": "a259b83f74b76401334a544c1fa2192d",
"text": "Pan-sharpening is a fundamental and significant task in the field of remote sensing imagery processing, in which high-resolution spatial details from panchromatic images are employed to enhance the spatial resolution of multispectral (MS) images. As the transformation from low spatial resolution MS image to high-resolution MS image is complex and highly nonlinear, inspired by the powerful representation for nonlinear relationships of deep neural networks, we introduce multiscale feature extraction and residual learning into the basic convolutional neural network (CNN) architecture and propose the multiscale and multidepth CNN for the pan-sharpening of remote sensing imagery. Both the quantitative assessment results and the visual assessment confirm that the proposed network yields high-resolution MS images that are superior to the images produced by the compared state-of-the-art methods.",
"title": ""
},
{
"docid": "ceeb8b559c372a45d63fc5acd5b47613",
"text": "Noninvasive body contouring is the fastest growing area of cosmetic dermatology. It entails the use of specific technology to optimize the definition, smoothness, and shape of the human body in a safe and effective manner. There are currently 4 leading modalities used for noninvasive body contouring: cryolipolysis, radiofrequency, high-intensity focused ultrasound, and laser therapy. This article provides an overview of each modality.",
"title": ""
},
{
"docid": "d99302511e2eb17ce875d480d1bb78fc",
"text": "Emojis allow us to describe objects, situations and even feelings with small images, providing a visual and quick way to communicate. In this paper, we analyse emojis used in Twitter with distributional semantic models. We retrieve 10 millions tweets posted by USA users, and we build several skip gram word embedding models by mapping in the same vectorial space both words and emojis. We test our models with semantic similarity experiments, comparing the output of our models with human assessment. We also carry out an exhaustive qualitative evaluation, showing interesting results.",
"title": ""
},
{
"docid": "f02c2720bb61cb916643ca9708910c77",
"text": "This paper presents the NLP-TEA 2016 shared task for Chinese grammatical error diagnosis which seeks to identify grammatical error types and their range of occurrence within sentences written by learners of Chinese as foreign language. We describe the task definition, data preparation, performance metrics, and evaluation results. Of the 15 teams registered for this shared task, 9 teams developed the system and submitted a total of 36 runs. We expected this evaluation campaign could lead to the development of more advanced NLP techniques for educational applications, especially for Chinese error detection. All data sets with gold standards and scoring scripts are made publicly available to researchers.",
"title": ""
},
{
"docid": "b58793f6bce670efefe34bd0a1f29898",
"text": "Cell signaling networks coordinate specific patterns of protein expression in response to external cues, yet the logic by which signaling pathway activity determines the eventual abundance of target proteins is complex and poorly understood. Here, we describe an approach for simultaneously controlling the Ras/Erk pathway and monitoring a target gene's transcription and protein accumulation in single live cells. We apply our approach to dissect how Erk activity is decoded by immediate early genes (IEGs). We find that IEG transcription decodes Erk dynamics through a shared band-pass filtering circuit; repeated Erk pulses transcribe IEGs more efficiently than sustained Erk inputs. However, despite highly similar transcriptional responses, each IEG exhibits dramatically different protein-level accumulation, demonstrating a high degree of post-transcriptional regulation by combinations of multiple pathways. Our results demonstrate that the Ras/Erk pathway is decoded by both dynamic filters and logic gates to shape target gene responses in a context-specific manner.",
"title": ""
},
{
"docid": "58c25a0a600b7e59de5a85cb2b7faea9",
"text": "The increasing level of integration in electronic devices requires high density package substrates with good electrical and thermal performance, and high reliability. Organic laminate substrates have been serving these requirements with their continuous improvements in terms of the material characteristics and fabrication process to realize multi-layer fine pattern interconnects and small form factor. We present the advanced coreless laminate substrates in this paper including 3-layer thin substrate built by ETS (Embedded Trace Substrate) technology, 3-layer SUTC (Simmtech Ultra-Thin substrate with Carrier) for fan-out chip last package, and 3-layer coreless substrate with HSR (High modulus Solder Resist) for reduced warpage. We also present new coreless substrates up to 10 layers and substrate based on EMC. These new laminate substrates are used in many different applications such as application processors, memory, CMOS image sensors, touch screen controllers, MEMS, and RF SIP(System in Package) for over 70GHz applications. One common challenge for all these substrates is to minimize the warpage. The analysis and simulation techniques for the warpage control are presented.",
"title": ""
}
] | scidocsrr |
ff57c158d0058d8f5b16f4049ec0210d | Supply Chain Contracting Under Competition: Bilateral Bargaining vs. Stackelberg | [
{
"docid": "6559d77de48d153153ce77b0e2969793",
"text": "1 This paper is an invited chapter to be published in the Handbooks in Operations Research and Management Science: Supply Chain Management, edited by Steve Graves and Ton de Kok and published by North-Holland. I would like to thank the many people that carefully read and commented on the ...rst draft of this manuscript: Ravi Anupindi, Fangruo Chen, Charles Corbett, James Dana, Ananth Iyer, Ton de Kok, Yigal Gerchak, Mark Ferguson, Marty Lariviere, Serguei Netessine, Ediel Pinker, Nils Rudi, Sridhar Seshadri, Terry Taylor and Kevin Weng. I am, of course, responsible for all remaining errors. Comments, of course, are still quite welcomed.",
"title": ""
}
] | [
{
"docid": "d0c5d24a5f68eb5448b45feeca098b87",
"text": "Age estimation has wide applications in video surveillance, social networking, and human-computer interaction. Many of the published approaches simply treat age estimation as an exact age regression problem, and thus do not leverage a distribution's robustness in representing labels with ambiguity such as ages. In this paper, we propose a new loss function, called mean-variance loss, for robust age estimation via distribution learning. Specifically, the mean-variance loss consists of a mean loss, which penalizes difference between the mean of the estimated age distribution and the ground-truth age, and a variance loss, which penalizes the variance of the estimated age distribution to ensure a concentrated distribution. The proposed mean-variance loss and softmax loss are jointly embedded into Convolutional Neural Networks (CNNs) for age estimation. Experimental results on the FG-NET, MORPH Album II, CLAP2016, and AADB databases show that the proposed approach outperforms the state-of-the-art age estimation methods by a large margin, and generalizes well to image aesthetics assessment.",
"title": ""
},
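The mean-variance loss described above has a short PyTorch expression: treat the network output as a distribution over discrete ages, then penalize the squared error of its mean and its variance alongside the usual softmax cross-entropy. The sketch below is an illustration of that combination; the weighting factors are placeholders, not the paper's tuned values.

```python
import torch
import torch.nn.functional as F

def mean_variance_loss(logits, target_age, lambda_m=0.2, lambda_v=0.05):
    """logits: (batch, n_ages) scores over age bins 0..n_ages-1; target_age: (batch,) integer ages."""
    probs = F.softmax(logits, dim=1)
    ages = torch.arange(logits.size(1), dtype=probs.dtype, device=logits.device)
    mean = (probs * ages).sum(dim=1)                          # expected age per sample
    var = (probs * (ages - mean.unsqueeze(1)) ** 2).sum(dim=1)
    ce = F.cross_entropy(logits, target_age)                  # softmax loss, as in the paper
    mean_loss = ((mean - target_age.float()) ** 2).mean() / 2 # mean loss: estimated mean vs true age
    var_loss = var.mean()                                     # variance loss: concentrate the distribution
    return ce + lambda_m * mean_loss + lambda_v * var_loss

logits = torch.randn(4, 101, requires_grad=True)              # ages 0..100
target = torch.tensor([23, 57, 34, 70])
loss = mean_variance_loss(logits, target)
loss.backward()
print(float(loss))
```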
{
"docid": "211b858db72c962efaedf66f2ed9479d",
"text": "Along with the rapid development of information and communication technologies, educators are trying to keep up with the dramatic changes in our electronic environment. These days mobile technology, with popular devices such as iPhones, Android phones, and iPads, is steering our learning environment towards increasingly focusing on mobile learning or m-Learning. Currently, most interfaces employ keyboards, mouse or touch technology, but some emerging input-interfaces use voiceor marker-based gesture recognition. In the future, one of the cutting-edge technologies likely to be used is robotics. Robots are already being used in some classrooms and are receiving an increasing level of attention. Robots today are developed for special purposes, quite similar to personal computers in their early days. However, in the future, when mass production lowers prices, robots will bring about big changes in our society. In this column, the author focuses on educational service robots. Educational service robots for language learning and robot-assisted language learning (RALL) will be introduced, and the hardware and software platforms for RALL will be explored, as well as implications for future research.",
"title": ""
},
{
"docid": "0241cef84d46b942ee32fc7345874b90",
"text": "A total of eight appendices (Appendix 1 through Appendix 8) and an associated reference for these appendices have been placed here. In addition, there is currently a search engine located at to assist users in identifying BPR techniques and tools.",
"title": ""
},
{
"docid": "f3f4cb6e7e33f54fca58c14ce82d6b46",
"text": "In this letter, a novel slot array antenna with a substrate-integrated coaxial line (SICL) technique is proposed. The proposed antenna has radiation slots etched homolaterally along the mean line in the top metallic layer of SICL and achieves a compact transverse dimension. A prototype with 5 <inline-formula><tex-math notation=\"LaTeX\">$\\times$ </tex-math></inline-formula> 10 longitudinal slots is designed and fabricated with a multilayer liquid crystal polymer (LCP) process. A maximum gain of 15.0 dBi is measured at 35.25 GHz with sidelobe levels of <inline-formula> <tex-math notation=\"LaTeX\">$-$</tex-math></inline-formula> 28.2 dB (<italic>E</italic>-plane) and <inline-formula> <tex-math notation=\"LaTeX\">$-$</tex-math></inline-formula> 33.1 dB (<italic>H</italic>-plane). The close correspondence between experimental results and designed predictions on radiation patterns has validated the proposed excogitation in the end.",
"title": ""
},
{
"docid": "dea6ad0e1985260dbe7b70cef1c5da54",
"text": "The commonest mitochondrial diseases are probably those impairing the function of complex I of the respiratory electron transport chain. Such complex I impairment may contribute to various neurodegenerative disorders e.g. Parkinson's disease. In the following, using hepatocytes as a model cell, we have shown for the first time that the cytotoxicity caused by complex I inhibition by rotenone but not that caused by complex III inhibition by antimycin can be prevented by coenzyme Q (CoQ1) or menadione. Furthermore, complex I inhibitor cytotoxicity was associated with the collapse of the mitochondrial membrane potential and reactive oxygen species (ROS) formation. ROS scavengers or inhibitors of the mitochondrial permeability transition prevented cytotoxicity. The CoQ1 cytoprotective mechanism required CoQ1 reduction by DT-diaphorase (NQO1). Furthermore, the mitochondrial membrane potential and ATP levels were restored at low CoQ1 concentrations (5 microM). This suggests that the CoQ1H2 formed by NQO1 reduced complex III and acted as an electron bypass of the rotenone block. However cytoprotection still occurred at higher CoQ1 concentrations (>10 microM), which were less effective at restoring ATP levels but readily restored the cellular cytosolic redox potential (i.e. lactate: pyruvate ratio) and prevented ROS formation. This suggests that CoQ1 or menadione cytoprotection also involves the NQO1 catalysed reoxidation of NADH that accumulates as a result of complex I inhibition. The CoQ1H2 formed would then also act as a ROS scavenger.",
"title": ""
},
{
"docid": "579536fe3f52f4ed244f06210a9c2cd1",
"text": "OBJECTIVE\nThis review integrates recent advances in attachment theory, affective neuroscience, developmental stress research, and infant psychiatry in order to delineate the developmental precursors of posttraumatic stress disorder.\n\n\nMETHOD\nExisting attachment, stress physiology, trauma, and neuroscience literatures were collected using Index Medicus/Medline and Psychological Abstracts. This converging interdisciplinary data was used as a theoretical base for modelling the effects of early relational trauma on the developing central and autonomic nervous system activities that drive attachment functions.\n\n\nRESULTS\nCurrent trends that integrate neuropsychiatry, infant psychiatry, and clinical psychiatry are generating more powerful models of the early genesis of a predisposition to psychiatric disorders, including PTSD. Data are presented which suggest that traumatic attachments, expressed in episodes of hyperarousal and dissociation, are imprinted into the developing limbic and autonomic nervous systems of the early maturing right brain. These enduring structural changes lead to the inefficient stress coping mechanisms that lie at the core of infant, child, and adult posttraumatic stress disorders.\n\n\nCONCLUSIONS\nDisorganised-disoriented insecure attachment, a pattern common in infants abused in the first 2 years of life, is psychologically manifest as an inability to generate a coherent strategy for coping with relational stress. Early abuse negatively impacts the developmental trajectory of the right brain, dominant for attachment, affect regulation, and stress modulation, thereby setting a template for the coping deficits of both mind and body that characterise PTSD symptomatology. These data suggest that early intervention programs can significantly alter the intergenerational transmission of posttraumatic stress disorders.",
"title": ""
},
{
"docid": "793d41551a918a113f52481ff3df087e",
"text": "In this paper, we propose a novel deep captioning framework called Attention-based multimodal recurrent neural network with Visual Concept Transfer Mechanism (A-VCTM). There are three advantages of the proposed A-VCTM. (1) A multimodal layer is used to integrate the visual representation and context representation together, building a bridge that connects context information with visual information directly. (2) An attention mechanism is introduced to lead the model to focus on the regions corresponding to the next word to be generated (3) We propose a visual concept transfer mechanism to generate novel visual concepts and enrich the description sentences. Qualitative and quantitative results on two standard benchmarks, MSCOCO and Flickr30K show the effectiveness and practicability of the proposed A-VCTM framework.",
"title": ""
},
{
"docid": "ba75caedb1c9e65f14c2764157682bdf",
"text": "Data augmentation is usually adopted to increase the amount of training data, prevent overfitting and improve the performance of deep models. However, in practice, the effect of regular data augmentation, such as random image crop, is limited since it might introduce much uncontrolled background noise. In this paper, we propose WeaklySupervised Data Augmentation Network (WS-DAN) to explore the potential of data augmentation. Specifically, for each training image, we first generate attention maps to represent the object’s discriminative parts by weakly supervised Learning. Next, we randomly choose one attention map to augment this image, including attention crop and attention drop. Weakly-supervised data augmentation network improves the classification accuracy in two folds. On the one hand, images can be seen better since multiple object parts can be activated. On the other hand, attention regions provide spatial information of objects, which can make images be looked closer to further improve the performance. Comprehensive experiments in common fine-grained visual classification datasets show that our method surpasses the state-of-the-art methods by a large margin, which demonstrated the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "5ca75490c015685a1fc670b2ee5103ff",
"text": "The motion of the hand is the result of a complex interaction of extrinsic and intrinsic muscles of the forearm and hand. Whereas the origin of the extrinsic hand muscles is mainly located in the forearm, the origin (and insertion) of the intrinsic muscles is located within the hand itself. The intrinsic muscles of the hand include the lumbrical muscles I to IV, the dorsal and palmar interosseous muscles, the muscles of the thenar eminence (the flexor pollicis brevis, the abductor pollicis brevis, the adductor pollicis, and the opponens pollicis), as well as the hypothenar muscles (the abductor digiti minimi, flexor digiti minimi, and opponens digiti minimi). The thenar muscles control the motion of the thumb, and the hypothenar muscles control the motion of the little finger.1,2 The intrinsic muscles of the hand have not received much attention in the radiologic literature, despite their importance in moving the hand.3–7 Prospective studies on magnetic resonance (MR) imaging of the intrinsic muscles of the hand are rare, especially with a focus on new imaging techniques.6–8 However, similar to the other skeletal muscles, the intrinsic muscles of the hand can be affected by many conditions with resultant alterations in MR signal intensity ormorphology (e.g., with congenital abnormalities, inflammation, infection, trauma, neurologic disorders, and neoplastic conditions).1,9–12 MR imaging plays an important role in the evaluation of skeletal muscle disorders. Considered the most reliable diagnostic imaging tool, it can show subtle changes of signal and morphology, allow reliable detection and documentation of abnormalities, as well as provide a clear baseline for follow-up studies.13 It is also observer independent and allows second-opinion evaluation that is sometimes necessary, for example before a multidisciplinary discussion. Few studies exist on the clinical impact of MR imaging of the intrinsic muscles of the hand. A study by Andreisek et al in 19 patients with clinically evident or suspected intrinsic hand muscle abnormalities showed that MR imaging of the hand is useful and correlates well with clinical findings in patients with posttraumatic syndromes, peripheral neuropathies, myositis, and tumorous lesions, as well as congenital abnormalities.14,15 Because there is sparse literature on the intrinsic muscles of the hand, this review article offers a comprehensive review of muscle function and anatomy, describes normal MR imaging anatomy, and shows a spectrum of abnormal imaging findings.",
"title": ""
},
{
"docid": "c3b6d46a9e1490c720056682328586d5",
"text": "BACKGROUND\nBirth preparedness and complication preparedness (BPACR) is a key component of globally accepted safe motherhood programs, which helps ensure women to reach professional delivery care when labor begins and to reduce delays that occur when mothers in labor experience obstetric complications.\n\n\nOBJECTIVE\nThis study was conducted to assess practice and factors associated with BPACR among pregnant women in Aleta Wondo district in Sidama Zone, South Ethiopia.\n\n\nMETHODS\nA community based cross sectional study was conducted in 2007, on a sample of 812 pregnant women. Data were collected using pre-tested and structured questionnaire. The collected data were analyzed by SPSS for windows version 12.0.1. The women were asked whether they followed the desired five steps while pregnant: identified a trained birth attendant, identified a health facility, arranged for transport, identified blood donor and saved money for emergency. Taking at least two steps was considered being well-prepared.\n\n\nRESULTS\nAmong 743 pregnant women only a quarter (20.5%) of pregnant women identified skilled provider. Only 8.1% identified health facility for delivery and/or for obstetric emergencies. Preparedness for transportation was found to be very low (7.7%). Considerable (34.5%) number of families saved money for incurred costs of delivery and emergency if needed. Only few (2.3%) identified potential blood donor in case of emergency. Majority (87.9%) of the respondents reported that they intended to deliver at home, and only 60(8%) planned to deliver at health facilities. Overall only 17% of pregnant women were well prepared. The adjusted multivariate model showed that significant predictors for being well-prepared were maternal availing of antenatal services (OR = 1.91 95% CI; 1.21-3.01) and being pregnant for the first time (OR = 6.82, 95% CI; 1.27-36.55).\n\n\nCONCLUSION\nBPACR practice in the study area was found to be low. Effort to increase BPACR should focus on availing antenatal care services.",
"title": ""
},
{
"docid": "d8b2294b650274fc0269545296504432",
"text": "The multidisciplinary nature of information privacy research poses great challenges, since many concepts of information privacy have only been considered and developed through the lens of a particular discipline. It was our goal to conduct a multidisciplinary literature review. Following the three-stage approach proposed by Webster and Watson (2002), our methodology for identifying information privacy publications proceeded in three stages.",
"title": ""
},
{
"docid": "52ebf28afd8ae56816fb81c19e8890b6",
"text": "In this paper we aim to model the relationship between the text of a political blog post and the comment volume—that is, the total amount of response—that a post will receive. We seek to accurately identify which posts will attract a high-volume response, and also to gain insight about the community of readers and their interests. We design and evaluate variations on a latentvariable topic model that links text to comment volume. Introduction What makes a blog post noteworthy? One measure of the popularity or breadth of interest of a blog post is the extent to which readers of the blog are inspired to leave comments on the post. In this paper, we study the relationship between the text contents of a blog post and the volume of response it will receive from blog readers. Modeling this relationship has the potential to reveal the interests of a blog’s readership community to its authors, readers, advertisers, and scientists studying the blogosphere, but it may also be useful in improving technologies for blog search, recommendation, summarization, and so on. There are many ways to define “popularity” in blogging. In this study, we focus exclusively on the aggregate volume of comments. Commenting is an important activity in the political blogosphere, giving a blog site the potential to become a discussion forum. For a given blog post, we treat comment volume as a target output variable, and use generative probabilistic models to learn from past data the relationship between a blog post’s text contents and its comment volume. While many clues might be useful in predicting comment volume (e.g., the post’s author, the time the post appears, the length of the post, etc.) here we focus solely on the text contents of the post. We first describe the data and experimental framework, including a simple baseline. We then explore how latentvariable topic models can be used to make better predictions about comment volume. These models reveal that part of the variation in comment volume can be explained by the topic of the blog post, and elucidate the relative degrees to which readers find each topic comment-worthy. ∗The authors acknowledge research support from HP Labs and helpful comments from the reviewers and Jacob Eisenstein. Copyright c © 2010, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Predicting Comment Volume Our goal is to predict some measure of the volume of comments on a new blog post.1 Volume might be measured as the number of words in the comment section, the number of comments, the number of distinct users who leave comments, or a variety of other ways. Any of these can be affected by uninteresting factors—the time of day the post appears, a side conversation, a surge in spammer activity—but these quantities are easily measured. In research on blog data, comments are often ignored, and it is easy to see why: comments are very noisy, full of non-standard grammar and spelling, usually unedited, often cryptic and uninformative, at least to those outside the blog’s community. A few studies have focused on information in comments. Mishe and Glance (2006) showed the value of comments in characterizing the social repercussions of a post, including popularity and controversy. Their largescale user study correlated popularity and comment activity. Yano et al. 
(2009) sought to predict which members of blog’s community would leave comments, and in some cases used the text contents of the comments themselves to discover topics related to both words and user comment behavior. This work is similar, but we seek to predict the aggregate behavior of the blog post’s readers: given a new blog post, how much will the community comment on it?",
"title": ""
},
{
"docid": "40479536efec6311cd735f2bd34605d7",
"text": "The vast quantity of information brought by big data as well as the evolving computer hardware encourages success stories in the machine learning community. In the meanwhile, it poses challenges for the Gaussian process (GP), a well-known non-parametric and interpretable Bayesian model, which suffers from cubic complexity to training size. To improve the scalability while retaining the desirable prediction quality, a variety of scalable GPs have been presented. But they have not yet been comprehensively reviewed and discussed in a unifying way in order to be well understood by both academia and industry. To this end, this paper devotes to reviewing state-of-theart scalable GPs involving two main categories: global approximations which distillate the entire data and local approximations which divide the data for subspace learning. Particularly, for global approximations, we mainly focus on sparse approximations comprising prior approximations which modify the prior but perform exact inference, and posterior approximations which retain exact prior but perform approximate inference; for local approximations, we highlight the mixture/product of experts that conducts model averaging from multiple local experts to boost predictions. To present a complete review, recent advances for improving the scalability and model capability of scalable GPs are reviewed. Finally, the extensions and open issues regarding the implementation of scalable GPs in various scenarios are reviewed and discussed to inspire novel ideas for future research avenues.",
"title": ""
},
{
"docid": "68d8834770c34450adc96ed96299ae48",
"text": "This thesis presents a current-mode CMOS image sensor using lateral bipolar phototransistors (LPTs). The objective of this design is to improve the photosensitivity of the image sensor, and to provide photocurrent amplification at the circuit level. Lateral bipolar phototransistors can be implemented using a standard CMOS technology with no process modification. Under illumination, photogenerated carriers contribute to the base current, and the output emitter current is amplified through the transistor action of the bipolar device. Our analysis and simulation results suggest that the LPT output characteristics are strongly dependent on process parameters including base and emitter doping concentrations, as well as the device geometry such as the base width. For high current gain, a minimized base width is desired. The 2D effect of current crowding has also been discussed. Photocurrent can be further increased using amplifying current mirrors in the pixel and column structures. A prototype image sensor has been designed and fabricated in a standard 0.18μm CMOS technology. This design includes a photodiode image array and a LPT image array, each 70× 48 in dimension. For both arrays, amplifying current mirrors are included in the pixel readout structure and at the column level. Test results show improvements in both photosensitivity and conversion efficiency. The LPT also exhibits a better spectral response in the red region of the spectrum, because of the nwell/p-substrate depletion region. On the other hand, dark current, fixed pattern noise (FPN), and power consumption also increase due to current amplification. This thesis has demonstrated that the use of lateral bipolar phototransistors and amplifying current mirrors can help to overcome low photosensitivity and other deterioration imposed by technology scaling. The current-mode readout scheme with LPT-based photodetectors can be used as a front end to additional image processing circuits.",
"title": ""
},
{
"docid": "335220bbad7798a19403d393bcbbf7fb",
"text": "In today’s computerized and information-based society, text data is rich but messy. People are soaked with vast amounts of natural-language text data, ranging from news articles, social media post, advertisements, to a wide range of textual information from various domains (medical records, corporate reports). To turn such massive unstructured text data into actionable knowledge, one of the grand challenges is to gain an understanding of the factual information (e.g., entities, attributes, relations, events) in the text. In this tutorial, we introduce data-driven methods to construct structured information networks (where nodes are different types of entities attached with attributes, and edges are different relations between entities) for text corpora of different kinds (especially for massive, domain-specific text corpora) to represent their factual information. We focus on methods that are minimally-supervised, domain-independent, and languageindependent for fast network construction across various application domains (news, web, biomedical, reviews). We demonstrate on real datasets including news articles, scientific publications, tweets and reviews how these constructed networks aid in text analytics and knowledge discovery at a large scale.",
"title": ""
},
{
"docid": "d8eab1f244bd5f9e05eb706bb814d299",
"text": "Private participation in road projects is increasing around the world. The most popular franchising mechanism is a concession contract, which allows a private firm to charge tolls to road users during a pre-determined period in order to recover its investments. Concessionaires are usually selected through auctions at which candidates submit bids for tolls, payments to the government, or minimum term to hold the contract. This paper discusses, in the context of road franchising, how this mechanism does not generally yield optimal outcomes and it induces the frequent contract renegotiations observed in road projects. A new franchising mechanism is proposed, based on flexible-term contracts and auctions with bids for total net revenue and maintenance costs. This new mechanism improves outcomes compared to fixed-term concessions, by eliminating traffic risk and promoting the selection of efficient concessionaires.",
"title": ""
},
{
"docid": "155de33977b33d2f785fd86af0aa334f",
"text": "Model-based analysis tools, built on assumptions and simplifications, are difficult to handle smart grids with data characterized by volume, velocity, variety, and veracity (i.e., 4Vs data). This paper, using random matrix theory (RMT), motivates data-driven tools to perceive the complex grids in high-dimension; meanwhile, an architecture with detailed procedures is proposed. In algorithm perspective, the architecture performs a high-dimensional analysis and compares the findings with RMT predictions to conduct anomaly detections. Mean spectral radius (MSR), as a statistical indicator, is defined to reflect the correlations of system data in different dimensions. In management mode perspective, a group-work mode is discussed for smart grids operation. This mode breaks through regional limitations for energy flows and data flows, and makes advanced big data analyses possible. For a specific large-scale zone-dividing system with multiple connected utilities, each site, operating under the group-work mode, is able to work out the regional MSR only with its own measured/simulated data. The large-scale interconnected system, in this way, is naturally decoupled from statistical parameters perspective, rather than from engineering models perspective. Furthermore, a comparative analysis of these distributed MSRs, even with imperceptible different raw data, will produce a contour line to detect the event and locate the source. It demonstrates that the architecture is compatible with the block calculation only using the regional small database; beyond that, this architecture, as a data-driven solution, is sensitive to system situation awareness, and practical for real large-scale interconnected systems. Five case studies and their visualizations validate the designed architecture in various fields of power systems. To our best knowledge, this paper is the first attempt to apply big data technology into smart grids.",
"title": ""
},
{
"docid": "e75f830b902ca7d0e8d9e9fa03a62440",
"text": "Changes in synaptic connections are considered essential for learning and memory formation. However, it is unknown how neural circuits undergo continuous synaptic changes during learning while maintaining lifelong memories. Here we show, by following postsynaptic dendritic spines over time in the mouse cortex, that learning and novel sensory experience lead to spine formation and elimination by a protracted process. The extent of spine remodelling correlates with behavioural improvement after learning, suggesting a crucial role of synaptic structural plasticity in memory formation. Importantly, a small fraction of new spines induced by novel experience, together with most spines formed early during development and surviving experience-dependent elimination, are preserved and provide a structural basis for memory retention throughout the entire life of an animal. These studies indicate that learning and daily sensory experience leave minute but permanent marks on cortical connections and suggest that lifelong memories are stored in largely stably connected synaptic networks.",
"title": ""
},
{
"docid": "f96098449988c433fe8af20be0c468a5",
"text": "Programmatic assessment is an integral approach to the design of an assessment program with the intent to optimise its learning function, its decision-making function and its curriculum quality-assurance function. Individual methods of assessment, purposefully chosen for their alignment with the curriculum outcomes and their information value for the learner, the teacher and the organisation, are seen as individual data points. The information value of these individual data points is maximised by giving feedback to the learner. There is a decoupling of assessment moment and decision moment. Intermediate and high-stakes decisions are based on multiple data points after a meaningful aggregation of information and supported by rigorous organisational procedures to ensure their dependability. Self-regulation of learning, through analysis of the assessment information and the attainment of the ensuing learning goals, is scaffolded by a mentoring system. Programmatic assessment-for-learning can be applied to any part of the training continuum, provided that the underlying learning conception is constructivist. This paper provides concrete recommendations for implementation of programmatic assessment.",
"title": ""
},
{
"docid": "546296aecaee9963ee7495c9fbf76fd4",
"text": "In this paper, we propose text summarization method that creates text summary by definition of the relevance score of each sentence and extracting sentences from the original documents. While summarization this method takes into account weight of each sentence in the document. The essence of the method suggested is in preliminary identification of every sentence in the document with characteristic vector of words, which appear in the document, and calculation of relevance score for each sentence. The relevance score of sentence is determined through its comparison with all the other sentences in the document and with the document title by cosine measure. Prior to application of this method the scope of features is defined and then the weight of each word in the sentence is calculated with account of those features. The weights of features, influencing relevance of words, are determined using genetic algorithms.",
"title": ""
}
] | scidocsrr |
04836cd980c5022b30d361d29baf4097 | A wearable system that knows who wears it | [
{
"docid": "ed9e22167d3e9e695f67e208b891b698",
"text": "ÐIn k-means clustering, we are given a set of n data points in d-dimensional space R and an integer k and the problem is to determine a set of k points in R, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index TermsÐPattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.",
"title": ""
},
{
"docid": "b7aca26bc09bbc9376fefd1befec2b28",
"text": "Wearable sensor systems have been used in the ubiquitous computing community and elsewhere for applications such as activity and gesture recognition, health and wellness monitoring, and elder care. Although the power consumption of accelerometers has already been highly optimized, this work introduces a novel sensing approach which lowers the power requirement for motion sensing by orders of magnitude. We present an ultra-low-power method for passively sensing body motion using static electric fields by measuring the voltage at any single location on the body. We present the feasibility of using this sensing approach to infer the amount and type of body motion anywhere on the body and demonstrate an ultra-low-power motion detector used to wake up more power-hungry sensors. The sensing hardware consumes only 3.3 μW, and wake-up detection is done using an additional 3.3 μW (6.6 μW total).",
"title": ""
},
{
"docid": "c3525081c0f4eec01069dd4bd5ef12ab",
"text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.",
"title": ""
}
] | [
{
"docid": "19a47559acfc6ee0ebb0c8e224090e28",
"text": "Learning from streams of evolving and unbounded data is an important problem, for example in visual surveillance or internet scale data. For such large and evolving real-world data, exhaustive supervision is impractical, particularly so when the full space of classes is not known in advance therefore joint class discovery (exploration) and boundary learning (exploitation) becomes critical. Active learning has shown promise in jointly optimising exploration-exploitation with minimal human supervision. However, existing active learning methods either rely on heuristic multi-criteria weighting or are limited to batch processing. In this paper, we present a new unified framework for joint exploration-exploitation active learning in streams without any heuristic weighting. Extensive evaluation on classification of various image and surveillance video datasets demonstrates the superiority of our framework over existing methods.",
"title": ""
},
{
"docid": "8d2d3b326c246bde95b360c9dcf6540f",
"text": "A field experiment was carried out at the Shenyang Experimental Station of Ecology (CAS) in order to study the effects of slow-release urea fertilizers high polymer-coated urea (SRU1), SRU1 mixed with dicyandiamide DCD (SRU2), and SRU1 mixed with calcium carbide CaC2 (SRU3) on urease activity, microbial biomass C and N, and nematode communities in an aquic brown soil during the maize growth period. The results demonstrated that the application of slow-release urea fertilizers inhibits soil urease activity and increases the soil NH4 +-N content. Soil available N increment could promote its immobilization by microorganisms. Determination of soil microbial biomass N indicated that a combined application of coated urea and nitrification inhibitors increased the soil active N pool. The population of predators/omnivores indicated that treatment with SRU2 could provide enough soil NH4 +-N to promote maize growth and increased the food resource for the soil fauna compared with the other treatments.",
"title": ""
},
{
"docid": "d337f149d3e52079c56731f4f3d8ea3e",
"text": "Contextual word representations derived from pre-trained bidirectional language models (biLMs) have recently been shown to provide significant improvements to the state of the art for a wide range of NLP tasks. However, many questions remain as to how and why these models are so effective. In this paper, we present a detailed empirical study of how the choice of neural architecture (e.g. LSTM, CNN, or self attention) influences both end task accuracy and qualitative properties of the representations that are learned. We show there is a tradeoff between speed and accuracy, but all architectures learn high quality contextual representations that outperform word embeddings for four challenging NLP tasks. Additionally, all architectures learn representations that vary with network depth, from exclusively morphological based at the word embedding layer through local syntax based in the lower contextual layers to longer range semantics such coreference at the upper layers. Together, these results suggest that unsupervised biLMs, independent of architecture, are learning much more about the structure of language than previously appreciated.",
"title": ""
},
{
"docid": "26befbb36d5d64ff0c075b38cde32d6f",
"text": "This study deals with the problems related to the translation of political texts in the theoretical framework elaborated by the researchers working in the field of translation studies and reflects on the terminological peculiarities of the special language used for this text type . Consideration of the theoretical framework is followed by the analysis of a specific text spoken then written in English and translated into Hungarian and Romanian. The conclusions are intended to highlight the fact that there are no recipes for translating a political speech, because translation is not only a technical process that uses translation procedures and applies transfer operations, but also a matter of understanding cultural, historical and political situations and their significance.",
"title": ""
},
{
"docid": "8492ba0660b06ca35ab3f4e96f3a33c3",
"text": "Young men who have sex with men (YMSM) are increasingly using mobile smartphone applications (“apps”), such as Grindr, to meet sex partners. A probability sample of 195 Grindr-using YMSM in Southern California were administered an anonymous online survey to assess patterns of and motivations for Grindr use in order to inform development and tailoring of smartphone-based HIV prevention for YMSM. The number one reason for using Grindr (29 %) was to meet “hook ups.” Among those participants who used both Grindr and online dating sites, a statistically significantly greater percentage used online dating sites for “hook ups” (42 %) compared to Grindr (30 %). Seventy percent of YMSM expressed a willingness to participate in a smartphone app-based HIV prevention program. Development and testing of smartphone apps for HIV prevention delivery has the potential to engage YMSM in HIV prevention programming, which can be tailored based on use patterns and motivations for use. Los hombres que mantienen relaciones sexuales con hombres (YMSM por las siglas en inglés de Young Men Who Have Sex with Men) están utilizando más y más aplicaciones para teléfonos inteligentes (smartphones), como Grindr, para encontrar parejas sexuales. En el Sur de California, se administró de forma anónima un sondeo en internet a una muestra de probabilidad de 195 YMSM usuarios de Grindr, para evaluar los patrones y motivaciones del uso de Grindr, con el fin de utilizar esta información para el desarrollo y personalización de prevención del VIH entre YMSM con base en teléfonos inteligentes. La principal razón para utilizar Grindr (29 %) es para buscar encuentros sexuales casuales (hook-ups). Entre los participantes que utilizan tanto Grindr como otro sitios de citas online, un mayor porcentaje estadísticamente significativo utilizó los sitios de citas online para encuentros casuales sexuales (42 %) comparado con Grindr (30 %). Un setenta porciento de los YMSM expresó su disposición para participar en programas de prevención del VIH con base en teléfonos inteligentes. El desarrollo y evaluación de aplicaciones para teléfonos inteligentes para el suministro de prevención del VIH tiene el potencial de involucrar a los YMSM en la programación de la prevención del VIH, que puede ser adaptada según los patrones y motivaciones de uso.",
"title": ""
},
{
"docid": "ddb77ec8a722c50c28059d03919fb299",
"text": "Among the smart cities applications, optimizing lottery games is one of the urgent needs to ensure their fairness and transparency. The emerging blockchain technology shows a glimpse of solutions to fairness and transparency issues faced by lottery industries. This paper presents the design of a blockchain-based lottery system for smart cities applications. We adopt the smart contracts of blockchain technology and the cryptograph blockchain model, Hawk [8], to design the blockchain-based lottery system, FairLotto, for future smart cities applications. Fairness, transparency, and privacy of the proposed blockchain-based lottery system are discussed and ensured.",
"title": ""
},
{
"docid": "cfadfcbc3929b5552119a4f8cb211b33",
"text": "The production and dissemination of semantic 3D city models is rapidly increasing benefiting a growing number of use cases. However, their availability in multiple LODs and in the CityGML format is still problematic in practice. This hinders applications and experiments where multi-LOD datasets are required as input, for instance, to determine the performance of different LODs in a spatial analysis. An alternative approach to obtain 3D city models is to generate them with procedural modelling, which is—as we discuss in this paper— well suited as a method to source multi-LOD datasets useful for a number of applications. However, procedural modelling has not yet been employed for this purpose. Therefore, we have developed RANDOM3DCITY, an experimental procedural modelling engine for generating synthetic datasets of buildings and other urban features. The engine is designed to produce models in CityGML and does so in multiple LODs. Besides the generation of multiple geometric LODs, we implement the realisation of multiple levels of spatiosemantic coherence, geometric reference variants, and indoor representations. As a result of their permutations, each building can be generated in 392 different CityGML representations, an unprecedented number of modelling variants of the same feature. The datasets produced by RANDOM3DCITY are suited for several applications, as we show in this paper with documented uses. The developed engine is available under an open-source licence at Github at http://github.com/tudelft3d/Random3Dcity.",
"title": ""
},
{
"docid": "58b4320c2cf52c658275eaa4748dede5",
"text": "Backing-out and heading-out maneuvers in perpendicular or angle parking lots are one of the most dangerous maneuvers, especially in cases where side parked cars block the driver view of the potential traffic flow. In this paper, a new vision-based Advanced Driver Assistance System (ADAS) is proposed to automatically warn the driver in such scenarios. A monocular grayscale camera was installed at the back-right side of a vehicle. A Finite State Machine (FSM) defined according to three CAN Bus variables and a manual signal provided by the user is used to handle the activation/deactivation of the detection module. The proposed oncoming traffic detection module computes spatio-temporal images from a set of predefined scan-lines which are related to the position of the road. A novel spatio-temporal motion descriptor is proposed (STHOL) accounting for the number of lines, their orientation and length of the spatio-temporal images. Some parameters of the proposed descriptor are adapted for nighttime conditions. A Bayesian framework is then used to trigger the warning signal using multivariate normal density functions. Experiments are conducted on image data captured from a vehicle parked at different location of an urban environment, including both daytime and nighttime lighting conditions. We demonstrate that the proposed approach provides robust results maintaining processing rates close to real time.",
"title": ""
},
{
"docid": "9a2b499cf1ed10403a55f2557c00dedf",
"text": "Polar codes are a recently discovered family of capacity-achieving codes that are seen as a major breakthrough in coding theory. Motivated by the recent rapid progress in the theory of polar codes, we propose a semi-parallel architecture for the implementation of successive cancellation decoding. We take advantage of the recursive structure of polar codes to make efficient use of processing resources. The derived architecture has a very low processing complexity while the memory complexity remains similar to that of previous architectures. This drastic reduction in processing complexity allows very large polar code decoders to be implemented in hardware. An N=217 polar code successive cancellation decoder is implemented in an FPGA. We also report synthesis results for ASIC.",
"title": ""
},
{
"docid": "9def5ba1b4b262b8eb71123023c00e36",
"text": "OBJECTIVE\nThe primary objective of this study was to compare clinically and radiographically the efficacy of autologous platelet rich fibrin (PRF) and autogenous bone graft (ABG) obtained using bone scrapper in the treatment of intrabony periodontal defects.\n\n\nMATERIALS AND METHODS\nThirty-eight intrabony defects (IBDs) were treated with either open flap debridement (OFD) with PRF or OFD with ABG. Clinical parameters were recorded at baseline and 6 months postoperatively. The defect-fill and defect resolution at baseline and 6 months were calculated radiographically (intraoral periapical radiographs [IOPA] and orthopantomogram [OPG]).\n\n\nRESULTS\nSignificant probing pocket depth (PPD) reduction, clinical attachment level (CAL) gain, defect fill and defect resolution at both PRF and ABG treated sites with OFD was observed. However, inter-group comparison was non-significant (P > 0.05). The bivariate correlation results revealed that any of the two radiographic techniques (IOPA and OPG) can be used for analysis of the regenerative therapy in IBDs.\n\n\nCONCLUSION\nThe use of either PRF or ABG were effective in the treatment of three wall IBDs with an uneventful healing of the sites.",
"title": ""
},
{
"docid": "b4910e355c44077eb27c62a0c8237204",
"text": "Our proof is built on Perron-Frobenius theorem, a seminal work in matrix theory (Meyer 2000). By Perron-Frobenius theorem, the power iteration algorithm for predicting top K persuaders converges to a unique C and this convergence is independent of the initialization of C if the persuasion probability matrix P is nonnegative, irreducible, and aperiodic (Heath 2002). We first show that P is nonnegative. Each component of the right hand side of Equation (10) is positive except nD $ 0; thus, persuasion probability pij estimated with Equation (10) is positive, for all i, j = 1, 2, ..., n and i ... j. Because all diagonal elements of P are equal to zero and all non-diagonal elements of P are positive persuasion probabilities, P is nonnegative.",
"title": ""
},
{
"docid": "d972e23eb49c15488d2159a9137efb07",
"text": "One of the main challenges of the solid-state transformer (SST) lies in the implementation of the dc–dc stage. In this paper, a quadruple-active-bridge (QAB) dc–dc converter is investigated to be used as a basic module of a modular three-stage SST. Besides the feature of high power density and soft-switching operation (also found in others converters), the QAB converter provides a solution with reduced number of high-frequency transformers, since more bridges are connected to the same multiwinding transformer. To ensure soft switching for the entire operation range of the QAB converter, the triangular current-mode modulation strategy, previously adopted for the dual-active-bridge converter, is extended to the QAB converter. The theoretical analysis is developed considering balanced (equal power processed by the medium-voltage (MV) cells) and unbalanced (unequal power processed by the MV cells) conditions. In order to validate the theoretical analysis developed in the paper, a 2-kW prototype is built and experimented.",
"title": ""
},
{
"docid": "d4cd0dabcf4caa22ad92fab40844c786",
"text": "NA",
"title": ""
},
{
"docid": "4a0756bffc50e11a0bcc2ab88502e1a2",
"text": "The interest in attribute weighting for soft subspace clustering have been increasing in the last years. However, most of the proposed approaches are designed for dealing only with numeric data. In this paper, our focus is on soft subspace clustering for categorical data. In soft subspace clustering, the attribute weighting approach plays a crucial role. Due to this, we propose an entropy-based approach for measuring the relevance of each categorical attribute in each cluster. Besides that, we propose the EBK-modes (entropy-based k-modes), an extension of the basic k-modes that uses our approach for attribute weighting. We performed experiments on five real-world datasets, comparing the performance of our algorithms with four state-of-the-art algorithms, using three well-known evaluation metrics: accuracy, f-measure and adjusted Rand index. According to the experiments, the EBK-modes outperforms the algorithms that were considered in the evaluation, regarding the considered metrics.",
"title": ""
},
{
"docid": "3be38e070678e358e23cb81432033062",
"text": "W ireless integrated network sensors (WINS) provide distributed network and Internet access to sensors, controls, and processors deeply embedded in equipment, facilities, and the environment. The WINS network represents a new monitoring and control capability for applications in such industries as transportation, manufacturing, health care, environmental oversight, and safety and security. WINS combine microsensor technology and low-power signal processing, computation, and low-cost wireless networking in a compact system. Recent advances in integrated circuit technology have enabled construction of far more capable yet inexpensive sensors, radios, and processors, allowing mass production of sophisticated systems linking the physical world to digital data networks [2–5]. Scales range from local to global for applications in medicine, security, factory automation, environmental monitoring, and condition-based maintenance. Compact geometry and low cost allow WINS to be embedded and distributed at a fraction of the cost of conventional wireline sensor and actuator systems. WINS opportunities depend on development of a scalable, low-cost, sensor-network architecture. Such applications require delivery of sensor information to the user at a low bit rate through low-power transceivers. Continuous sensor signal processing enables the constant monitoring of events in an environment in which short message packets would suffice. Future applications of distributed embedded processors and sensors will require vast numbers of devices. Conventional methods of sensor networking represent an impractical demand on cable installation and network bandwidth. Processing at the source would drastically reduce the financial, computational, and management burden on communication system",
"title": ""
},
{
"docid": "1dc07b02a70821fdbaa9911755d1e4b0",
"text": "The AROMA project is exploring the kind of awareness that people effortless are able to maintain about other beings who are located physically close. We are designing technology that attempts to mediate a similar kind of awareness among people who are geographically dispersed but want to stay better in touch. AROMA technology can be thought of as a stand-alone communication device or -more likely -an augmentation of existing technologies such as the telephone or full-blown media spaces. Our approach differs from other recent designs for awareness (a) by choosing pure abstract representations on the display site, (b) by possibly remapping the signal across media between capture and display, and, finally, (c) by explicitly extending the application domain to include more than the working life, to embrace social interaction in general. We are building a series of prototypes to learn if abstract representation of activity data does indeed convey a sense of remote presence and does so in a sutTiciently subdued manner to allow the user to concentrate on his or her main activity. We have done some initial testing of the technical feasibility of our designs. What still remains is an extensive effort of designing a symbolic language of remote presence, done in parallel with studies of how people will connect and communicate through such a language as they live with the AROMA system.",
"title": ""
},
{
"docid": "ae0ef7702fca274bd4ee8a2a30479275",
"text": "This paper describes the drawbacks related to the iron in the classical electrodynamic loudspeaker structure. Then it describes loudspeaker motors without any iron, which are only made of permanent magnets. They are associated to a piston like moving part which glides on ferrofluid seals. Furthermore, the coil is short and the suspension is wholly pneumatic. Several types of magnet assemblies are described and discussed. Indeed, their properties regarding the force factor and the ferrofluid seal shape depend on their structure. Eventually, the capacity of the seals is evaluated.",
"title": ""
},
{
"docid": "89b54aa0009598a4cb159b196f3749ee",
"text": "Several methods and techniques are potentially useful for the preparation of microparticles in the field of controlled drug delivery. The type and the size of the microparticles, the entrapment, release characteristics and stability of drug in microparticles in the formulations are dependent on the method used. One of the most common methods of preparing microparticles is the single emulsion technique. Poorly soluble, lipophilic drugs are successfully retained within the microparticles prepared by this method. However, the encapsulation of highly water soluble compounds including protein and peptides presents formidable challenges to the researchers. The successful encapsulation of such compounds requires high drug loading in the microparticles, prevention of protein and peptide degradation by the encapsulation method involved and predictable release, both rate and extent, of the drug compound from the microparticles. The above mentioned problems can be overcome by using the double emulsion technique, alternatively called as multiple emulsion technique. Aiming to achieve this various techniques have been examined to prepare stable formulations utilizing w/o/w, s/o/w, w/o/o, and s/o/o type double emulsion methods. This article reviews the current state of the art in double emulsion based technologies for the preparation of microparticles including the investigation of various classes of substances that are pharmaceutically and biopharmaceutically active.",
"title": ""
},
{
"docid": "6fd9793e9f44b726028f8c879157f1f7",
"text": "Modeling, simulation and implementation of Voltage Source Inverter (VSI) fed closed loop control of 3-phase induction motor drive is presented in this paper. A mathematical model of the drive system is developed and is used for the simulation study. Simulation is carried out using Scilab/Scicos, which is free and open source software. The above said drive system is implemented in laboratory using a PC and an add-on card. In this study the air gap flux of the machine is kept constant by maintaining Volt/Hertz (v/f) ratio constant. The experimental transient responses of the drive system obtained for change in speed under no load as well as under load conditions are presented.",
"title": ""
},
{
"docid": "cb19facb61dae863c566f5fafd9f8b20",
"text": "This paper describes our solution for the 2 YouTube-8M video understanding challenge organized by Google AI. Unlike the video recognition benchmarks, such as Kinetics and Moments, the YouTube8M challenge provides pre-extracted visual and audio features instead of raw videos. In this challenge, the submitted model is restricted to 1GB, which encourages participants focus on constructing one powerful single model rather than incorporating of the results from a bunch of models. Our system fuses six different sub-models into one single computational graph, which are categorized into three families. More specifically, the most effective family is the model with non-local operations following the NetVLAD encoding. The other two family models are Soft-BoF and GRU, respectively. In order to further boost single models performance, the model parameters of different checkpoints are averaged. Experimental results demonstrate that our proposed system can effectively perform the video classification task, achieving 0.88763 on the public test set and 0.88704 on the private set in terms of GAP@20, respectively. We finally ranked at the fourth place in the YouTube-8M video understanding challenge.",
"title": ""
}
] | scidocsrr |
7e7272379f6c262e43cf408524551964 | Steady-State Mean-Square Error Analysis for Adaptive Filtering under the Maximum Correntropy Criterion | [
{
"docid": "7a7e0363ca4ad5c83a571449f53834ca",
"text": "Principal component analysis (PCA) minimizes the mean square error (MSE) and is sensitive to outliers. In this paper, we present a new rotational-invariant PCA based on maximum correntropy criterion (MCC). A half-quadratic optimization algorithm is adopted to compute the correntropy objective. At each iteration, the complex optimization problem is reduced to a quadratic problem that can be efficiently solved by a standard optimization method. The proposed method exhibits the following benefits: 1) it is robust to outliers through the mechanism of MCC which can be more theoretically solid than a heuristic rule based on MSE; 2) it requires no assumption about the zero-mean of data for processing and can estimate data mean during optimization; and 3) its optimal solution consists of principal eigenvectors of a robust covariance matrix corresponding to the largest eigenvalues. In addition, kernel techniques are further introduced in the proposed method to deal with nonlinearly distributed data. Numerical results demonstrate that the proposed method can outperform robust rotational-invariant PCAs based on L1 norm when outliers occur.",
"title": ""
}
] | [
{
"docid": "a14ac26274448e0a7ecafdecae4830f9",
"text": "Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to long-term memory consolidation and retrieval. Consequently, lifelong learning capabilities are crucial for computational systems and autonomous agents interacting in the real world and processing continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference. This limitation represents a major drawback for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic forgetting. Although significant advances have been made in domain-specific learning with neural networks, extensive research efforts are required for the development of robust lifelong learning on autonomous agents and robots. We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration.",
"title": ""
},
{
"docid": "dc8ffc5fd84b3af4cc88d75f7bc88f77",
"text": "Digital crimes is big problem due to large numbers of data access and insufficient attack analysis techniques so there is the need for improvements in existing digital forensics techniques. With growing size of storage capacity these digital forensic investigations are getting more difficult. Visualization allows for displaying large amounts of data at once. Integrated visualization of data distribution bars and rules, visualization of behaviour and comprehensive analysis, maps allow user to analyze different rules and data at different level, with any kind of anomaly in data. Data mining techniques helps to improve the process of visualization. These papers give comprehensive review on various visualization techniques with various anomaly detection techniques.",
"title": ""
},
{
"docid": "315af705427ee4363fe4614dc72eb7a7",
"text": "The 2007 Nobel Prize in Physics can be understood as a global recognition to the rapid development of the Giant Magnetoresistance (GMR), from both the physics and engineering points of view. Behind the utilization of GMR structures as read heads for massive storage magnetic hard disks, important applications as solid state magnetic sensors have emerged. Low cost, compatibility with standard CMOS technologies and high sensitivity are common advantages of these sensors. This way, they have been successfully applied in a lot different environments. In this work, we are trying to collect the Spanish contributions to the progress of the research related to the GMR based sensors covering, among other subjects, the applications, the sensor design, the modelling and the electronic interfaces, focusing on electrical current sensing applications.",
"title": ""
},
{
"docid": "5006770c9f7a6fb171a060ad3d444095",
"text": "We developed a 56-GHz-bandwidth 2.0-Vppd linear MZM driver in 65-nm CMOS. It consumes only 180 mW for driving a 50-Ω impedance. We demonstrated the feasibility of drivers with less than 1 W for dual-polarization IQ modulation in 400-Gb/s systems.",
"title": ""
},
{
"docid": "7d84e574d2a6349a9fc2669fdbe08bba",
"text": "Domain-specific languages (DSLs) provide high-level and domain-specific abstractions that allow expressive and concise algorithm descriptions. Since the description in a DSL hides also the properties of the target hardware, DSLs are a promising path to target different parallel and heterogeneous hardware from the same algorithm description. In theory, the DSL description can capture all characteristics of the algorithm that are required to generate highly efficient parallel implementations. However, most frameworks do not make use of this knowledge and the performance cannot reach that of optimized library implementations. In this article, we present the HIPAcc framework, a DSL and source-to-source compiler for image processing. We show that domain knowledge can be captured in the language and that this knowledge enables us to generate tailored implementations for a given target architecture. Back ends for CUDA, OpenCL, and Renderscript allow us to target discrete graphics processing units (GPUs) as well as mobile, embedded GPUs. Exploiting the captured domain knowledge, we can generate specialized algorithm variants that reach the maximal achievable performance due to the peak memory bandwidth. These implementations outperform state-of-the-art domain-specific languages and libraries significantly.",
"title": ""
},
{
"docid": "6838cf1310f0321cd524bb1120f35057",
"text": "One of the most compelling visions of future robots is that of the robot butler. An entity dedicated to fulfilling your every need. This obviously has its benefits, but there could be a flipside to this vision. To fulfill the needs of its users, it must first be aware of them, and so it could potentially amass a huge amount of personal data regarding its user, data which may or may not be safe from accidental or intentional disclosure to a third party. How may prospective owners of a personal robot feel about the data that might be collected about them? In order to investigate this issue experimentally, we conducted an exploratory study where 12 participants were exposed to an HRI scenario in which disclosure of personal information became an issue. Despite the small sample size interesting results emerged from this study, indicating how future owners of personal robots feel regarding what the robot will know about them, and what safeguards they believe should be in place to protect owners from unwanted disclosure of private information.",
"title": ""
},
{
"docid": "8f978ac84eea44a593e9f18a4314342c",
"text": "There is clear evidence that interpersonal social support impacts stress levels and, in turn, degree of physical illness and psychological well-being. This study examines whether mediated social networks serve the same palliative function. A survey of 401 undergraduate Facebook users revealed that, as predicted, number of Facebook friends associated with stronger perceptions of social support, which in turn associated with reduced stress, and in turn less physical illness and greater well-being. This effect was minimized when interpersonal network size was taken into consideration. However, for those who have experienced many objective life stressors, the number of Facebook friends emerged as the stronger predictor of perceived social support. The \"more-friends-the-better\" heuristic is proposed as the most likely explanation for these findings.",
"title": ""
},
{
"docid": "4dc302fc2001dda1d24d830bb43f9cfa",
"text": "Discussions of qualitative research interviews have centered on promoting an ideal interactional style and articulating the researcher behaviors by which this might be realized. Although examining what researchers do in an interview continues to be valuable, this focus obscures the reflexive engagement of all participants in the exchange and the potential for a variety of possible styles of interacting. The author presents her analyses of participants’ accounts of past research interviews and explores the implications of this for researchers’ orientation to qualitative research inter-",
"title": ""
},
{
"docid": "2031114bd1dc1a3ca94bdd8a13ad3a86",
"text": "Crude extracts of curcuminoids and essential oil of Curcuma longa varieties Kasur, Faisalabad and Bannu were studied for their antibacterial activity against 4 bacterial strains viz., Bacillus subtilis, Bacillus macerans, Bacillus licheniformis and Azotobacter using agar well diffusion method. Solvents used to determine antibacterial activity were ethanol and methanol. Ethanol was used for the extraction of curcuminoids. Essential oil was extracted by hydrodistillation and diluted in methanol by serial dilution method. Both Curcuminoids and oil showed zone of inhibition against all tested strains of bacteria. Among all the three turmeric varieties, Kasur variety had the most inhibitory effect on the growth of all bacterial strains tested as compared to Faisalabad and Bannu varieties. Among all the bacterial strains B. subtilis was the most sensitive to turmeric extracts of curcuminoids and oil. The MIC value for different strains and varieties ranged from 3.0 to 20.6 mm in diameter.",
"title": ""
},
{
"docid": "1b802879e554140e677020e379b866c1",
"text": "This study investigated vertical versus shared leadership as predictors of the effectiveness of 71 change management teams. Vertical leadership stems from an appointed or formal leader of a team, whereas shared leadership (C. L. Pearce, 1997; C. L. Pearce & J. A. Conger, in press; C. L. Pearce & H. P. Sims, 2000) is a group process in which leadership is distributed among, and stems from, team members. Team effectiveness was measured approximately 6 months after the assessment of leadership and was also measured from the viewpoints of managers, internal customers, and team members. Using multiple regression, the authors found both vertical and shared leadership to be significantly related to team effectiveness ( p .05), although shared leadership appears to be a more useful predictor of team effectiveness than vertical leadership.",
"title": ""
},
{
"docid": "ae70b9ef5eeb6316b5b022662191cc4f",
"text": "The total harmonic distortion (THD) is an important performance criterion for almost any communication device. In most cases, the THD of a periodic signal, which has been processed in some way, is either measured directly or roughly estimated numerically, while analytic methods are employed only in a limited number of simple cases. However, the knowledge of the theoretical THD may be quite important for the conception and design of the communication equipment (e.g. transmitters, power amplifiers). The aim of this paper is to present a general theoretic approach, which permits to obtain an analytic closed-form expression for the THD. It is also shown that in some cases, an approximate analytic method, having good precision and being less sophisticated, may be developed. Finally, the mathematical technique, on which the proposed method is based, is described in the appendix.",
"title": ""
},
{
"docid": "4463a242a313f82527c4bdfff3d3c13c",
"text": "This paper examines the impact of capital structure on financial performance of Nigerian firms using a sample of thirty non-financial firms listed on the Nigerian Stock Exchange during the seven year period, 2004 – 2010. Panel data for the selected firms were generated and analyzed using ordinary least squares (OLS) as a method of estimation. The result shows that a firm’s capita structure surrogated by Debt Ratio, Dr has a significantly negative impact on the firm’s financial measures (Return on Asset, ROA, and Return on Equity, ROE). The study of these findings, indicate consistency with prior empirical studies and provide evidence in support of Agency cost theory.",
"title": ""
},
{
"docid": "a90dd405d9bd2ed912cacee098c0f9db",
"text": "Many telecommunication companies today have actively started to transform the way they do business, going beyond communication infrastructure providers are repositioning themselves as data-driven service providers to create new revenue streams. In this paper, we present a novel industrial application where a scalable Big data approach combined with deep learning is used successfully to classify massive mobile web log data, to get new aggregated insights on customer web behaviors that could be applied to various industry verticals.",
"title": ""
},
{
"docid": "9bb88b82789d43e48b1e8a10701d39bd",
"text": "Building intelligent systems that are capable of extracting high-level representations from high-dimensional sensory data lies at the core of solving many artificial intelligence–related tasks, including object recognition, speech perception, and language understanding. Theoretical and biological arguments strongly suggest that building such systems requires models with deep architectures that involve many layers of nonlinear processing. In this article, we review several popular deep learning models, including deep belief networks and deep Boltzmann machines. We show that (a) these deep generative models, which contain many layers of latent variables and millions of parameters, can be learned efficiently, and (b) the learned high-level feature representations can be successfully applied in many application domains, including visual object recognition, information retrieval, classification, and regression tasks.",
"title": ""
},
{
"docid": "584e84ac1a061f1bf7945ab4cf54d950",
"text": "Paul White, PhD, MD§ Acupuncture has been used in China and other Asian countries for the past 3000 yr. Recently, this technique has been gaining increased popularity among physicians and patients in the United States. Even though acupuncture-induced analgesia is being used in many pain management programs in the United States, the mechanism of action remains unclear. Studies suggest that acupuncture and related techniques trigger a sequence of events that include the release of neurotransmitters, endogenous opioid-like substances, and activation of c-fos within the central nervous system. Recent developments in central nervous system imaging techniques allow scientists to better evaluate the chain of events that occur after acupuncture-induced stimulation. In this review article we examine current biophysiological and imaging studies that explore the mechanisms of acupuncture analgesia.",
"title": ""
},
{
"docid": "fce8f5ee730e2bbb63f4d1ef003ce830",
"text": "In this paper, we introduce an approach for constructing uncertainty sets for robust optimization using new deviation measures for random variables termed the forward and backward deviations. These deviation measures capture distributional asymmetry and lead to better approximations of chance constraints. Using a linear decision rule, we also propose a tractable approximation approach for solving a class of multistage chance-constrained stochastic linear optimization problems. An attractive feature of the framework is that we convert the original model into a second-order cone program, which is computationally tractable both in theory and in practice. We demonstrate the framework through an application of a project management problem with uncertain activity completion time.",
"title": ""
},
{
"docid": "3573fb077b151af3c83f7cd6a421dc9c",
"text": "Let G = (V, E) be a directed graph with a distinguished source vertex s. The single-source path expression problem is to find, for each vertex v, a regular expression P(s, v) which represents the set of all paths in G from s to v A solution to this problem can be used to solve shortest path problems, solve sparse systems of linear equations, and carry out global flow analysis. A method is described for computing path expressions by dwidmg G mto components, computing path expressions on the components by Gaussian elimination, and combining the solutions This method requires O(ma(m, n)) time on a reducible flow graph, where n Is the number of vertices m G, m is the number of edges in G, and a is a functional inverse of Ackermann's function The method makes use of an algonthm for evaluating functions defined on paths in trees. A smapllfied version of the algorithm, which runs in O(m log n) time on reducible flow graphs, is quite easy to implement and efficient m practice",
"title": ""
},
{
"docid": "b8e921733ef4ab77abcb48b0a1f04dbb",
"text": "This paper demonstrates the efficiency of kinematic redundancy used to increase the useable workspace of planar parallel mechanisms. As examples, we propose kinematically redundant schemes of the well known planar 3RRR and 3RPR mechanisms denoted as 3(P)RRR and 3(P)RPR. In both cases, a prismatic actuator is added allowing a usually fixed base joint to move linearly. Hence, reconfigurations can be performed selectively in order to avoid singularities and to affect the mechanisms' performance directly. Using an interval-based method the useable workspace, i.e. the singularity-free workspace guaranteeing a desired performance, is obtained. Due to the interval analysis any uncertainties can be implemented within the algorithm leading to practical and realistic results. It is shown that due to the additional prismatic actuator the useable workspace increases significantly. Several analysis examples clarify the efficiency of the proposed kinematically redundant mechanisms.",
"title": ""
},
{
"docid": "ba10de4e7613307d08b46cf001cbeb3b",
"text": "This paper builds on a general typology of textual communication (Aarseth 1997) and tries to establish a model for classifying the genre of “games in virtual environments”— that is, games that take place in some kind of simulated world, as opposed to purely abstract games like poker or blackjack. The aim of the model is to identify the main differences between games in a rigorous, analytical way, in order to come up with genres that are more specific and less ad hoc than those used by the industry and the popular gaming press. The model consists of a number of basic “dimensions”, such as Space, Perspective, Time, Teleology, etc, each of which has several variate values, (e.g. Teleology: finite (Half-Life) or infinite (EverQuest. Ideally, the multivariate model can be used to predict games that do not yet exist, but could be invented by combining the existing elements in new ways.",
"title": ""
},
{
"docid": "8188bcd3b95952dbf2818cad6fc2c36c",
"text": "Semi-supervised learning is by no means an unfamiliar concept to natural language processing researchers. Labeled data has been used to improve unsupervised parameter estimation procedures such as the EM algorithm and its variants since the beginning of the statistical revolution in NLP (e.g., Pereira and Schabes (1992)). Unlabeled data has also been used to improve supervised learning procedures, the most notable examples being the successful applications of self-training and co-training to word sense disambiguation (Yarowsky 1995) and named entity classification (Collins and Singer 1999). Despite its increasing importance, semi-supervised learning is not a topic that is typically discussed in introductory machine learning texts (e.g., Mitchell (1997), Alpaydin (2004)) or NLP texts (e.g., Manning and Schütze (1999), Jurafsky andMartin (2000)). Consequently, to learn about semi-supervised learning research, one has to consult the machine-learning literature. This can be a daunting task for NLP researchers who have little background in machine learning. Steven Abney’s book Semisupervised Learning for Computational Linguistics is targeted precisely at such researchers, aiming to provide them with a “broad and accessible presentation” of topics in semi-supervised learning. According to the preamble, the reader is assumed to have taken only an introductory course in NLP “that include statistical methods — concretely the material contained in Jurafsky andMartin (2000) andManning and Schütze (1999).”Nonetheless, I agreewith the author that any NLP researcher who has a solid background in machine learning is ready to “tackle the primary literature on semisupervised learning, and will probably not find this book particularly useful” (page 11). As the author promises, the book is self-contained and quite accessible to those who have little background in machine learning. In particular, of the 12 chapters in the book, three are devoted to preparatory material, including: a brief introduction to machine learning, basic unconstrained and constrained optimization techniques (e.g., gradient descent and the method of Lagrange multipliers), and relevant linear-algebra concepts (e.g., eigenvalues, eigenvectors, matrix and vector norms, diagonalization). The remaining chapters focus roughly on six types of semi-supervised learning methods:",
"title": ""
}
] | scidocsrr |
80e9309b3e9bb8f29e81d26f3cb8606b | The Incredible ELK | [
{
"docid": "87af3cf22afaf5903a521e653f693e6c",
"text": "Finding the justifications of an entailment (that is, all the minimal set of axioms sufficient to produce an entailment) has emerged as a key inference service for the Web Ontology Language (OWL). Justifications are essential for debugging unsatisfiable classes and contradictions. The availability of justifications as explanations of entailments improves the understandability of large and complex ontologies. In this paper, we present several algorithms for computing all the justifications of an entailment in an OWL-DL Ontology and show, by an empirical evaluation, that even a reasoner independent approach works well on real ontologies.",
"title": ""
},
{
"docid": "9814af3a2c855717806ad7496d21f40e",
"text": "This chapter gives an extended introduction to the lightweight profiles OWL EL, OWL QL, and OWL RL of the Web Ontology Language OWL. The three ontology language standards are sublanguages of OWL DL that are restricted in ways that significantly simplify ontological reasoning. Compared to OWL DL as a whole, reasoning algorithms for the OWL profiles show higher performance, are easier to implement, and can scale to larger amounts of data. Since ontological reasoning is of great importance for designing and deploying OWL ontologies, the profiles are highly attractive for many applications. These advantages come at a price: various modelling features of OWL are not available in all or some of the OWL profiles. Moreover, the profiles are mutually incomparable in the sense that each of them offers a combination of features that is available in none of the others. This chapter provides an overview of these differences and explains why some of them are essential to retain the desired properties. To this end, we recall the relationship between OWL and description logics (DLs), and show how each of the profiles is typically treated in reasoning algorithms.",
"title": ""
}
] | [
{
"docid": "69a11f89a92051631e1c07f2af475843",
"text": "Animal-assisted therapy (AAT) has been practiced for many years and there is now increasing interest in demonstrating its efficacy through research. To date, no known quantitative review of AAT studies has been published; our study sought to fill this gap. We conducted a comprehensive search of articles reporting on AAT in which we reviewed 250 studies, 49 of which met our inclusion criteria and were submitted to meta-analytic procedures. Overall, AAT was associated with moderate effect sizes in improving outcomes in four areas: Autism-spectrum symptoms, medical difficulties, behavioral problems, and emotional well-being. Contrary to expectations, characteristics of participants and studies did not produce differential outcomes. AAT shows promise as an additive to established interventions and future research should investigate the conditions under which AAT can be most helpful.",
"title": ""
},
{
"docid": "115fb4dcd7d5a1240691e430cd107dce",
"text": "Human motion capture data, which are used to animate animation characters, have been widely used in many areas. To satisfy the high-precision requirement, human motion data are captured with a high frequency (120 frames/s) by a high-precision capture system. However, the high frequency and nonlinear structure make the storage, retrieval, and browsing of motion data challenging problems, which can be solved by keyframe extraction. Current keyframe extraction methods do not properly model two important characteristics of motion data, i.e., sparseness and Riemannian manifold structure. Therefore, we propose a new model called joint kernel sparse representation (SR), which is in marked contrast to all current keyframe extraction methods for motion data and can simultaneously model the sparseness and the Riemannian manifold structure. The proposed model completes the SR in a kernel-induced space with a geodesic exponential kernel, whereas the traditional SR cannot model the nonlinear structure of motion data in the Euclidean space. Meanwhile, because of several important modifications to traditional SR, our model can also exploit the relations between joints and solve two problems, i.e., the unreasonable distribution and redundancy of extracted keyframes, which current methods do not solve. Extensive experiments demonstrate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "b7dcd24f098965ff757b7ce5f183662b",
"text": "We give an overview of a complex systems approach to large blackouts of electric power transmission systems caused by cascading failure. Instead of looking at the details of particular blackouts, we study the statistics and dynamics of series of blackouts with approximate global models. Blackout data from several countries suggest that the frequency of large blackouts is governed by a power law. The power law makes the risk of large blackouts consequential and is consistent with the power system being a complex system designed and operated near a critical point. Power system overall loading or stress relative to operating limits is a key factor affecting the risk of cascading failure. Power system blackout models and abstract models of cascading failure show critical points with power law behavior as load is increased. To explain why the power system is operated near these critical points and inspired by concepts from self-organized criticality, we suggest that power system operating margins evolve slowly to near a critical point and confirm this idea using a power system model. The slow evolution of the power system is driven by a steady increase in electric loading, economic pressures to maximize the use of the grid, and the engineering responses to blackouts that upgrade the system. Mitigation of blackout risk should account for dynamical effects in complex self-organized critical systems. For example, some methods of suppressing small blackouts could ultimately increase the risk of large blackouts.",
"title": ""
},
{
"docid": "4cbf8dc762813225048edc555a28a0c4",
"text": "The Semantic Web and Linked Data gained traction in the last years. However, the majority of information still is contained in unstructured documents. This can also not be expected to change, since text, images and videos are the natural way how humans interact with information. Semantic structuring on the other hand enables the (semi-)automatic integration, repurposing, rearrangement of information. NLP technologies and formalisms for the integrated representation of unstructured and semantic content (such as RDFa and Microdata) aim at bridging this semantic gap. However, in order for humans to truly benefit from this integration, we need ways to author, visualize and explore unstructured and semantically enriched content in an integrated manner. In this paper, we present the WYSIWYM (What You See is What You Mean) concept, which addresses this issue and formalizes the binding between semantic representation models and UI elements for authoring, visualizing and exploration. With RDFaCE and Pharmer we present and evaluate two complementary showcases implementing the WYSIWYM concept for different application domains.",
"title": ""
},
{
"docid": "d25a3d1a921d78c4e447c8e010647351",
"text": "In the TREC 2005 Spam Evaluation Track, a number of popular spam filters – all owing their heritage to Graham’s A Plan for Spam – did quite well. Machine learning techniques reported elsewhere to perform well were hardly represented in the participating filters, and not represented at all in the better results. A non-traditional technique Prediction by Partial Matching (PPM) – performed exceptionally well, at or near the top of every test. Are the TREC results an anomaly? Is PPM really the best method for spam filtering? How are these results to be reconciled with others showing that methods like Support Vector Machines (SVM) are superior? We address these issues by testing implementations of five different classification methods on the TREC public corpus using the online evaluation methodology introduced in TREC. These results are complemented with cross validation experiments, which facilitate a comparison of the methods considered in the study under different evaluation schemes, and also give insight into the nature and utility of the evaluation regimens themselves. For comparison with previously published results, we also conducted cross validation experiments on the Ling-Spam and PU1 datasets. These tests reveal substantial differences attributable to different test assumptions, in particular batch vs. on-line training and testing, the order of classification, and the method of tokenization. Notwithstanding these differences, the methods that perform well at TREC also perform well using established test methods and corpora. Two previously untested methods – one based on Dynamic Markov Compression and one using logistic regression – compare favorably with competing approaches.",
"title": ""
},
{
"docid": "1f52a93eff0c020564acc986b2fef0e7",
"text": "The performance of a predictive model is overestimated when simply determined on the sample of subjects that was used to construct the model. Several internal validation methods are available that aim to provide a more accurate estimate of model performance in new subjects. We evaluated several variants of split-sample, cross-validation and bootstrapping methods with a logistic regression model that included eight predictors for 30-day mortality after an acute myocardial infarction. Random samples with a size between n = 572 and n = 9165 were drawn from a large data set (GUSTO-I; n = 40,830; 2851 deaths) to reflect modeling in data sets with between 5 and 80 events per variable. Independent performance was determined on the remaining subjects. Performance measures included discriminative ability, calibration and overall accuracy. We found that split-sample analyses gave overly pessimistic estimates of performance, with large variability. Cross-validation on 10% of the sample had low bias and low variability, but was not suitable for all performance measures. Internal validity could best be estimated with bootstrapping, which provided stable estimates with low bias. We conclude that split-sample validation is inefficient, and recommend bootstrapping for estimation of internal validity of a predictive logistic regression model.",
"title": ""
},
{
"docid": "946517ff7728e321804b36c43e3a0da2",
"text": "We are creating an environment for investigating the role of advanced AI in interactive, story-based computer games. This environment is based on the Unreal Tournament (UT) game engine and the Soar AI engine. Unreal provides a 3D virtual environment, while Soar provides a flexible architecture for developing complex AI characters. This paper describes our progress to date, starting with our game, Haunt 2, which is designed so that complex AI characters will be critical to the success (or failure) of the game. It addresses design issues with constructing a plot for an interactive storytelling environment, creating synthetic characters for that environment, and using a story director agent to tell the story with those characters.",
"title": ""
},
{
"docid": "cf9fe52efd734c536d0a7daaf59a9bcd",
"text": "Image-based sequence recognition has been a long-standing research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most of the existing algorithms whose components are separately trained and tuned. (2) It naturally handles sequences in arbitrary lengths, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performances in both lexicon-free and lexicon-based scene text recognition tasks. (4) It generates an effective yet much smaller model, which is more practical for real-world application scenarios. The experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior arts. Moreover, the proposed algorithm performs well in the task of image-based music score recognition, which evidently verifies the generality of it.",
"title": ""
},
{
"docid": "dd0f335262aab9aa5adb0ad7d25b80bf",
"text": "We present a framework for adaptive news access, based on machine learning techniques specifically designed for this task. First, we focus on the system's general functionality and system architecture. We then describe the interface and design of two deployed news agents that are part of the described architecture. While the first agent provides personalized news through a web-based interface, the second system is geared towards wireless information devices such as PDAs (personal digital assistants) and cell phones. Based on implicit and explicit user feedback, our agents use a machine learning algorithm to induce individual user models. Motivated by general shortcomings of other user modeling systems for Information Retrieval applications, as well as the specific requirements of news classification, we propose the induction of hybrid user models that consist of separate models for short-term and long-term interests. Furthermore, we illustrate how the described algorithm can be used to address an important issue that has thus far received little attention in the Information Retrieval community: a user's information need changes as a direct result of interaction with information. We empirically evaluate the system's performance based on data collected from regular system users. The goal of the evaluation is not only to understand the performance contributions of the algorithm's individual components, but also to assess the overall utility of the proposed user modeling techniques from a user perspective. Our results provide empirical evidence for the utility of the hybrid user model, and suggest that effective personalization can be achieved without requiring any extra effort from the user.",
"title": ""
},
{
"docid": "5546f93f4c10681edb0fdfe3bf52809c",
"text": "The current applications of neural networks to in vivo medical imaging and signal processing are reviewed. As is evident from the literature neural networks have already been used for a wide variety of tasks within medicine. As this trend is expected to continue this review contains a description of recent studies to provide an appreciation of the problems associated with implementing neural networks for medical imaging and signal processing.",
"title": ""
},
{
"docid": "598fd1fc1d1d6cba7a838c17efe9481b",
"text": "The tens of thousands of high-quality open source software projects on the Internet raise the exciting possibility of studying software development by finding patterns across truly large source code repositories. This could enable new tools for developing code, encouraging reuse, and navigating large projects. In this paper, we build the first giga-token probabilistic language model of source code, based on 352 million lines of Java. This is 100 times the scale of the pioneering work by Hindle et al. The giga-token model is significantly better at the code suggestion task than previous models. More broadly, our approach provides a new “lens” for analyzing software projects, enabling new complexity metrics based on statistical analysis of large corpora. We call these metrics data-driven complexity metrics. We propose new metrics that measure the complexity of a code module and the topical centrality of a module to a software project. In particular, it is possible to distinguish reusable utility classes from classes that are part of a program's core logic based solely on general information theoretic criteria.",
"title": ""
},
{
"docid": "f151c89fecb41e10c6b19ceb659eb163",
"text": "Most organizations have some kind of process-oriented information system that keeps track of business events. Process Mining starts from event logs extracted from these systems in order to discover, analyze, diagnose and improve processes, organizational, social and data structures. Notwithstanding the large number of contributions to the process mining literature over the last decade, the number of studies actually demonstrating the applicability and value of these techniques in practice has been limited. As a consequence, there is a need for real-life case studies suggesting methodologies to conduct process mining analysis and to show the benefits of its application in real-life environments. In this paper we present a methodological framework for a multi-faceted analysis of real-life event logs based on Process Mining. As such, we demonstrate the usefulness and flexibility of process mining techniques to expose organizational inefficiencies in a real-life case study that is centered on the back office process of a large Belgian insurance company. Our analysis shows that process mining techniques constitute an ideal means to tackle organizational challenges by suggesting process improvements and creating a companywide process awareness. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f81ea919846bce6bae4298d8780f9123",
"text": "AIMS AND OBJECTIVES\nTo evaluate the effectiveness of an accessibility-enhanced multimedia informational educational programme in reducing anxiety and increasing satisfaction with the information and materials received by patients undergoing cardiac catheterisation.\n\n\nBACKGROUND\nCardiac catheterisation is one of the most anxiety-provoking invasive procedures for patients. However, informational education using multimedia to inform patients undergoing cardiac catheterisation has not been extensively explored.\n\n\nDESIGN\nA randomised experimental design with three-cohort prospective comparisons.\n\n\nMETHODS\nIn total, 123 consecutive patients were randomly assigned to one of three groups: regular education; (group 1), accessibility-enhanced multimedia informational education (group 2) and instructional digital videodisc education (group 3). Anxiety was measured with Spielberger's State Anxiety Inventory, which was administered at four time intervals: before education (T0), immediately after education (T1), before cardiac catheterisation (T2) and one day after cardiac catheterisation (T3). A satisfaction questionnaire was administrated one day after cardiac catheterisation. Data were collected from May 2009-September 2010 and analysed using descriptive statistics, chi-squared tests, one-way analysis of variance, Scheffe's post hoc test and generalised estimating equations.\n\n\nRESULTS\nAll patients experienced moderate anxiety at T0 to low anxiety at T3. Accessibility-enhanced multimedia informational education patients had significantly lower anxiety levels and felt the most satisfied with the information and materials received compared with patients in groups 1 and 3. A statistically significant difference in anxiety levels was only found at T2 among the three groups (p = 0·004).\n\n\nCONCLUSIONS\nThe findings demonstrate that the accessibility-enhanced multimedia informational education was the most effective informational educational module for informing patients about their upcoming cardiac catheterisation, to reduce anxiety and improve satisfaction with the information and materials received compared with the regular education and instructional digital videodisc education.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nAs the accessibility-enhanced multimedia informational education reduced patient anxiety and improved satisfaction with the information and materials received, it can be adapted to complement patient education in future regular cardiac care.",
"title": ""
},
{
"docid": "ac657141ed547f870ad35d8c8b2ba8f5",
"text": "Induced by “big data,” “topic modeling” has become an attractive alternative to mapping cowords in terms of co-occurrences and co-absences using network techniques. Does topic modeling provide an alternative for co-word mapping in research practices using moderately sized document collections? We return to the word/document matrix using first a single text with a strong argument (“The Leiden Manifesto”) and then upscale to a sample of moderate size (n = 687) to study the pros and cons of the two approaches in terms of the resulting possibilities for making semantic maps that can serve an argument. The results from co-word mapping (using two different routines) versus topic modeling are significantly uncorrelated. Whereas components in the co-word maps can easily be designated, the topic models provide sets of words that are very differently organized. In these samples, the topic models seem to reveal similarities other than semantic ones (e.g., linguistic ones). In other words, topic modeling does not replace co-word mapping in small and medium-sized sets; but the paper leaves open the possibility that topic modeling would work well for the semantic mapping of large sets.",
"title": ""
},
{
"docid": "cb00162e49af450c3e355088fe7817ac",
"text": "The new sensing applications need enhanced computing capabilities to handle the requirements of complex and huge data processing. The Internet of Things (IoT) concept brings processing and communication features to devices. In addition, the Cloud Computing paradigm provides resources and infrastructures for performing the computations and outsourcing the work from the IoT devices. This scenario opens new opportunities for designing advanced IoT-based applications, however, there is still much research to be done to properly gear all the systems for working together. This work proposes a collaborative model and an architecture to take advantage of the available computing resources. The resulting architecture involves a novel network design with different levels which combines sensing and processing capabilities based on the Mobile Cloud Computing (MCC) paradigm. An experiment is included to demonstrate that this approach can be used in diverse real applications. The results show the flexibility of the architecture to perform complex computational tasks of advanced applications.",
"title": ""
},
{
"docid": "f066cb3e2fc5ee543e0cc76919b261eb",
"text": "Eco-labels are part of a new wave of environmental policy that emphasizes information disclosure as a tool to induce environmentally friendly behavior by both firms and consumers. Little consensus exists as to whether eco-certified products are actually better than their conventional counterparts. This paper seeks to understand the link between eco-certification and product quality. We use data from three leading wine rating publications (Wine Advocate, Wine Enthusiast, and Wine Spectator) to assess quality for 74,148 wines produced in California between 1998 and 2009. Our results indicate that eco-certification is associated with a statistically significant increase in wine quality rating.",
"title": ""
},
{
"docid": "55ec669a67b88ff0b6b88f1fa6408df9",
"text": "This paper proposes low overhead training techniques for a wireless communication system equipped with a Multifunctional Reconfigurable Antenna (MRA) capable of dynamically changing beamwidth and beam directions. A novel microelectromechanical system (MEMS) MRA antenna is presented with radiation patterns (generated using complete electromagnetic full-wave analysis) which are used to quantify the communication link performance gains. In particular, it is shown that using the proposed Exhaustive Training at Reduced Frequency (ETRF) consistently results in a reduction in training overhead. It is also demonstrated that further reduction in training overhead is possible using statistical or MUSIC-based training schemes. Bit Error Rate (BER) and capacity simulations are carried out using an MRA, which can tilt its radiation beam into one of Ndir = 4 or 8 directions with variable beamwidth (≈2π/Ndir). The performance of each training scheme is quantified for OFDM systems operating in frequency selective channels with and without Line of Sight (LoS). We observe 6 dB of gain at BER = 10-4 and 6 dB improvement in capacity (at capacity = 6 bits/sec/subcarrier) are achievable for an MRA with Ndir= 8 as compared to omni directional antennas using ETRF scheme in a LoS environment.",
"title": ""
},
{
"docid": "b06f1e94f0ba22828044030c3a1fe691",
"text": "BACKGROUND\nThe use of opioids for chronic non-cancer pain has increased in the United States since state laws were relaxed in the late 1990s. These policy changes occurred despite scanty scientific evidence that chronic use of opioids was safe and effective.\n\n\nMETHODS\nWe examined opiate prescriptions and dosing patterns (from computerized databases, 1996 to 2002), and accidental poisoning deaths attributable to opioid use (from death certificates, 1995 to 2002), in the Washington State workers' compensation system.\n\n\nRESULTS\nOpioid prescriptions increased only modestly between 1996 and 2002. However, prescriptions for the most potent opioids (Schedule II), as a percentage of all scheduled opioid prescriptions (II, III, and IV), increased from 19.3% in 1996 to 37.2% in 2002. Among long-acting opioids, the average daily morphine equivalent dose increased by 50%, to 132 mg/day. Thirty-two deaths were definitely or probably related to accidental overdose of opioids. The majority of deaths involved men (84%) and smokers (69%).\n\n\nCONCLUSIONS\nThe reasons for escalating doses of the most potent opioids are unknown, but it is possible that tolerance or opioid-induced abnormal pain sensitivity may be occurring in some workers who use opioids for chronic pain. Opioid-related deaths in this population may be preventable through use of prudent guidelines regarding opioid use for chronic pain.",
"title": ""
},
{
"docid": "3668a5a14ea32471bd34a55ff87b45b5",
"text": "This paper proposes a method to separate polyphonic music signal into signals of each musical instrument by NMF: Non-negative Matrix Factorization based on preservation of spectrum envelope. Sound source separation is taken as a fundamental issue in music signal processing and NMF is becoming common to solve it because of its versatility and compatibility with music signal processing. Our method bases on a common feature of harmonic signal: spectrum envelopes of musical signal in close pitches played by the harmonic music instrument would be similar. We estimate power spectrums of each instrument by NMF with restriction to synchronize spectrum envelope of bases which are allocated to all possible center frequencies of each instrument. This manipulation means separation of components which refers to tones of each instrument and realizes both of separation without pre-training and separation of signal including harmonic and non-harmonic sound. We had an experiment to decompose mixture sound signal of MIDI instruments into each instrument and evaluated the result by SNR of single MIDI instrument sound signals and separated signals. As a result, SNR of lead guitar and drums approximately marked 3.6 and 6.0 dB and showed significance of our method.",
"title": ""
},
{
"docid": "12d565f0aaa6960e793b96f1c26cb103",
"text": "The new western Mode 5 IFF (Identification Foe or Friend) system is introduced. Based on analysis of signal features and format characteristics of Mode 5, a new signal detection method using phase and Amplitude correlation is put forward. This method utilizes odd and even channels to separate the signal, and then the separated signals are performed correlation with predefined mask. Through detecting preamble, the detection of Mode 5 signal is implemented. Finally, simulation results show the validity of the proposed method.",
"title": ""
}
] | scidocsrr |
de58318e961209968774fcda1d76bc73 | Forecasting of ozone concentration in smart city using deep learning | [
{
"docid": "961348dd7afbc1802d179256606bdbb8",
"text": "Class imbalance is among the most persistent complications which may confront the traditional supervised learning task in real-world applications. The problem occurs, in the binary case, when the number of instances in one class significantly outnumbers the number of instances in the other class. This situation is a handicap when trying to identify the minority class, as the learning algorithms are not usually adapted to such characteristics. The approaches to deal with the problem of imbalanced datasets fall into two major categories: data sampling and algorithmic modification. Cost-sensitive learning solutions incorporating both the data and algorithm level approaches assume higher misclassification costs with samples in the minority class and seek to minimize high cost errors. Nevertheless, there is not a full exhaustive comparison between those models which can help us to determine the most appropriate one under different scenarios. The main objective of this work is to analyze the performance of data level proposals against algorithm level proposals focusing in cost-sensitive models and versus a hybrid procedure that combines those two approaches. We will show, by means of a statistical comparative analysis, that we cannot highlight an unique approach among the rest. This will lead to a discussion about the data intrinsic characteristics of the imbalanced classification problem which will help to follow new paths that can lead to the improvement of current models mainly focusing on class overlap and dataset shift in imbalanced classification. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "4e9b1776436950ed25353a8731eda76a",
"text": "This paper presents the design and implementation of VibeBin, a low-cost, non-intrusive and easy-to-install waste bin level detection system. Recent popularity of Internet-of-Things (IoT) sensors has brought us unprecedented opportunities to enable a variety of new services for monitoring and controlling smart buildings. Indoor waste management is crucial to a healthy environment in smart buildings. Measuring the waste bin fill-level helps building operators schedule garbage collection more responsively and optimize the quantity and location of waste bins. Existing systems focus on directly and intrusively measuring the physical quantities of the garbage (weight, height, volume, etc.) or its appearance (image), and therefore require careful installation, laborious calibration or labeling, and can be costly. Our system indirectly measures fill-level by sensing the changes in motor-induced vibration characteristics on the outside surface of waste bins. VibeBin exploits the physical nature of vibration resonance of the waste bin and the garbage within, and learns the vibration features of different fill-levels through a few garbage collection (emptying) cycles in a completely unsupervised manner. VibeBin identifies vibration features of different fill-levels by clustering historical vibration samples based on a custom distance metric which measures the dissimilarity between two samples. We deploy our system on eight waste bins of different types and sizes, and show that under normal usage and real waste, it can deliver accurate level measurements after just 3 garbage collection cycles. The average F-score (harmonic mean of precision and recall) of measuring empty, half, and full levels achieves 0.912. A two-week deployment also shows that the false positive and false negative events are satisfactorily rare.",
"title": ""
},
{
"docid": "91a56dbdefc08d28ff74883ec10a5d6e",
"text": "A truly autonomous guided vehicle (AGV) must sense its surrounding environment and react accordingly. In order to maneuver an AGV autonomously, it has to overcome navigational and collision avoidance problems. Previous AGV control systems have relied on hand-coded algorithms for processing sensor information. An intelligent distributed fuzzy logic control system (IDFLCS) has been implemented in a mecanum wheeled AGV system in order to achieve improved reliability and to reduce complexity of the development of control systems. Fuzzy logic controllers have been used to achieve robust control of mechatronic systems by fusing multiple signals from noisy sensors, integrating the representation of human knowledge and implementing behaviour-based control using if-then rules. This paper presents an intelligent distributed controller that implements fuzzy logic on an AGV that uses four independently driven mecanum wheels, incorporating laser, inertial and ultrasound sensors. Distributed control system, fuzzy control strategy, navigation and motion control of such an AGV are presented.",
"title": ""
},
{
"docid": "1c94dec13517bedf7a8140e207e0a6d9",
"text": "Art and anatomy were particularly closely intertwined during the Renaissance period and numerous painters and sculptors expressed themselves in both fields. Among them was Michelangelo Buonarroti (1475-1564), who is renowned for having produced some of the most famous of all works of art, the frescoes on the ceiling and on the wall behind the altar of the Sistine Chapel in Rome. Recently, a unique association was discovered between one of Michelangelo's most celebrated works (The Creation of Adam fresco) and the Divine Proportion/Golden Ratio (GR) (1.6). The GR can be found not only in natural phenomena but also in a variety of human-made objects and works of art. Here, using Image-Pro Plus 6.0 software, we present mathematical evidence that Michelangelo also used the GR when he painted Saint Bartholomew in the fresco of The Last Judgment, which is on the wall behind the altar. This discovery will add a new dimension to understanding the great works of Michelangelo Buonarroti.",
"title": ""
},
{
"docid": "a1f93bedbddefb63cd7ab7d030b4f3ee",
"text": "This paper presents a novel fitness and preventive health care system with a flexible and easy to deploy platform. By using embedded wearable sensors in combination with a smartphone as an aggregator, both daily activities as well as specific gym exercises and their counts are recognized and logged. The detection is achieved with minimal impact on the system’s resources through the use of customized 3D inertial sensors embedded in fitness accessories with built-in pre-processing of the initial 100Hz data. It provides a flexible re-training of the classifiers on the phone which allows deploying the system swiftly. A set of evaluations shows a classification performance that is comparable to that of state of the art activity recognition, and that the whole setup is suitable for daily usage with minimal impact on the phone’s resources.",
"title": ""
},
{
"docid": "ddb66de70b76427f30fae713f176bc64",
"text": "Identifying whether an utterance is a statement, question, greeting, and so forth is integral to effective automatic understanding of natural dialog. Little is known, however, about how such dialog acts (DAs) can be automatically classified in truly natural conversation. This study asks whether current approaches, which use mainly word information, could be improved by adding prosodic information. The study is based on more than 1000 conversations from the Switchboard corpus. DAs were hand-annotated, and prosodic features (duration, pause, F0, energy, and speaking rate) were automatically extracted for each DA. In training, decision trees based on these features were inferred; trees were then applied to unseen test data to evaluate performance. Performance was evaluated for prosody models alone, and after combining the prosody models with word information--either from true words or from the output of an automatic speech recognizer. For an overall classification task, as well as three subtasks, prosody made significant contributions to classification. Feature-specific analyses further revealed that although canonical features (such as F0 for questions) were important, less obvious features could compensate if canonical features were removed. Finally, in each task, integrating the prosodic model with a DA-specific statistical language model improved performance over that of the language model alone, especially for the case of recognized words. Results suggest that DAs are redundantly marked in natural conversation, and that a variety of automatically extractable prosodic features could aid dialog processing in speech applications.",
"title": ""
},
{
"docid": "a774567d957ed0ea209b470b8eced563",
"text": "The vulnerability of the nervous system to advancing age is all too often manifest in neurodegenerative disorders such as Alzheimer's and Parkinson's diseases. In this review article we describe evidence suggesting that two dietary interventions, caloric restriction (CR) and intermittent fasting (IF), can prolong the health-span of the nervous system by impinging upon fundamental metabolic and cellular signaling pathways that regulate life-span. CR and IF affect energy and oxygen radical metabolism, and cellular stress response systems, in ways that protect neurons against genetic and environmental factors to which they would otherwise succumb during aging. There are multiple interactive pathways and molecular mechanisms by which CR and IF benefit neurons including those involving insulin-like signaling, FoxO transcription factors, sirtuins and peroxisome proliferator-activated receptors. These pathways stimulate the production of protein chaperones, neurotrophic factors and antioxidant enzymes, all of which help cells cope with stress and resist disease. A better understanding of the impact of CR and IF on the aging nervous system will likely lead to novel approaches for preventing and treating neurodegenerative disorders.",
"title": ""
},
{
"docid": "d8253659de704969cd9c30b3ea7543c5",
"text": "Frequent itemset mining is an important step of association rules mining. Traditional frequent itemset mining algorithms have certain limitations. For example Apriori algorithm has to scan the input data repeatedly, which leads to high I/O load and low performance, and the FP-Growth algorithm is limited by the capacity of computer's inner stores because it needs to build a FP-tree and mine frequent itemset on the basis of the FP-tree in memory. With the coming of the Big Data era, these limitations are becoming more prominent when confronted with mining large-scale data. In this paper, DPBM, a distributed matrix-based pruning algorithm based on Spark, is proposed to deal with frequent itemset mining. DPBM can greatly reduce the amount of candidate itemset by introducing a novel pruning technique for matrix-based frequent itemset mining algorithm, an improved Apriori algorithm which only needs to scan the input data once. In addition, each computer node reduces greatly the memory usage by implementing DPBM under a latest distributed environment-Spark, which is a lightning-fast distributed computing. The experimental results show that DPBM have better performance than MapReduce-based algorithms for frequent itemset mining in terms of speed and scalability.",
"title": ""
},
{
"docid": "d8c64128c89f3a291b410eefbf00dab2",
"text": "We review the prospects of using yeasts and microalgae as sources of cheap oils that could be used for biodiesel. We conclude that yeast oils, the cheapest of the oils producible by heterotrophic microorganisms, are too expensive to be viable alternatives to the major commodity plant oils. Algal oils are similarly unlikely to be economic; the cheapest form of cultivation is in open ponds which then requires a robust, fast-growing alga that can withstand adventitious predatory protozoa or contaminating bacteria and, at the same time, attain an oil content of at least 40% of the biomass. No such alga has yet been identified. However, we note that if the prices of the major plant oils and crude oil continue to rise in the future, as they have done over the past 12 months, then algal lipids might just become a realistic alternative within the next 10 to 15 years. Better prospects would, however, be to focus on algae as sources of polyunsaturated fatty acids.",
"title": ""
},
{
"docid": "227d8ad4000e6e1d9fd1aa6bff8ed64c",
"text": "Recently, speed sensorless control of Induction Motor (IM) drives received great attention to avoid the different problems associated with direct speed sensors. Among different rotor speed estimation techniques, Model Reference Adaptive System (MRAS) schemes are the most common strategies employed due to their relative simplicity and low computational effort. In this paper a novel adaptation mechanism is proposed which replaces normally used conventional Proportional-Integral (PI) controller in MRAS adaptation mechanism by a Fractional Order PI (FOPI) controller. The performance of two adaptation mechanism controllers has been verified through simulation results using MATLAB/SIMULINK software. It is seen that the performance of the induction motor has improved when FOPI controller is used in place of classical PI controller.",
"title": ""
},
{
"docid": "4a4a868d64a653fac864b5a7a531f404",
"text": "Metropolitan areas have come under intense pressure to respond to federal mandates to link planning of land use, transportation, and environmental quality; and from citizen concerns about managing the side effects of growth such as sprawl, congestion, housing affordability, and loss of open space. The planning models used by Metropolitan Planning Organizations (MPOs) were generally not designed to address these questions, creating a gap in the ability of planners to systematically assess these issues. UrbanSim is a new model system that has been developed to respond to these emerging requirements, and has now been applied in three metropolitan areas. This paper describes the model system and its application to Eugene-Springfield, Oregon.",
"title": ""
},
{
"docid": "2d78a4c914c844a3f28e8f3b9f65339f",
"text": "The availability of abundant data posts a challenge to integrate static customer data and longitudinal behavioral data to improve performance in customer churn prediction. Usually, longitudinal behavioral data are transformed into static data before being included in a prediction model. In this study, a framework with ensemble techniques is presented for customer churn prediction directly using longitudinal behavioral data. A novel approach called the hierarchical multiple kernel support vector machine (H-MK-SVM) is formulated. A three phase training algorithm for the H-MK-SVM is developed, implemented and tested. The H-MK-SVM constructs a classification function by estimating the coefficients of both static and longitudinal behavioral variables in the training process without transformation of the longitudinal behavioral data. The training process of the H-MK-SVM is also a feature selection and time subsequence selection process because the sparse non-zero coefficients correspond to the variables selected. Computational experiments using three real-world databases were conducted. Computational results using multiple criteria measuring performance show that the H-MK-SVM directly using longitudinal behavioral data performs better than currently available classifiers.",
"title": ""
},
{
"docid": "ce9345c367db70de1dec07cad0343f71",
"text": "Techniques for digital image tampering are becoming widespread for the availability of low cost technology in which the image could be easily manipulated. Copy-move forgery is one of the tampering techniques that are frequently used and has recently received significant attention. But the existing methods, including block-matching and key point matching based methods, are not able to be used to solve the problem of detecting image forgery in both flat region and non-flat region. In this paper, combining the thinking of these two types of methods, we develop a SURF-based method to tackle this problem. In addition to the determination of forgeries in non-flat region through key point features, our method can be used to detect flat region in images in an effective way, and extract FMT features after blocking the region. By using matching algorithms of similar blocked images, image forgeries in flat region can be determined, which results in the completing of the entire image tamper detection. Experimental results are presented to demonstrate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "ffe6edef11daef1db0c4aac77bed7a23",
"text": "MPI is a well-established technology that is used widely in high-performance computing environment. However, setting up an MPI cluster can be challenging and time-consuming. This paper tackles this challenge by using modern containerization technology, which is Docker, and container orchestration technology, which is Docker Swarm mode, to automate the MPI cluster setup and deployment. We created a ready-to-use solution for developing and deploying MPI programs in a cluster of Docker containers running on multiple machines, orchestrated with Docker Swarm mode, to perform high computation tasks. We explain the considerations when creating Docker image that will be instantiated as MPI nodes, and we describe the steps needed to set up a fully connected MPI cluster as Docker containers running in a Docker Swarm mode. Our goal is to give the rationale behind our solution so that others can adapt to different system requirements. All pre-built Docker images, source code, documentation, and screencasts are publicly available.",
"title": ""
},
{
"docid": "02ad9bef7d38af14c01ceb6efec8078b",
"text": "Weakness of the will may lead to ineffective goal striving in the sense that people lacking willpower fail to get started, to stay on track, to select instrumental means, and to act efficiently. However, using a simple self-regulation strategy (i.e., forming implementation intentions or making if–then plans) can get around this problem by drastically improving goal striving on the spot. After an overview of research investigating how implementation intentions work, I will discuss how people can use implementation intentions to overcome potential hindrances to successful goal attainment. Extensive empirical research shows that implementation intentions help people to meet their goals no matter whether these hindrances originate from within (e.g., lack of cognitive capabilities) or outside the person (i.e., difficult social situations). Moreover, I will report recent research demonstrating that implementation intentions can even be used to control impulsive cognitive, affective, and behavioral responses that interfere with one’s focal goal striving. In ending, I will present various new lines of implementation intention research, and raise a host of open questions that still deserve further empirical and theoretical analysis.",
"title": ""
},
{
"docid": "aa70864ca9d2285eebe5b46f7c283ebe",
"text": "The centerpiece of this thesis is a new processing paradigm for exploiting instruction level parallelism. This paradigm, called the multiscalar paradigm, splits the program into many smaller tasks, and exploits fine-grain parallelism by executing multiple, possibly (control and/or data) dependent tasks in parallel using multiple processing elements. Splitting the instruction stream at statically determined boundaries allows the compiler to pass substantial information about the tasks to the hardware. The processing paradigm can be viewed as extensions of the superscalar and multiprocessing paradigms, and shares a number of properties of the sequential processing model and the dataflow processing model. The multiscalar paradigm is easily realizable, and we describe an implementation of the multiscalar paradigm, called the multiscalar processor. The central idea here is to connect multiple sequential processors, in a decoupled and decentralized manner, to achieve overall multiple issue. The multiscalar processor supports speculative execution, allows arbitrary dynamic code motion (facilitated by an efficient hardware memory disambiguation mechanism), exploits communication localities, and does all of these with hardware that is fairly straightforward to build. Other desirable aspects of the implementation include decentralization of the critical resources, absence of wide associative searches, and absence of wide interconnection/data paths.",
"title": ""
},
{
"docid": "000652922defcc1d500a604d43c8f77b",
"text": "The problem of object recognition has not yet been solved in its general form. The most successful approach to it so far relies on object models obtained by training a statistical method on visual features obtained from camera images. The images must necessarily come from huge visual datasets, in order to circumvent all problems related to changing illumination, point of view, etc. We hereby propose to also consider, in an object model, a simple model of how a human being would grasp that object (its affordance). This knowledge is represented as a function mapping visual features of an object to the kinematic features of a hand while grasping it. The function is practically enforced via regression on a human grasping database. After describing the database (which is publicly available) and the proposed method, we experimentally evaluate it, showing that a standard object classifier working on both sets of features (visual and motor) has a significantly better recognition rate than that of a visual-only classifier.",
"title": ""
},
{
"docid": "6162ad3612b885add014bd09baa5f07a",
"text": "The Neural Bag-of-Words (NBOW) model performs classification with an average of the input word vectors and achieves an impressive performance. While the NBOW model learns word vectors targeted for the classification task it does not explicitly model which words are important for given task. In this paper we propose an improved NBOW model with this ability to learn task specific word importance weights. The word importance weights are learned by introducing a new weighted sum composition of the word vectors. With experiments on standard topic and sentiment classification tasks, we show that (a) our proposed model learns meaningful word importance for a given task (b) our model gives best accuracies among the BOW approaches. We also show that the learned word importance weights are comparable to tf-idf based word weights when used as features in a BOW SVM classifier.",
"title": ""
},
{
"docid": "29d1502c7edea13ce67aa1e283dc8488",
"text": "An explosive growth in the volume, velocity, and variety of the data available on the Internet has been witnessed recently. The data originated frommultiple types of sources including mobile devices, sensors, individual archives, social networks, Internet of Things, enterprises, cameras, software logs, health data has led to one of the most challenging research issues of the big data era. In this paper, Knowle—an online news management system upon semantic link network model is introduced. Knowle is a news event centrality data management system. The core elements of Knowle are news events on the Web, which are linked by their semantic relations. Knowle is a hierarchical data system, which has three different layers including the bottom layer (concepts), the middle layer (resources), and the top layer (events). The basic blocks of the Knowle system—news collection, resources representation, semantic relations mining, semantic linking news events are given. Knowle does not require data providers to follow semantic standards such as RDF or OWL, which is a semantics-rich self-organized network. It reflects various semantic relations of concepts, news, and events. Moreover, in the case study, Knowle is used for organizing andmining health news, which shows the potential on forming the basis of designing and developing big data analytics based innovation framework in the health domain. © 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b16407fc67058110b334b047bcfea9ac",
"text": "In Educational Psychology (1997/1926), Vygotsky pleaded for a realistic approach to children’s literature. He is, among other things, critical of Chukovsky’s story “Crocodile” and maintains that this story deals with nonsense and gibberish, without social relevance. This approach Vygotsky would leave soon, and, in Psychology of Art (1971/1925), in which he develops his theory of art, he talks about connections between nursery rhymes and children’s play, exactly as the story of Chukovsky had done with the following argument: By dragging a child into a topsy-turvy world, we help his intellect work and his perception of reality. In his book Imagination and Creativity in Childhood (1995/1930), Vygotsky goes further and develops his theory of creativity. The book describes how Vygotsky regards the creative process of the human consciousness, the link between emotion and thought, and the role of the imagination. To Vygotsky, this brings to the fore the issue of the link between reality and imagination, and he discusses the issue of reproduction and creativity, both of which relate to the entire scope of human activity. Interpretations of Vygotsky in the 1990s have stressed the role of literature and the development of a cultural approach to psychology and education. It has been overlooked that Vygotsky started his career with work on the psychology of art. In this article, I want to describe Vygotsky’s theory of creativity and how he developed it. He started with a realistic approach to imagination, and he ended with a dialectical attitude to imagination. Criticism of Chukovsky’s “Crocodile” In 1928, the “Crocodile” story was forbidden. It was written by Korney Chukovsky (1882–1969). In his book From Two to Five Years, there is a chapter with the title “Struggle for the Fairy-Tale,” in which he attacks his antagonists, the pedologists, whom he described as a miserable group of theoreticans who studied children’s reading and maintained that the children of the proletarians needed neither “fairy-tales nor toys, or songs” (Chukovsky, 1975, p. 129). He describes how the pedologists let the word imagination become an abuse and how several stories were forbidden, for example, “Crocodile.” One of the slogans of the antagonists of fantasy literature was chukovskies, a term meaning of anthropomorphism and being bourgeois. In 1928, Krupskaja criticized Chukovky, the same year as Stalin was in power. Krupskaja maintained that the content of children’s literature ought to be concrete and realistic to inspire the children to be conscious communists. As an atheist, she was against everything that smelled of mysticism and religion. She pointed out, in an article in Pravda, that “Crocodile” did not live up to the demands that one could make on children’s literature. Many authors, however, came to Chukovsky’s defense, among them A. Tolstoy (Chukovsky, 1975). Ten years earlier in 1918, only a few months after the October Revolution, the first demands were made that children’s literature should be put in the service of communist ideology. It was necessary to replace old bourgeois books, and new writers were needed. In the first attempts to create a new children’s literature, a significant role was played by Maksim Gorky. His ideal was realistic literature with such moral ideals as heroism and optimism. Creativity Research Journal Copyright 2003 by 2003, Vol. 15, Nos. 2 & 3, 245–251 Lawrence Erlbaum Associates, Inc. 
Vygotsky’s Theory of Creativity Gunilla Lindqvist University of Karlstad Correspondence and requests for reprints should be sent to Gunilla Lindqvist, Department of Educational Sciences, University of Karlstad, 65188 Karlstad, Sweden. E-mail: gunilla.lindqvist@",
"title": ""
},
{
"docid": "c684de3eb8a370e3444aee3a37319b46",
"text": "We present an extended version of our work on the design and implementation of a reference model of the human body, the Master Motor Map (MMM) which should serve as a unifying framework for capturing human motions, their representation in standard data structures and formats as well as their reproduction on humanoid robots. The MMM combines the definition of a comprehensive kinematics and dynamics model of the human body with 104 DoF including hands and feet with procedures and tools for unified capturing of human motions. We present online motion converters for the mapping of human and object motions to the MMM model while taking into account subject specific anthropométrie data as well as for the mapping of MMM motion to a target robot kinematics. Experimental evaluation of the approach performed on VICON motion recordings demonstrate the benefits of the MMM as an important step towards standardized human motion representation and mapping to humanoid robots.",
"title": ""
}
] | scidocsrr |
a28e7cdf3a39ff608c0d62daf4268019 | Grounding Topic Models with Knowledge Bases | [
{
"docid": "8d8dc05c2de34440eb313503226f7e99",
"text": "Disambiguating entity references by annotating them with unique ids from a catalog is a critical step in the enrichment of unstructured content. In this paper, we show that topic models, such as Latent Dirichlet Allocation (LDA) and its hierarchical variants, form a natural class of models for learning accurate entity disambiguation models from crowd-sourced knowledge bases such as Wikipedia. Our main contribution is a semi-supervised hierarchical model called Wikipedia-based Pachinko Allocation Model} (WPAM) that exploits: (1) All words in the Wikipedia corpus to learn word-entity associations (unlike existing approaches that only use words in a small fixed window around annotated entity references in Wikipedia pages), (2) Wikipedia annotations to appropriately bias the assignment of entity labels to annotated (and co-occurring unannotated) words during model learning, and (3) Wikipedia's category hierarchy to capture co-occurrence patterns among entities. We also propose a scheme for pruning spurious nodes from Wikipedia's crowd-sourced category hierarchy. In our experiments with multiple real-life datasets, we show that WPAM outperforms state-of-the-art baselines by as much as 16% in terms of disambiguation accuracy.",
"title": ""
},
{
"docid": "f6121f69419a074b657bb4a0324bae4a",
"text": "Latent Dirichlet allocation (LDA) is a popular topic modeling technique for exploring hidden topics in text corpora. Increasingly, topic modeling needs to scale to larger topic spaces and use richer forms of prior knowledge, such as word correlations or document labels. However, inference is cumbersome for LDA models with prior knowledge. As a result, LDA models that use prior knowledge only work in small-scale scenarios. In this work, we propose a factor graph framework, Sparse Constrained LDA (SC-LDA), for efficiently incorporating prior knowledge into LDA. We evaluate SC-LDA’s ability to incorporate word correlation knowledge and document label knowledge on three benchmark datasets. Compared to several baseline methods, SC-LDA achieves comparable performance but is significantly faster. 1 Challenge: Leveraging Prior Knowledge in Large-scale Topic Models Topic models, such as Latent Dirichlet Allocation (Blei et al., 2003, LDA), have been successfully used for discovering hidden topics in text collections. LDA is an unsupervised model—it requires no annotation—and discovers, without any supervision, the thematic trends in a text collection. However, LDA’s lack of supervision can lead to disappointing results. Often, the hidden topics learned by LDA fail to make sense to end users. Part of the problem is that the objective function of topic models does not always correlate with human judgments of topic quality (Chang et al., 2009). Therefore, it’s often necessary to incorporate prior knowledge into topic models to improve the model’s performance. Recent work has also shown that by interactive human feedback can improve the quality and stability of topics (Hu and Boyd-Graber, 2012; Yang et al., 2015). Information about documents (Ramage et al., 2009) or words (Boyd-Graber et al., 2007) can improve LDA’s topics. In addition to its occasional inscrutability, scalability can also hamper LDA’s adoption. Conventional Gibbs sampling—the most widely used inference for LDA—scales linearly with the number of topics. Moreover, accurate training usually takes many sampling passes over the dataset. Therefore, for large datasets with millions or even billions of tokens, conventional Gibbs sampling takes too long to finish. For standard LDA, recently introduced fast sampling methods (Yao et al., 2009; Li et al., 2014; Yuan et al., 2015) enable industrial applications of topic modeling to search engines and online advertising, where capturing the “long tail” of infrequently used topics requires large topic spaces. For example, while typical LDA models in academic papers have up to 103 topics, industrial applications with 105–106 topics are common (Wang et al., 2014). Moreover, scaling topic models to many topics can also reveal the hierarchical structure of topics (Downey et al., 2015). Thus, there is a need for topic models that can both benefit from rich prior information and that can scale to large datasets. However, existing methods for improving scalability focus on topic models without prior information. To rectify this, we propose a factor graph model that encodes a potential function over the hidden topic variables, encouraging topics consistent with prior knowledge. The factor model representation admits an efficient sampling algorithm that takes advantage of the model’s sparsity. We show that our method achieves comparable performance but runs significantly faster than baseline methods, enabling models to discover models with many topics enriched by prior knowledge. 
2 Efficient Algorithm for Incorporating Knowledge into LDA In this section, we introduce the factor model for incorporating prior knowledge and show how to efficiently use Gibbs sampling for inference. 2.1 Background: LDA and SparseLDA A statistical topic model represents words in documents in a collection D as mixtures of T topics, which are multinomials over a vocabulary of size V . In LDA, each document d is associated with a multinomial distribution over topics, θd. The probability of a word type w given topic z is φw|z . The multinomial distributions θd and φz are drawn from Dirichlet distributions: α and β are the hyperparameters for θ and φ. We represent the document collection D as a sequence of words w, and topic assignments as z. We use symmetric priors α and β in the model and experiment, but asymmetric priors are easily encoded in the models (Wallach et al., 2009). Discovering the latent topic assignments z from observed words w requires inferring the the posterior distribution P (z|w). Griffiths and Steyvers (2004) propose using collapsed Gibbs sampling. The probability of a topic assignment z = t in document d given an observed word type w and the other topic assignments z− is P (z = t|z−, w) ∝ (nd,t + α) nw,t + β",
"title": ""
},
{
"docid": "ef31d8b3cd83aeb109f62fde4cd8bc8a",
"text": "Many existing knowledge bases (KBs), including Freebase, Yago, and NELL, rely on a fixed ontology, given as an input to the system, which defines the data to be cataloged in the KB, i.e., a hierarchy of categories and relations between them. The system then extracts facts that match the predefined ontology. We propose an unsupervised model that jointly learns a latent ontological structure of an input corpus, and identifies facts from the corpus that match the learned structure. Our approach combines mixed membership stochastic block models and topic models to infer a structure by jointly modeling text, a latent concept hierarchy, and latent semantic relationships among the entities mentioned in the text. As a case study, we apply the model to a corpus of Web documents from the software domain, and evaluate the accuracy of the various components of the learned ontology.",
"title": ""
}
] | [
{
"docid": "814aa0089ce9c5839d028d2e5aca450d",
"text": "Espresso is a document-oriented distributed data serving platform that has been built to address LinkedIn's requirements for a scalable, performant, source-of-truth primary store. It provides a hierarchical document model, transactional support for modifications to related documents, real-time secondary indexing, on-the-fly schema evolution and provides a timeline consistent change capture stream. This paper describes the motivation and design principles involved in building Espresso, the data model and capabilities exposed to clients, details of the replication and secondary indexing implementation and presents a set of experimental results that characterize the performance of the system along various dimensions.\n When we set out to build Espresso, we chose to apply best practices in industry, already published works in research and our own internal experience with different consistency models. Along the way, we built a novel generic distributed cluster management framework, a partition-aware change- capture pipeline and a high-performance inverted index implementation.",
"title": ""
},
{
"docid": "75b2f12152526a0fbc5648261faca1cc",
"text": "Traditional automated essay scoring systems rely on carefully designed features to evaluate and score essays. The performance of such systems is tightly bound to the quality of the underlying features. However, it is laborious to manually design the most informative features for such a system. In this paper, we develop an approach based on recurrent neural networks to learn the relation between an essay and its assigned score, without any feature engineering. We explore several neural network models for the task of automated essay scoring and perform some analysis to get some insights of the models. The results show that our best system, which is based on long short-term memory networks, outperforms a strong baseline by 5.6% in terms of quadratic weighted Kappa, without requiring any feature engineering.",
"title": ""
},
{
"docid": "44e135418dc6480366bb5679b62bc4f9",
"text": "There is growing interest regarding the role of the right inferior frontal gyrus (RIFG) during a particular form of executive control referred to as response inhibition. However, tasks used to examine neural activity at the point of response inhibition have rarely controlled for the potentially confounding effects of attentional demand. In particular, it is unclear whether the RIFG is specifically involved in inhibitory control, or is involved more generally in the detection of salient or task relevant cues. The current fMRI study sought to clarify the role of the RIFG in executive control by holding the stimulus conditions of one of the most popular response inhibition tasks-the Stop Signal Task-constant, whilst varying the response that was required on reception of the stop signal cue. Our results reveal that the RIFG is recruited when important cues are detected, regardless of whether that detection is followed by the inhibition of a motor response, the generation of a motor response, or no external response at all.",
"title": ""
},
{
"docid": "bc35d87706c66350f4cec54befc9acc2",
"text": "This paper presents a new improved term frequency/inverse document frequency (TF-IDF) approach which uses confidence, support and characteristic words to enhance the recall and precision of text classification. Synonyms defined by a lexicon are processed in the improved TF-IDF approach. We detailedly discuss and analyze the relationship among confidence, recall and precision. The experiments based on science and technology gave promising results that the new TF-IDF approach improves the precision and recall of text classification compared with the conventional TF-IDF approach.",
"title": ""
},
{
"docid": "88a4ab49e7d3263d5d6470d123b6e74b",
"text": "Graph databases have gained renewed interest in the last years, due to its applications in areas such as the Semantic Web and Social Networks Analysis. We study the problem of querying graph databases, and, in particular, the expressiveness and complexity of evaluation for several general-purpose query languages, such as the regular path queries and its extensions with conjunctions and inverses. We distinguish between two semantics for these languages. The first one, based on simple paths, easily leads to intractability, while the second one, based on arbitrary paths, allows tractable evaluation for an expressive family of languages.\n We also study two recent extensions of these languages that have been motivated by modern applications of graph databases. The first one allows to treat paths as first-class citizens, while the second one permits to express queries that combine the topology of the graph with its underlying data.",
"title": ""
},
{
"docid": "625b96d21cb9ff05785aa34c98c567ff",
"text": "The number of mitoses per tissue area gives an important aggressiveness indication of the invasive breast carcinoma. However, automatic mitosis detection in histology images remains a challenging problem. Traditional methods either employ hand-crafted features to discriminate mitoses from other cells or construct a pixel-wise classifier to label every pixel in a sliding window way. While the former suffers from the large shape variation of mitoses and the existence of many mimics with similar appearance, the slow speed of the later prohibits its use in clinical practice. In order to overcome these shortcomings, we propose a fast and accurate method to detect mitosis by designing a novel deep cascaded convolutional neural network, which is composed of two components. First, by leveraging the fully convolutional neural network, we propose a coarse retrieval model to identify and locate the candidates of mitosis while preserving a high sensitivity. Based on these candidates, a fine discrimination model utilizing knowledge transferred from cross-domain is developed to further single out mitoses from hard mimics. Our approach outperformed other methods by a large margin in 2014 ICPR MITOS-ATYPIA challenge in terms of detection accuracy. When compared with the state-of-the-art methods on the 2012 ICPR MITOSIS data (a smaller and less challenging dataset), our method achieved comparable or better results with a roughly 60 times faster speed.",
"title": ""
},
{
"docid": "10646c29afc4cc5c0a36ca508aabb41a",
"text": "As high-resolution fingerprint images are becoming more common, the pores have been found to be one of the promising candidates in improving the performance of automated fingerprint identification systems (AFIS). This paper proposes a deep learning approach towards pore extraction. It exploits the feature learning and classification capability of convolutional neural networks (CNNs) to detect pores on fingerprints. Besides, this paper also presents a unique affine Fourier moment-matching (AFMM) method of matching and fusing the scores obtained for three different fingerprint features to deal with both local and global linear distortions. Combining the two aforementioned contributions, an EER of 3.66% can be observed from the experimental results.",
"title": ""
},
{
"docid": "0a0ca1f866a4be1a3f264c6e3c888adc",
"text": "Printed circuit board (PCB) windings are convenient for many applications given their ease of manufacture, high repeatability, and low profile. In many cases, the use of multistranded litz wires is appropriate due to the rated power, frequency range, and efficiency constraints. This paper proposes a manufacturing technique and a semianalytical loss model for PCB windings using planar litz structure to obtain a similar ac loss reduction to that of conventional windings of round wires with litz structure. Different coil prototypes have been tested in several configurations to validate the proposal.",
"title": ""
},
{
"docid": "c77042cb1a8255ac99ebfbc74979c3c6",
"text": "Machine translation systems require semantic knowledge and grammatical understanding. Neural machine translation (NMT) systems often assume this information is captured by an attention mechanism and a decoder that ensures fluency. Recent work has shown that incorporating explicit syntax alleviates the burden of modeling both types of knowledge. However, requiring parses is expensive and does not explore the question of what syntax a model needs during translation. To address both of these issues we introduce a model that simultaneously translates while inducing dependency trees. In this way, we leverage the benefits of structure while investigating what syntax NMT must induce to maximize performance. We show that our dependency trees are 1. language pair dependent and 2. improve translation quality.",
"title": ""
},
{
"docid": "ecfb05d557ebe524e3821fcf6ce0f985",
"text": "This paper presents a novel active-source-pump (ASP) circuit technique to significantly lower the ESD sensitivity of ultrathin gate inputs in advanced sub-90nm CMOS technologies. As demonstrated by detailed experimental analysis, an ESD design window expansion of more than 100% can be achieved. This revives conventional ESD solutions for ultrasensitive input protection also enabling low-capacitance RF protection schemes with a high ESD design flexibility at IC-level. ASP IC application examples, and the impact of ASP on normal RF operation performance, are discussed.",
"title": ""
},
{
"docid": "3301a0cf26af8d4d8c7b2b9d56cec292",
"text": "Reading comprehension (RC)—in contrast to information retrieval—requires integrating information and reasoning about events, entities, and their relations across a full document. Question answering is conventionally used to assess RC ability, in both artificial agents and children learning to read. However, existing RC datasets and tasks are dominated by questions that can be solved by selecting answers using superficial information (e.g., local context similarity or global term frequency); they thus fail to test for the essential integrative aspect of RC. To encourage progress on deeper comprehension of language, we present a new dataset and set of tasks in which the reader must answer questions about stories by reading entire books or movie scripts. These tasks are designed so that successfully answering their questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience. We show that although humans solve the tasks easily, standard RC models struggle on the tasks presented here. We provide an analysis of the dataset and the challenges it presents.",
"title": ""
},
{
"docid": "7beeea42e8f5d0f21ea418aa7f433ab9",
"text": "This application note describes principles and uses for continuous ST segment monitoring. It also provides a detailed description of the ST Analysis algorithm implemented in the multi-lead ST/AR (ST and Arrhythmia) algorithm, and an assessment of the ST analysis algorithm's performance.",
"title": ""
},
{
"docid": "d540250c51e97622a10bcb29f8fde956",
"text": "With many advantages of rectangular waveguide and microstrip lines, substrate integrated waveguide (SIW) can be used for design of planar waveguide-like slot antenna. However, the bandwidth of this kind of antenna structure is limited. In this work, a parasitic dipole is introduced and coupled with the SIW radiate slot. The results have indicated that the proposed technique can enhance the bandwidth of the SIW slot antenna significantly. The measured bandwidth of fabricated antenna prototype is about 19%, indicating about 115% bandwidth enhancement than the ridged substrate integrated waveguide (RSIW) slot antenna.",
"title": ""
},
{
"docid": "d35bc5ef2ea3ce24bbba87f65ae93a25",
"text": "Fog computing, complementary to cloud computing, has recently emerged as a new paradigm that extends the computing infrastructure from the center to the edge of the network. This article explores the design of a fog computing orchestration framework to support IoT applications. In particular, we focus on how the widely adopted cloud computing orchestration framework can be customized to fog computing systems. We first identify the major challenges in this procedure that arise due to the distinct features of fog computing. Then we discuss the necessary adaptations of the orchestration framework to accommodate these challenges.",
"title": ""
},
{
"docid": "4e2bed31e5406e30ae59981fa8395d5b",
"text": "Asynchronous Learning Networks (ALNs) make the process of collaboration more transparent, because a transcript of conference messages can be used to assess individual roles and contributions and the collaborative process itself. This study considers three aspects of ALNs: the design; the quality of the resulting knowledge construction process; and cohesion, role and power network structures. The design is evaluated according to the Social Interdependence Theory of Cooperative Learning. The quality of the knowledge construction process is evaluated through Content Analysis; and the network structures are analyzed using Social Network Analysis of the response relations among participants during online discussions. In this research we analyze data from two three-monthlong ALN academic university courses: a formal, structured, closed forum and an informal, nonstructured, open forum. We found that in the structured ALN, the knowledge construction process reached a very high phase of critical thinking and developed cohesive cliques. The students took on bridging and triggering roles, while the tutor had relatively little power. In the non-structured ALN, the knowledge construction process reached a low phase of cognitive activity; few cliques were constructed; most of the students took on the passive role of teacher-followers; and the tutor was at the center of activity. These differences are statistically significant. We conclude that a well-designed ALN develops significant, distinct cohesion, and role and power structures lead the knowledge construction process to high phases of critical thinking.",
"title": ""
},
{
"docid": "7d5215dc3213b13748f97aa21898e86e",
"text": "Several tasks in computer vision and machine learning can be modeled as MRF-MAP inference problems. Using higher order potentials to model complex dependencies can significantly improve the performance. The problem can often be modeled as minimizing a sum of submodular (SoS) functions. Since sum of submodular functions is also submodular, existing submodular function minimization (SFM) techniques can be employed for optimal inference in polynomial time [1], [2]. These techniques, though oblivious to the clique sizes, have limited scalability in the number of pixels. On the other hand, state of the art algorithms in computer vision [3], [47] can handle problems with a large number of pixels but fail to scale to large clique sizes. In this paper, we adapt two SFM algorithms [1], [5], to exploit the sum of submodular structure, thereby helping them scale to large number of pixels while maintaining scalability with large clique sizes. Our ideas are general enough and can be extended to adapt other existing SFM algorithms as well. Our experiments on computer vision problems demonstrate that our approach can easily scale up to clique sizes of 300, thereby unlocking the usage of really large sized cliques for MRF-MAP inference problems.",
"title": ""
},
{
"docid": "07e03419430b7ea8ca3c7b02f9340d46",
"text": "Recently, [2] presented a security attack on the privacy-preserving outsourcing scheme for biometric identification proposed in [1]. In [2], the author claims that the scheme CloudBI-II proposed in [1] can be broken under the collusion case. That is, when the cloud server acts as a user to submit a number of identification requests, CloudBI-II is no longer secure. In this technical report, we will explicitly show that the attack method proposed in [2] doesn’t work in fact.",
"title": ""
},
{
"docid": "b97c9e8238f74539e8a17dcffecdd35f",
"text": "This paper presents a novel approach to the task of automatic music genre classification which is based on multiple feature vectors and ensemble of classifiers. Multiple feature vectors are extracted from a single music piece. First, three 30-second music segments, one from the beginning, one from the middle and one from end part of a music piece are selected and feature vectors are extracted from each segment. Individual classifiers are trained to account for each feature vector extracted from each music segment. At the classification, the outputs provided by each individual classifier are combined through simple combination rules such as majority vote, max, sum and product rules, with the aim of improving music genre classification accuracy. Experiments carried out on a large dataset containing more than 3,000 music samples from ten different Latin music genres have shown that for the task of automatic music genre classification, the features extracted from the middle part of the music provide better results than using the segments from the beginning or end part of the music. Furthermore, the proposed ensemble approach, which combines the multiple feature vectors, provides better accuracy than using single classifiers and any individual music segment.",
"title": ""
},
{
"docid": "ef0625150b0eb6ae68a214256e3db50d",
"text": "Undergraduate engineering students require a practical application of theoretical concepts learned in classrooms in order to appropriate a complete management of them. Our aim is to assist students to learn control systems theory in an engineering context, through the design and implementation of a simple and low cost ball and plate plant. Students are able to apply mathematical and computational modelling tools, control systems design, and real-time software-hardware implementation while solving a position regulation problem. The whole project development is presented and may be assumed as a guide for replicate results or as a basis for a new design approach. In both cases, we end up in a tool available to implement and assess control strategies experimentally.",
"title": ""
},
{
"docid": "72fec6dc287b0aa9aea97a22268c1125",
"text": "Given a symmetric matrix what is the nearest correlation matrix, that is, the nearest symmetric positive semidefinite matrix with unit diagonal? This problem arises in the finance industry, where the correlations are between stocks. For distance measured in two weighted Frobenius norms we characterize the solution using convex analysis. We show how the modified alternating projections method can be used to compute the solution for the more commonly used of the weighted Frobenius norms. In the finance application the original matrix has many zero or negative eigenvalues; we show that for a certain class of weights the nearest correlation matrix has correspondingly many zero eigenvalues and that this fact can be exploited in the computation.",
"title": ""
}
] | scidocsrr |
7681eb74db675553642af04857196151 | Innovation, openness & platform control | [
{
"docid": "c7d629a83de44e17a134a785795e26d8",
"text": "How can firms profitably give away free products? This paper provides a novel answer and articulates tradeoffs in a space of information product design. We introduce a formal model of two-sided network externalities based in textbook economics—a mix of Katz & Shapiro network effects, price discrimination, and product differentiation. Externality-based complements, however, exploit a different mechanism than either tying or lock-in even as they help to explain many recent strategies such as those of firms selling operating systems, Internet browsers, games, music, and video. The model presented here argues for three simple but useful results. First, even in the absence of competition, a firm can rationally invest in a product it intends to give away into perpetuity. Second, we identify distinct markets for content providers and end consumers and show that either can be a candidate for a free good. Third, product coupling across markets can increase consumer welfare even as it increases firm profits. The model also generates testable hypotheses on the size and direction of network effects while offering insights to regulators seeking to apply antitrust law to network markets. ACKNOWLEDGMENTS: We are grateful to participants of the 1999 Workshop on Information Systems and Economics, the 2000 Association for Computing Machinery SIG E-Commerce, the 2000 International Conference on Information Systems, the 2002 Stanford Institute for Theoretical Economics (SITE) workshop on Internet Economics, the 2003 Insitut D’Economie Industrielle second conference on “The Economics of the Software and Internet Industries,” as well as numerous participants at university seminars. We wish to thank Tom Noe for helpful observations on oligopoly markets, Lones Smith, Kai-Uwe Kuhn, and Jovan Grahovac for corrections and model generalizations, Jeff MacKie-Mason for valuable feedback on model design and bundling, and Hal Varian for helpful comments on firm strategy and model implications. Frank Fisher provided helpful advice on and knowledge of the Microsoft trial. Jean Tirole provided useful suggestions and examples, particularly in regard to credit card markets. Paul Resnick proposed the descriptive term “internetwork” externality to describe two-sided network externalities. Tom Eisenmann provided useful feedback and examples. We also thank Robert Gazzale, Moti Levi, and Craig Newmark for their many helpful observations. This research has been supported by NSF Career Award #IIS 9876233. For an earlier version of the paper that also addresses bundling and competition, please see “Information Complements, Substitutes, and Strategic Product Design,” November 2000, http://ssrn.com/abstract=249585.",
"title": ""
},
{
"docid": "686045e2dae16aba16c26b8ccd499731",
"text": "It has been argued that platform technology owners cocreate business value with other firms in their platform ecosystems by encouraging complementary invention and exploiting indirect network effects. In this study, we examine whether participation in an ecosystem partnership improves the business performance of small independent software vendors (ISVs) in the enterprise software industry and how appropriability mechanisms influence the benefits of partnership. By analyzing the partnering activities and performance indicators of a sample of 1,210 small ISVs over the period 1996–2004, we find that joining a major platform owner’s platform ecosystem is associated with an increase in sales and a greater likelihood of issuing an initial public offering (IPO). In addition, we show that these impacts are greater when ISVs have greater intellectual property rights or stronger downstream capabilities. This research highlights the value of interoperability between software products, and stresses that value cocreation and appropriation are not mutually exclusive strategies in interfirm collaboration.",
"title": ""
}
] | [
{
"docid": "e451eacd16b0dda85c0f576554b26d15",
"text": "The major challenge faced by the fifth generation (5G) mobile network is higher spectral efficiency and massive connectivity, i.e., the target spectrum efficiency is 3 times over 4G, and the target connection density is one million devices per square kilometer. These requirements are difficult to be satisfied with orthogonal multiple access (OMA) schemes. Non-orthogonal multiple access (NOMA) has thus been proposed as a promising candidate to address some of the challenges for 5G. In this paper, a comprehensive survey of different candidate NOMA schemes for 5G is presented, where the usage scenarios of 5G and the application requirements for NOMA are firstly discussed. A general framework of NOMA scheme is established and the features of typical NOMA schemes are analyzed and compared. We focus on the recent progress and challenge of NOMA in standardization of international telecommunication union (ITU), and 3rd generation partnership project (3GPP). In addition, prototype development and future research directions are also provided respectively.",
"title": ""
},
{
"docid": "6a3fe7de176dcca7da54d927d8901e38",
"text": "We demonstrate how two Novint Falcons, inexpensive commercially available haptic devices, can be modified to a create a reconfigurable five-degreeof-freedom (5-DOF) haptic device for less than $500 (including the two Falcons). The device is intended as an educational tool to allow a broader range of students to experience force and torque feedback, rather than the 3-DOF force feedback typical of inexpensive devices. We also explain how to implement a 5-DOF force/torque control system with gravity compensation.",
"title": ""
},
{
"docid": "8ff6325fed2f8f3323833f6ac446eb3d",
"text": "Learning linear combinations of multiple kernels is an appealing strategy when the right choice of features is unknown. Previous approaches to multiple kernel learning (MKL) promote sparse kernel combinations to support interpretability and scalability. Unfortunately, this `1-norm MKL is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures that generalize well, we extend MKL to arbitrary norms. We devise new insights on the connection between several existing MKL formulations and develop two efficient interleaved optimization strategies for arbitrary norms, that is `p-norms with p ≥ 1. This interleaved optimization is much faster than the commonly used wrapper approaches, as demonstrated on several data sets. A theoretical analysis and an experiment on controlled artificial data shed light on the appropriateness of sparse, non-sparse and `∞-norm MKL in various scenarios. Importantly, empirical applications of `p-norm MKL to three real-world problems from computational biology show that non-sparse MKL achieves accuracies that surpass the state-of-the-art. Data sets, source code to reproduce the experiments, implementations of the algorithms, and further information are available at http://doc.ml.tu-berlin.de/nonsparse_mkl/.",
"title": ""
},
{
"docid": "a2c9c975788253957e6bbebc94eb5a4b",
"text": "The implementation of Substrate Integrated Waveguide (SIW) structures in paper-based inkjet-printed technology is presented in this paper for the first time. SIW interconnects and components have been fabricated and tested on a multilayer paper substrate, which permits to implement low-cost and eco-friendly structures. A broadband and compact ridge substrate integrated slab waveguide covering the entire UWB frequency range is proposed and preliminarily verified. SIW structures appear particularly suitable for implementation on paper, due to the possibility to easily realize multilayered topologies and conformal geometries.",
"title": ""
},
{
"docid": "3baf11f31351e92c7ff56b066434ae2c",
"text": "Unlike images which are represented in regular dense grids, 3D point clouds are irregular and unordered, hence applying convolution on them can be difficult. In this paper, we extend the dynamic filter to a new convolution operation, named PointConv. PointConv can be applied on point clouds to build deep convolutional networks. We treat convolution kernels as nonlinear functions of the local coordinates of 3D points comprised of weight and density functions. With respect to a given point, the weight functions are learned with multi-layer perceptron networks and the density functions through kernel density estimation. A novel reformulation is proposed for efficiently computing the weight functions, which allowed us to dramatically scale up the network and significantly improve its performance. The learned convolution kernel can be used to compute translation-invariant and permutation-invariant convolution on any point set in the 3D space. Besides, PointConv can also be used as deconvolution operators to propagate features from a subsampled point cloud back to its original resolution. Experiments on ModelNet40, ShapeNet, and ScanNet show that deep convolutional neural networks built on PointConv are able to achieve state-ofthe-art on challenging semantic segmentation benchmarks on 3D point clouds. Besides, our experiments converting CIFAR-10 into a point cloud showed that networks built on PointConv can match the performance of convolutional networks in 2D images of a similar structure.",
"title": ""
},
{
"docid": "49a66c642e8804122e0200429de21c45",
"text": "As a type of Ehlers-Danlos syndrome (EDS), vascular EDs (vEDS) is typified by a number of characteristic facial features (eg, large eyes, small chin, sunken cheeks, thin nose and lips, lobeless ears). However, vEDs does not typically display hypermobility of the large joints and skin hyperextensibility, which are features typical of the more common forms of EDS. Thus, colonic perforation or aneurysm rupture may be the first presentation of the disease. Because both complications are associated with a reduced life expectancy for individuals with this condition, an awareness of the clinical features of vEDS is important. Here, we describe the treatment of vEDS lacking the characteristic facial attributes in a 24-year-old healthy man who presented to the emergency room with abdominal pain. Enhanced computed tomography revealed diverticula and perforation in the sigmoid colon. The lesion of the sigmoid colon perforation was removed, and Hartmann procedure was performed. During the surgery, the control of bleeding was required because of vascular fragility. Subsequent molecular and genetic analysis was performed based on the suspected diagnosis of vEDS. These analyses revealed reduced type III collagen synthesis in cultured skin fibroblasts and identified a previously undocumented mutation in the gene for a1 type III collagen, confirming the diagnosis of vEDS. After eliciting a detailed medical profile, we learned his mother had a history of extensive bruising since childhood and idiopathic hematothorax. Both were prescribed oral celiprolol. One year after admission, the patient was free of recurrent perforation. This case illustrates an awareness of the clinical characteristics of vEDS and the family history is important because of the high mortality from this condition even in young people. Importantly, genetic assays could help in determining the surgical procedure and offer benefits to relatives since this condition is inherited in an autosomal dominant manner.",
"title": ""
},
{
"docid": "90bb7ab528877c922758b44b102bf4e8",
"text": "Labeling training data is increasingly the largest bottleneck in deploying machine learning systems. We present Snorkel, a first-of-its-kind system that enables users to train state-of- the-art models without hand labeling any training data. Instead, users write labeling functions that express arbitrary heuristics, which can have unknown accuracies and correlations. Snorkel denoises their outputs without access to ground truth by incorporating the first end-to-end implementation of our recently proposed machine learning paradigm, data programming. We present a flexible interface layer for writing labeling functions based on our experience over the past year collaborating with companies, agencies, and research labs. In a user study, subject matter experts build models 2.8× faster and increase predictive performance an average 45.5% versus seven hours of hand labeling. We study the modeling tradeoffs in this new setting and propose an optimizer for automating tradeoff decisions that gives up to 1.8× speedup per pipeline execution. In two collaborations, with the U.S. Department of Veterans Affairs and the U.S. Food and Drug Administration, and on four open-source text and image data sets representative of other deployments, Snorkel provides 132% average improvements to predictive performance over prior heuristic approaches and comes within an average 3.60% of the predictive performance of large hand-curated training sets.",
"title": ""
},
{
"docid": "3830c568e6b9b56bab1c971d2a99757c",
"text": "Lagrangian theory provides a diverse set of tools for continuous motion analysis. Existing work shows the applicability of Lagrangian method for video analysis in several aspects. In this paper we want to utilize the concept of Lagrangian measures to detect violent scenes. Therefore we propose a local feature based on the SIFT algorithm that incooperates appearance and Lagrangian based motion models. We will show that the temporal interval of the used motion information is a crucial aspect and study its influence on the classification performance. The proposed LaSIFT feature outperforms other state-of-the-art local features, in particular in uncontrolled realistic video data. We evaluate our algorithm with a bag-of-word approach. The experimental results show a significant improvement over the state-of-the-art on current violent detection datasets, i.e. Crowd Violence, Hockey Fight.",
"title": ""
},
{
"docid": "90fe763855ca6c4fabe4f9d042d5c61a",
"text": "While learning models of intuitive physics is an increasingly active area of research, current approaches still fall short of natural intelligences in one important regard: they require external supervision, such as explicit access to physical states, at training and sometimes even at test times. Some authors have relaxed such requirements by supplementing the model with an handcrafted physical simulator. Still, the resulting methods are unable to automatically learn new complex environments and to understand physical interactions within them. In this work, we demonstrated for the first time learning such predictors directly from raw visual observations and without relying on simulators. We do so in two steps: first, we learn to track mechanically-salient objects in videos using causality and equivariance, two unsupervised learning principles that do not require auto-encoding. Second, we demonstrate that the extracted positions are sufficient to successfully train visual motion predictors that can take the underlying environment into account. We validate our predictors on synthetic datasets; then, we introduce a new dataset, ROLL4REAL, consisting of real objects rolling on complex terrains (pool table, elliptical bowl, and random height-field). We show that in all such cases it is possible to learn reliable extrapolators of the object trajectories from raw videos alone, without any form of external supervision and with no more prior knowledge than the choice of a convolutional neural network architecture.",
"title": ""
},
{
"docid": "45a98a82d462d8b12445cbe38f20849d",
"text": "Proliferative verrucous leukoplakia (PVL) is an aggressive form of oral leukoplakia that is persistent, often multifocal, and refractory to treatment with a high risk of recurrence and malignant transformation. This article describes the clinical aspects and histologic features of a case that demonstrated the typical behavior pattern in a long-standing, persistent lesion of PVL of the mandibular gingiva and that ultimately developed into squamous cell carcinoma. Prognosis is poor for this seemingly harmless-appearing white lesion of the oral mucosa.",
"title": ""
},
{
"docid": "1f7f0b82bf5822ee51313edfd1cb1593",
"text": "With the promise of meeting future capacity demands, 3-D massive-MIMO/full dimension multiple-input-multiple-output (FD-MIMO) systems have gained much interest in recent years. Apart from the huge spectral efficiency gain, 3-D massive-MIMO/FD-MIMO systems can also lead to significant reduction of latency, simplified multiple access layer, and robustness to interference. However, in order to completely extract the benefits of the system, accurate channel state information is critical. In this paper, a channel estimation method based on direction of arrival (DoA) estimation is presented for 3-D millimeter wave massive-MIMO orthogonal frequency division multiplexing (OFDM) systems. To be specific, the DoA is estimated using estimation of signal parameter via rotational invariance technique method, and the root mean square error of the DoA estimation is analytically characterized for the corresponding MIMO-OFDM system. An ergodic capacity analysis of the system in the presence of DoA estimation error is also conducted, and an optimum power allocation algorithm is derived. Furthermore, it is shown that the DoA-based channel estimation achieves a better performance than the traditional linear minimum mean squared error estimation in terms of ergodic throughput and minimum chordal distance between the subspaces of the downlink precoders obtained from the underlying channel and the estimated channel.",
"title": ""
},
{
"docid": "a239e75cb06355884f65f041e215b902",
"text": "BACKGROUND\nNecrotizing enterocolitis (NEC) and nosocomial sepsis are associated with increased morbidity and mortality in preterm infants. Through prevention of bacterial migration across the mucosa, competitive exclusion of pathogenic bacteria, and enhancing the immune responses of the host, prophylactic enteral probiotics (live microbial supplements) may play a role in reducing NEC and associated morbidity.\n\n\nOBJECTIVES\nTo compare the efficacy and safety of prophylactic enteral probiotics administration versus placebo or no treatment in the prevention of severe NEC and/or sepsis in preterm infants.\n\n\nSEARCH STRATEGY\nFor this update, searches were made of MEDLINE (1966 to October 2010), EMBASE (1980 to October 2010), the Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library, Issue 2, 2010), and abstracts of annual meetings of the Society for Pediatric Research (1995 to 2010).\n\n\nSELECTION CRITERIA\nOnly randomized or quasi-randomized controlled trials that enrolled preterm infants < 37 weeks gestational age and/or < 2500 g birth weight were considered. Trials were included if they involved enteral administration of any live microbial supplement (probiotics) and measured at least one prespecified clinical outcome.\n\n\nDATA COLLECTION AND ANALYSIS\nStandard methods of the Cochrane Collaboration and its Neonatal Group were used to assess the methodologic quality of the trials, data collection and analysis.\n\n\nMAIN RESULTS\nSixteen eligible trials randomizing 2842 infants were included. Included trials were highly variable with regard to enrollment criteria (i.e. birth weight and gestational age), baseline risk of NEC in the control groups, timing, dose, formulation of the probiotics, and feeding regimens. Data regarding extremely low birth weight infants (ELBW) could not be extrapolated. In a meta-analysis of trial data, enteral probiotics supplementation significantly reduced the incidence of severe NEC (stage II or more) (typical RR 0.35, 95% CI 0.24 to 0.52) and mortality (typical RR 0.40, 95% CI 0.27 to 0.60). There was no evidence of significant reduction of nosocomial sepsis (typical RR 0.90, 95% CI 0.76 to 1.07). The included trials reported no systemic infection with the probiotics supplemental organism. The statistical test of heterogeneity for NEC, mortality and sepsis was insignificant.\n\n\nAUTHORS' CONCLUSIONS\nEnteral supplementation of probiotics prevents severe NEC and all cause mortality in preterm infants. Our updated review of available evidence supports a change in practice. More studies are needed to assess efficacy in ELBW infants and assess the most effective formulation and dose to be utilized.",
"title": ""
},
{
"docid": "11c903f0dea5895a4f14c5625aa1554b",
"text": "Contemporary mobile devices are the result of an evolution process, during which computational and networking capabilities have been continuously pushed to keep pace with the constantly growing workload requirements. This has allowed devices such as smartphones, tablets, and personal digital assistants to perform increasingly complex tasks, up to the point of efficiently replacing traditional options such as desktop computers and notebooks. However, due to their portability and size, these devices are more prone to theft, to become compromised, or to be exploited for attacks and other malicious activity. The need for investigation of the aforementioned incidents resulted in the creation of the Mobile Forensics (MF) discipline. MF, a sub-domain of digital forensics, is specialized in extracting and processing evidence from mobile devices in such a way that attacking entities and actions are identified and traced. Beyond its primary research interest on evidence acquisition from mobile devices, MF has recently expanded its scope to encompass the organized and advanced evidence representation and analysis of future malicious entity behavior. Nonetheless, data acquisition still remains its main focus. While the field is under continuous research activity, new concepts such as the involvement of cloud computing in the MF ecosystem and the evolution of enterprise mobile solutions—particularly mobile device management and bring your own device—bring new opportunities and issues to the discipline. The current article presents the research conducted within the MF ecosystem during the last 7 years, identifies the gaps, and highlights the differences from past research directions, and addresses challenges and open issues in the field.",
"title": ""
},
{
"docid": "6b3abd92478a641d992ed4f4f08f52d5",
"text": "In this article, we consider the robust estimation of a location parameter using Mestimators. We propose here to couple this estimation with the robust scale estimate proposed in [Dahyot and Wilson, 2006]. The resulting procedure is then completely unsupervised. It is applied to camera motion estimation and moving object detection in videos. Experimental results on different video materials show the adaptability and the accuracy of this new robust approach.",
"title": ""
},
{
"docid": "860894abbbafdcb71178cb9ddd173970",
"text": "Twitter is useful in a situation of disaster for communication, announcement, request for rescue and so on. On the other hand, it causes a negative by-product, spreading rumors. This paper describe how rumors have spread after a disaster of earthquake, and discuss how can we deal with them. We first investigated actual instances of rumor after the disaster. And then we attempted to disclose characteristics of those rumors. Based on the investigation we developed a system which detects candidates of rumor from twitter and then evaluated it. The result of experiment shows the proposed algorithm can find rumors with acceptable accuracy.",
"title": ""
},
{
"docid": "7ddf5c53b9ee56cb92c67253f495aafd",
"text": "Two-way arrays or matrices are often not enough to represent all the information in the data and standard two-way analysis techniques commonly applied on matrices may fail to find the underlying structures in multi-modal datasets. Multiway data analysis has recently become popular as an exploratory analysis tool in discovering the structures in higher-order datasets, where data have more than two modes. We provide a review of significant contributions in the literature on multiway models, algorithms as well as their applications in diverse disciplines including chemometrics, neuroscience, social network analysis, text mining and computer vision.",
"title": ""
},
{
"docid": "b2db6db73699ecc66f33e2f277cf055b",
"text": "In this paper, we develop a new approach of spatially supervised recurrent convolutional neural networks for visual object tracking. Our recurrent convolutional network exploits the history of locations as well as the distinctive visual features learned by the deep neural networks. Inspired by recent bounding box regression methods for object detection, we study the regression capability of Long Short-Term Memory (LSTM) in the temporal domain, and propose to concatenate high-level visual features produced by convolutional networks with region information. In contrast to existing deep learning based trackers that use binary classification for region candidates, we use regression for direct prediction of the tracking locations both at the convolutional layer and at the recurrent unit. Our experimental results on challenging benchmark video tracking datasets show that our tracker is competitive with state-of-the-art approaches while maintaining low computational cost.",
"title": ""
},
{
"docid": "f29e5dae294434aa54ad2419e457b1eb",
"text": "Person re-identification aims to match images of the same person across disjoint camera views, which is a challenging problem in video surveillance. The major challenge of this task lies in how to preserve the similarity of the same person against large variations caused by complex backgrounds, mutual occlusions and different illuminations, while discriminating the different individuals. In this paper, we present a novel deep ranking model with feature learning and fusion by learning a large adaptive margin between the intra-class distance and inter-class distance to solve the person re-identification problem. Specifically, we organize the training images into a batch of pairwise samples. Treating these pairwise samples as inputs, we build a novel part-based deep convolutional neural network (CNN) to learn the layered feature representations by preserving a large adaptive margin. As a result, the final learned model can effectively find out the matched target to the anchor image among a number of candidates in the gallery image set by learning discriminative and stable feature representations. Overcoming the weaknesses of conventional fixed-margin loss functions, our adaptive margin loss function is more appropriate for the dynamic feature space. On four benchmark datasets, PRID2011, Market1501, CUHK01 and 3DPeS, we extensively conduct comparative evaluations to demonstrate the advantages of the proposed method over the state-of-the-art approaches in person re-identification.",
"title": ""
},
{
"docid": "7838934c12f00f987f6999460fc38ca1",
"text": "The Internet has fostered an unconventional and powerful style of collaboration: \"wiki\" web sites, where every visitor has the power to become an editor. In this paper we investigate the dynamics of Wikipedia, a prominent, thriving wiki. We make three contributions. First, we introduce a new exploratory data analysis tool, the history flow visualization, which is effective in revealing patterns within the wiki context and which we believe will be useful in other collaborative situations as well. Second, we discuss several collaboration patterns highlighted by this visualization tool and corroborate them with statistical analysis. Third, we discuss the implications of these patterns for the design and governance of online collaborative social spaces. We focus on the relevance of authorship, the value of community surveillance in ameliorating antisocial behavior, and how authors with competing perspectives negotiate their differences.",
"title": ""
},
{
"docid": "9cb703cf5394a77bd15c0ad356928f04",
"text": "Studies were undertaken to evaluate locally available subtrates for use in a culture medium for Phytophthora infestans (Mont.) de Bary employing a protocol similar to that used for the preparation of rye A agar. Test media preparations were assessed for growth, sporulation, oospore formation, and long-term storage of P. infestans. Media prepared from grains and fresh produce available in Thailand and Asian countries such as black bean (BB), red kidney bean (RKB), black sesame (BSS), sunflower (SFW) and sweet corn supported growth and sporulation of representative isolates compared with rye A, V8 and oat meal media. Oospores were successfully formed on BB and RKB media supplemented with β-sitosterol. The BB, RKB, BSS and SFW media maintained viable fungal cultures with sporulation ability for 8 months, similar to the rye A medium. Three percent and 33% of 135 isolates failed to grow on V8 and SFW media, respectively.",
"title": ""
}
] | scidocsrr |
9e94a07f70d58bc9c62a0aa9cd109816 | Next-Generation Machine Learning for Biological Networks | [
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
}
] | [
{
"docid": "7c057b63c525a03ad2f40f625b6157e3",
"text": "As the selection of products and services becomes profuse in the technology market, it is often the delighting user experience (UX) that differentiates a successful product from the competitors. Product development is no longer about implementing features and testing their usability, but understanding users' daily lives and evaluating if a product resonates with the in-depth user needs. Although UX is a widely adopted term in industry, the tools for evaluating UX in product development are still inadequate. Based on industrial case studies and the latest research on UX evaluation, this workshop forms a model for aligning the used UX evaluation methods to product development processes. The results can be used to advance the state of \"putting UX evaluation into practice\".",
"title": ""
},
{
"docid": "96a79bc015e34db18e32a31bfaaace36",
"text": "We consider social media as a promising tool for public health, focusing on the use of Twitter posts to build predictive models about the forthcoming influence of childbirth on the behavior and mood of new mothers. Using Twitter posts, we quantify postpartum changes in 376 mothers along dimensions of social engagement, emotion, social network, and linguistic style. We then construct statistical models from a training set of observations of these measures before and after the reported childbirth, to forecast significant postpartum changes in mothers. The predictive models can classify mothers who will change significantly following childbirth with an accuracy of 71%, using observations about their prenatal behavior, and as accurately as 80-83% when additionally leveraging the initial 2-3 weeks of postnatal data. The study is motivated by the opportunity to use social media to identify mothers at risk of postpartum depression, an underreported health concern among large populations, and to inform the design of low-cost, privacy-sensitive early-warning systems and intervention programs aimed at promoting wellness postpartum.",
"title": ""
},
{
"docid": "14d68a45e54b07efb15ef950ba92d7bc",
"text": "We propose a novel hierarchical approach for text-to-image synthesis by inferring semantic layout. Instead of learning a direct mapping from text to image, our algorithm decomposes the generation process into multiple steps, in which it first constructs a semantic layout from the text by the layout generator and converts the layout to an image by the image generator. The proposed layout generator progressively constructs a semantic layout in a coarse-to-fine manner by generating object bounding boxes and refining each box by estimating object shapes inside the box. The image generator synthesizes an image conditioned on the inferred semantic layout, which provides a useful semantic structure of an image matching with the text description. Our model not only generates semantically more meaningful images, but also allows automatic annotation of generated images and user-controlled generation process by modifying the generated scene layout. We demonstrate the capability of the proposed model on challenging MS-COCO dataset and show that the model can substantially improve the image quality, interpretability of output and semantic alignment to input text over existing approaches.",
"title": ""
},
{
"docid": "6aa4b1064833af0c91d16af28136e7e4",
"text": "Recently, supervised classification has been shown to work well for the task of speech separation. We perform an in-depth evaluation of such techniques as a front-end for noise-robust automatic speech recognition (ASR). The proposed separation front-end consists of two stages. The first stage removes additive noise via time-frequency masking. The second stage addresses channel mismatch and the distortions introduced by the first stage; a non-linear function is learned that maps the masked spectral features to their clean counterpart. Results show that the proposed front-end substantially improves ASR performance when the acoustic models are trained in clean conditions. We also propose a diagonal feature discriminant linear regression (dFDLR) adaptation that can be performed on a per-utterance basis for ASR systems employing deep neural networks and HMM. Results show that dFDLR consistently improves performance in all test conditions. Surprisingly, the best average results are obtained when dFDLR is applied to models trained using noisy log-Mel spectral features from the multi-condition training set. With no channel mismatch, the best results are obtained when the proposed speech separation front-end is used along with multi-condition training using log-Mel features followed by dFDLR adaptation. Both these results are among the best on the Aurora-4 dataset.",
"title": ""
},
{
"docid": "d88ce9c09fdfa0c1ea023ce08183f39b",
"text": "The development of the Internet in recent years has made it possible and useful to access many different information systems anywhere in the world to obtain information. While there is much research on the integration of heterogeneous information systems, most commercial systems stop short of the actual integration of available data. Data fusion is the process of fusing multiple records representing the same real-world object into a single, consistent, and clean representation.\n This article places data fusion into the greater context of data integration, precisely defines the goals of data fusion, namely, complete, concise, and consistent data, and highlights the challenges of data fusion, namely, uncertain and conflicting data values. We give an overview and classification of different ways of fusing data and present several techniques based on standard and advanced operators of the relational algebra and SQL. Finally, the article features a comprehensive survey of data integration systems from academia and industry, showing if and how data fusion is performed in each.",
"title": ""
},
{
"docid": "6c106d560d8894d941851386d96afe2b",
"text": "Cooperative vehicular networks require the exchange of positioning and basic status information between neighboring nodes to support higher layer protocols and applications, including active safety applications. The information exchange is based on the periodic transmission/reception of 1-hop broadcast messages on the so called control channel. The dynamic adaptation of the transmission parameters of such messages will be key for the reliable and efficient operation of the system. On one hand, congestion control protocols need to be applied to control the channel load, typically through the adaptation of the transmission parameters based on certain channel load metrics. On the other hand, awareness control protocols are also required to adequately support cooperative vehicular applications. Such protocols typically adapt the transmission parameters of periodic broadcast messages to ensure each vehicle's capacity to detect, and possibly communicate, with the relevant vehicles and infrastructure nodes present in its local neighborhood. To date, congestion and awareness control protocols have been normally designed and evaluated separately, although both will be required for the reliable and efficient operation of the system. To this aim, this paper proposes and evaluates INTERN, a new control protocol that integrates two congestion and awareness control processes. The simulation results obtained demonstrate that INTERN is able to satisfy the application's requirements of all vehicles, while effectively controlling the channel load.",
"title": ""
},
{
"docid": "5dac8ef81c7a6c508c603b3fd6a87581",
"text": "In this paper, we present a novel benchmark for the evaluation of RGB-D SLAM systems. We recorded a large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system. The sequences contain both the color and depth images in full sensor resolution (640 × 480) at video frame rate (30 Hz). The ground-truth trajectory was obtained from a motion-capture system with eight high-speed tracking cameras (100 Hz). The dataset consists of 39 sequences that were recorded in an office environment and an industrial hall. The dataset covers a large variety of scenes and camera motions. We provide sequences for debugging with slow motions as well as longer trajectories with and without loop closures. Most sequences were recorded from a handheld Kinect with unconstrained 6-DOF motions but we also provide sequences from a Kinect mounted on a Pioneer 3 robot that was manually navigated through a cluttered indoor environment. To stimulate the comparison of different approaches, we provide automatic evaluation tools both for the evaluation of drift of visual odometry systems and the global pose error of SLAM systems. The benchmark website [1] contains all data, detailed descriptions of the scenes, specifications of the data formats, sample code, and evaluation tools.",
"title": ""
},
{
"docid": "31346876446c21b92f088b852c0201b2",
"text": "In this paper, the closed-form design method of an Nway dual-band Wilkinson hybrid power divider is proposed. This symmetric structure including N groups of two sections of transmission lines and two isolated resistors is described which can split a signal into N equiphase equiamplitude parts at two arbitrary frequencies (dual-band) simultaneously, where N can be odd or even. Based on the rigorous evenand odd-mode analysis, the closed-form design equations are derived. For verification, various numerical examples are designed, calculated and compared while two practical examples including two ways and three ways dual-band microstrip power dividers are fabricated and measured. It is very interesting that this generalized power divider with analytical design equations can be designed for wideband applications when the frequency-ratio is relatively small. In addition, it is found that the conventional N-way hybrid Wilkinson power divider for single-band applications is a special case (the frequency-ratio equals to 3) of this generalized power divider.",
"title": ""
},
{
"docid": "ca1aeb2730eb11844d0dde46cf15de4e",
"text": "Knowledge of the bio-impedance and its equivalent circuit model at the electrode-electrolyte/tissue interface is important in the application of functional electrical stimulation. Impedance can be used as a merit to evaluate the proximity between electrodes and targeted tissues. Understanding the equivalent circuit parameters of the electrode can further be leveraged to set a safe boundary for stimulus parameters in order not to exceed the water window of electrodes. In this paper, we present an impedance characterization technique and implement a proof-of-concept system using an implantable neural stimulator and an off-the-shelf microcontroller. The proposed technique yields the parameters of the equivalent circuit of an electrode through large signal analysis by injecting a single low-intensity biphasic current stimulus with deliberately inserted inter-pulse delay and by acquiring the transient electrode voltage at three well-specified timings. Using low-intensity stimulus allows the derivation of electrode double layer capacitance since capacitive charge-injection dominates when electrode overpotential is small. Insertion of the inter-pulse delay creates a controlled discharge time to estimate the Faradic resistance. The proposed method has been validated by measuring the impedance of a) an emulated Randles cells made of discrete circuit components and b) a custom-made platinum electrode array in-vitro, and comparing estimated parameters with the results derived from an impedance analyzer. The proposed technique can be integrated into implantable or commercial neural stimulator system at low extra power consumption, low extra-hardware cost, and light computation.",
"title": ""
},
{
"docid": "135f4008d9c7edc3d7ab8c7f9eb0c85e",
"text": "Organizations deploy gamification in CSCW systems to enhance motivation and behavioral outcomes of users. However, gamification approaches often cause competition between users, which might be inappropriate for working environments that seek cooperation. Drawing on the social interdependence theory, this paper provides a classification for gamification features and insights about the design of cooperative gamification. Using the example of an innova-tion community of a German engineering company, we present the design of a cooperative gamification approach and results from a first experimental evaluation. The findings indicate that the developed gamification approach has positive effects on perceived enjoyment and the intention towards knowledge sharing in the considered innovation community. Besides our conceptual contribu-tion, our findings suggest that cooperative gamification may be beneficial for cooperative working environments and represents a promising field for future research.",
"title": ""
},
{
"docid": "3ff330ab15962b09584e1636de7503ea",
"text": "By diverting funds away from legitimate partners (a.k.a publishers), click fraud represents a serious drain on advertising budgets and can seriously harm the viability of the internet advertising market. As such, fraud detection algorithms which can identify fraudulent behavior based on user click patterns are extremely valuable. Based on the BuzzCity dataset, we propose a novel approach for click fraud detection which is based on a set of new features derived from existing attributes. The proposed model is evaluated in terms of the resulting precision, recall and the area under the ROC curve. A final ensemble model based on 6 different learning algorithms proved to be stable with respect to all 3 performance indicators. Our final model shows improved results on training, validation and test datasets, thus demonstrating its generalizability to different datasets.",
"title": ""
},
{
"docid": "76156cea2ef1d49179d35fd8f333b011",
"text": "Climate change, pollution, and energy insecurity are among the greatest problems of our time. Addressing them requires major changes in our energy infrastructure. Here, we analyze the feasibility of providing worldwide energy for all purposes (electric power, transportation, heating/cooling, etc.) from wind, water, and sunlight (WWS). In Part I, we discuss WWS energy system characteristics, current and future energy demand, availability of WWS resources, numbers of WWS devices, and area and material requirements. In Part II, we address variability, economics, and policy of WWS energy. We estimate that !3,800,000 5 MW wind turbines, !49,000 300 MW concentrated solar plants, !40,000 300 MW solar PV power plants, !1.7 billion 3 kW rooftop PV systems, !5350 100 MWgeothermal power plants, !270 new 1300 MW hydroelectric power plants, !720,000 0.75 MWwave devices, and !490,000 1 MW tidal turbines can power a 2030 WWS world that uses electricity and electrolytic hydrogen for all purposes. Such a WWS infrastructure reduces world power demand by 30% and requires only !0.41% and !0.59% more of the world’s land for footprint and spacing, respectively. We suggest producing all new energy withWWSby 2030 and replacing the pre-existing energy by 2050. Barriers to the plan are primarily social and political, not technological or economic. The energy cost in a WWS world should be similar to",
"title": ""
},
{
"docid": "1527601285eb1b2ef2de040154e3d4fb",
"text": "This paper exploits the context of natural dynamic scenes for human action recognition in video. Human actions are frequently constrained by the purpose and the physical properties of scenes and demonstrate high correlation with particular scene classes. For example, eating often happens in a kitchen while running is more common outdoors. The contribution of this paper is three-fold: (a) we automatically discover relevant scene classes and their correlation with human actions, (b) we show how to learn selected scene classes from video without manual supervision and (c) we develop a joint framework for action and scene recognition and demonstrate improved recognition of both in natural video. We use movie scripts as a means of automatic supervision for training. For selected action classes we identify correlated scene classes in text and then retrieve video samples of actions and scenes for training using script-to-video alignment. Our visual models for scenes and actions are formulated within the bag-of-features framework and are combined in a joint scene-action SVM-based classifier. We report experimental results and validate the method on a new large dataset with twelve action classes and ten scene classes acquired from 69 movies.",
"title": ""
},
{
"docid": "5116ac47f91a798b9ddb6bc3da737c70",
"text": "Mobile brokerage services represent an emerging application of mobile commerce in the brokerage industry. Compared with telephone-based trading services and online brokerage services, they have advantages such as ubiquity, convenience, and privacy. However, the number of investors using mobile brokerage services to conduct brokerage transactions is far smaller than those using other trading methods. A plausible reason for this is that investors lack initial trust in mobile brokerage services, which affects their acceptance of them. This research examines trust transfer as a means of establishing initial trust in mobile brokerage services. We analyze how an investor’s trust in the online brokerage services of a brokerage firm affects her cognitive beliefs about the mobile brokerage services of the firm and what other key factors influence the formation of initial trust in mobile brokerage services. We develop and empirically test a theoretical model of trust transfer from the online to the mobile channels. Our results indicate that trust in online brokerage services not only has a direct effect on initial trust but also has an indirect effect through other variables. This study provides useful suggestions and implications for academics and practitioners. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b761b12bf2f7d9652fdfd7e7cd4f3ef3",
"text": "Knowledge graphs represent concepts (e.g., people, places, events) and their semantic relationships. As a data structure, they underpin a digital information system, support users in resource discovery and retrieval, and are useful for navigation and visualization purposes. Within the libaries and humanities domain, knowledge graphs are typically rooted in knowledge organization systems, which have a century-old tradition and have undergone their digital transformation with the advent of the Web and Linked Data. Being exposed to the Web, metadata and concept definitions are now forming an interconnected and decentralized global knowledge network that can be curated and enriched by community-driven editorial processes. In the future, knowledge graphs could be vehicles for formalizing and connecting findings and insights derived from the analysis of possibly large-scale corpora in the libraries and digital humanities domain.",
"title": ""
},
{
"docid": "763b8982d13b0637a17347b2c557f1f8",
"text": "This paper describes an application of Case-Based Reasonin g to the problem of reducing the number of final-line fraud investigation s i the credit approval process. The performance of a suite of algorithms whi ch are applied in combination to determine a diagnosis from a set of retriev ed cases is reported. An adaptive diagnosis algorithm combining several neighbourhoodbased and probabilistic algorithms was found to have the bes t performance, and these results indicate that an adaptive solution can pro vide fraud filtering and case ordering functions for reducing the number of fin al-li e fraud investigations necessary.",
"title": ""
},
{
"docid": "5ca36b7877ebd3d05e48d3230f2dceb0",
"text": "BACKGROUND\nThe frontal branch has a defined course along the Pitanguy line from tragus to lateral brow, although its depth along this line is controversial. The high-superficial musculoaponeurotic system (SMAS) face-lift technique divides the SMAS above the arch, which conflicts with previous descriptions of the frontal nerve depth. This anatomical study defines the depth and fascial boundaries of the frontal branch of the facial nerve over the zygomatic arch.\n\n\nMETHODS\nEight fresh cadaver heads were included in the study, with bilateral facial nerves studied (n = 16). The proximal frontal branches were isolated and then sectioned in full-thickness tissue blocks over a 5-cm distance over the zygomatic arch. The tissue blocks were evaluated histologically for the depth and fascial planes surrounding the frontal nerve. A dissection video accompanies this article.\n\n\nRESULTS\nThe frontal branch of the facial nerve was identified in each tissue section and its fascial boundaries were easily identified using epidermis and periosteum as reference points. The frontal branch coursed under a separate fascial plane, the parotid-temporal fascia, which was deep to the SMAS as it coursed to the zygomatic arch and remained within this deep fascia over the arch. The frontal branch was intact and protected by the parotid-temporal fascia after a high-SMAS face lift.\n\n\nCONCLUSIONS\nThe frontal branch of the facial nerve is protected by a deep layer of fascia, termed the parotid-temporal fascia, which is separate from the SMAS as it travels over the zygomatic arch. Division of the SMAS above the arch in a high-SMAS face lift is safe using the technique described in this study.",
"title": ""
},
{
"docid": "95babe8b0bd1674ece34cb311db37835",
"text": "We aim at estimating the fundamental matrix in two views from five correspondences of rotation invariant features obtained by e.g. the SIFT detector. The proposed minimal solver1 first estimates a homography from three correspondences assuming that they are co-planar and exploiting their rotational components. Then the fundamental matrix is obtained from the homography and two additional point pairs in general position. The proposed approach, combined with robust estimators like Graph-Cut RANSAC, is superior to other state-of-the-art algorithms both in terms of accuracy and number of iterations required. This is validated on synthesized data and 561 real image pairs. Moreover, the tests show that requiring three points on a plane is not too restrictive in urban environment and locally optimized robust estimators lead to accurate estimates even if the points are not entirely co-planar. As a potential application, we show that using the proposed method makes two-view multi-motion estimation more accurate.",
"title": ""
},
{
"docid": "f519e878b3aae2f0024978489db77425",
"text": "In this paper, we propose a new halftoning scheme that preserves the structure and tone similarities of images while maintaining the simplicity of Floyd-Steinberg error diffusion. Our algorithm is based on the Floyd-Steinberg error diffusion algorithm, but the threshold modulation part is modified to improve the over-blurring issue of the Floyd-Steinberg error diffusion algorithm. By adding some structural information on images obtained using the Laplacian operator to the quantizer thresholds, the structural details in the textured region can be preserved. The visual artifacts of the original error diffusion that is usually visible in the uniform region is greatly reduced by adding noise to the thresholds. This is especially true for the low contrast region because most existing error diffusion algorithms cannot preserve structural details but our algorithm preserves them clearly using threshold modulation. Our algorithm has been evaluated using various types of images including some with the low contrast region and assessed numerically using the MSSIM measure with other existing state-of-art halftoning algorithms. The results show that our method performs better than existing approaches both in the textured region and in the uniform region with the faster computation speed.",
"title": ""
},
{
"docid": "5f6b248776b3b7ad7a840ac5224587be",
"text": "We present in this paper a superpixel segmentation algorithm called Linear Spectral Clustering (LSC), which produces compact and uniform superpixels with low computational costs. Basically, a normalized cuts formulation of the superpixel segmentation is adopted based on a similarity metric that measures the color similarity and space proximity between image pixels. However, instead of using the traditional eigen-based algorithm, we approximate the similarity metric using a kernel function leading to an explicitly mapping of pixel values and coordinates into a high dimensional feature space. We revisit the conclusion that by appropriately weighting each point in this feature space, the objective functions of weighted K-means and normalized cuts share the same optimum point. As such, it is possible to optimize the cost function of normalized cuts by iteratively applying simple K-means clustering in the proposed feature space. LSC is of linear computational complexity and high memory efficiency and is able to preserve global properties of images. Experimental results show that LSC performs equally well or better than state of the art superpixel segmentation algorithms in terms of several commonly used evaluation metrics in image segmentation.",
"title": ""
}
] | scidocsrr |
1cbb8aac17cdcd4ff4ffb8a537dfbe54 | Multilevel Inverter For Grid-Connected PV System Employing Digital PI Controller | [
{
"docid": "913709f4fe05ba2783c3176ed00015fe",
"text": "A generalization of the PWM (pulse width modulation) subharmonic method for controlling single-phase or three-phase multilevel voltage source inverters (VSIs) is considered. Three multilevel PWM techniques for VSI inverters are presented. An analytical expression of the spectral components of the output waveforms covering all the operating conditions is derived. The analysis is based on an extension of Bennet's method. The improvements in harmonic spectrum are pointed out, and several examples are presented which prove the validity of the multilevel modulation. Improvements in the harmonic contents were achieved due to the increased number of levels.<<ETX>>",
"title": ""
}
] | [
{
"docid": "3e4a2d4564e9904b3d3b0457860da5cf",
"text": "Model-based, torque-level control can offer precision and speed advantages over velocity-level or position-level robot control. However, the dynamic parameters of the robot must be identified accurately. Several steps are involved in dynamic parameter identification, including modeling the system dynamics, joint position/torque data acquisition and filtering, experimental design, dynamic parameters estimation and validation. In this paper, we propose a novel, computationally efficient and intuitive optimality criterion to design the excitation trajectory for the robot to follow. Experiments are carried out for a 6 degree of freedom (DOF) Staubli TX-90 robot. We validate the dynamics parameters using torque prediction accuracy and compare to existing methods. The RMS errors of the prediction were small, and the computation time for the new, optimal objective function is an order of magnitude less than for existing approaches. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0f122797e9102c6bab57e64176ee5e84",
"text": "We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10, 14, 16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings.",
"title": ""
},
{
"docid": "1afe9ff72d69e09c24a11187ea7dca2d",
"text": "In the Intelligent Robotics Laboratory (IRL) at Vanderbilt University we seek to develop service robots with a high level of social intelligence and interactivity. In order to achieve this goal, we have identified two main issues for research. The first issue is how to achieve a high level of interaction between the human and the robot. This has lead to the formulation of our philosophy of Human Directed Local Autonomy (HuDL), a guiding principle for research, design, and implementation of service robots. The motivation for integrating humans into a service robot system is to take advantage of human intelligence and skill. Human intelligence can be used to interpret robot sensor data, eliminating computationally expensive and possibly error-prone automated analyses. Human skill is a valuable resource for trajectory and path planning as well as for simplifying the search process. In this paper we present our plans for integrating humans into a service robot system. We present our paradigm for human/robot interaction, HuDL. The second issue is the general problem of system integration, with a specific focus on integrating humans into the service robotic system. This work has lead to the development of the Intelligent Machine Architecture (IMA), a novel software architecture that has been specifically designed to simplify the integration of the many diverse algorithms, sensors, and actuators necessary for socially intelligent service robots. Our testbed system is described, and some example applications of HuDL for aids to the physically disabled are given. An evaluation of the effectiveness of the IMA is also presented.",
"title": ""
},
{
"docid": "d0cf952865b72f25d9b8b049f717d976",
"text": "In this paper, we consider the problem of estimating the relative expertise score of users in community question and answering services (CQA). Previous approaches typically only utilize the explicit question answering relationship between askers and an-swerers and apply link analysis to address this problem. The im-plicit pairwise comparison between two users that is implied in the best answer selection is ignored. Given a question and answering thread, it's likely that the expertise score of the best answerer is higher than the asker's and all other non-best answerers'. The goal of this paper is to explore such pairwise comparisons inferred from best answer selections to estimate the relative expertise scores of users. Formally, we treat each pairwise comparison between two users as a two-player competition with one winner and one loser. Two competition models are proposed to estimate user expertise from pairwise comparisons. Using the NTCIR-8 CQA task data with 3 million questions and introducing answer quality prediction based evaluation metrics, the experimental results show that the pairwise comparison based competition model significantly outperforms link analysis based approaches (PageRank and HITS) and pointwise approaches (number of best answers and best answer ratio) for estimating the expertise of active users. Furthermore, it's shown that pairwise comparison based competi-tion models have better discriminative power than other methods. It's also found that answer quality (best answer) is an important factor to estimate user expertise.",
"title": ""
},
{
"docid": "b6f4bd15f7407b56477eb2cfc4c72801",
"text": "In this study, we present several image segmentation techniques for various image scales and modalities. We consider cellular-, organ-, and whole organism-levels of biological structures in cardiovascular applications. Several automatic segmentation techniques are presented and discussed in this work. The overall pipeline for reconstruction of biological structures consists of the following steps: image pre-processing, feature detection, initial mask generation, mask processing, and segmentation post-processing. Several examples of image segmentation are presented, including patient-specific abdominal tissues segmentation, vascular network identification and myocyte lipid droplet micro-structure reconstruction.",
"title": ""
},
{
"docid": "a1b24627f8ba518fa9285596cc931e32",
"text": "[3] Rakesh Agrawal and Arun Swami. A one-pass space-efficient algorithm for finding quantiles. A one-pass algorithm for accurately estimating quantiles for disk-resident data. [8] Jürgen Beringer and Eyke Hüllermeier. An efficient algorithm for instance-based learning on data streams.",
"title": ""
},
{
"docid": "0ce7465e40b3b13e5c316fb420a766d9",
"text": "We have been developing ldquoSmart Suitrdquo as a soft and light-weight wearable power assist system. A prototype for preventing low-back injury in agricultural works and its semi-active assist mechanism have been developed in the previous study. The previous prototype succeeded to reduce about 14% of average muscle fatigues of body trunk in waist extension/flexion motion. In this paper, we describe a prototype of smart suit for supporting waist and knee joint, and its control method for preventing the displacement of the adjustable assist force mechanism in order to keep the assist efficiency.",
"title": ""
},
{
"docid": "15f099c342b7f9beae9c0b193f49f7f4",
"text": "We study rare events data, binary dependent variables with dozens to thousands of times fewer ones (events, such as wars, vetoes, cases of political activism, or epidemiological infections) than zeros (“nonevents”). In many literatures, these variables have proven difficult to explain and predict, a problem that seems to have at least two sources. First, popular statistical procedures, such as logistic regression, can sharply underestimate the probability of rare events. We recommend corrections that outperform existing methods and change the estimates of absolute and relative risks by as much as some estimated effects reported in the literature. Second, commonly used data collection strategies are grossly inefficient for rare events data. The fear of collecting data with too few events has led to data collections with huge numbers of observations but relatively few, and poorly measured, explanatory variables, such as in international conflict data with more than a quarter-million dyads, only a few of which are at war. As it turns out, more efficient sampling designs exist for making valid inferences, such as sampling all available events (e.g., wars) and a tiny fraction of nonevents (peace). This enables scholars to save as much as 99% of their (nonfixed) data collection costs or to collect much more meaningful explanatory variables. We provide methods that link these two results, enabling both types of corrections to work simultaneously, and software that implements the methods developed.",
"title": ""
},
{
"docid": "2e1cb87045b5356a965aa52e9e745392",
"text": "Community detection is a common problem in graph data analytics that consists of finding groups of densely connected nodes with few connections to nodes outside of the group. In particular, identifying communities in large-scale networks is an important task in many scientific domains. In this review, we evaluated eight state-of-the-art and five traditional algorithms for overlapping and disjoint community detection on large-scale real-world networks with known ground-truth communities. These 13 algorithms were empirically compared using goodness metrics that measure the structural properties of the identified communities, as well as performance metrics that evaluate these communities against the ground-truth. Our results show that these two types of metrics are not equivalent. That is, an algorithm may perform well in terms of goodness metrics, but poorly in terms of performance metrics, or vice versa. © 2014 The Authors. WIREs Computational Statistics published by Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "b1b2a83d67456c0f0bf54092cbb06e65",
"text": "The transmission of voice communications as datagram packets over IP networks, commonly known as voice-over-IP (VoIP) telephony, is rapidly gaining wide acceptance. With private phone conversations being conducted on insecure public networks, security of VoIP communications is increasingly important. We present a structured security analysis of the VoIP protocol stack, which consists of signaling (SIP), session description (SDP), key establishment (SDES, MIKEY, and ZRTP) and secure media transport (SRTP) protocols. Using a combination of manual and tool-supported formal analysis, we uncover several design flaws and attacks, most of which are caused by subtle inconsistencies between the assumptions that protocols at different layers of the VoIP stack make about each other. The most serious attack is a replay attack on SDES, which causes SRTP to repeat the keystream used for media encryption, thus completely breaking transport-layer security. We also demonstrate a man-in-the-middle attack on ZRTP, which allows the attacker to convince the communicating parties that they have lost their shared secret. If they are using VoIP devices without displays and thus cannot execute the \"human authentication\" procedure, they are forced to communicate insecurely, or not communicate at all, i.e., this becomes a denial of service attack. Finally, we show that the key derivation process used in MIKEY cannot be used to prove security of the derived key in the standard cryptographic model for secure key exchange.",
"title": ""
},
{
"docid": "70366939a4386fd4a712efc704c8e248",
"text": "k-Means is a versatile clustering algorithm widely used in practice. To cluster large data sets, state-of-the-art implementations use GPUs to shorten the data to knowledge time. These implementations commonly assign points on a GPU and update centroids on a CPU. We identify two main shortcomings of this approach. First, it requires expensive data exchange between processors when switching between the two processing steps point assignment and centroid update. Second, even when processing both steps of k-means on the same processor, points still need to be read two times within an iteration, leading to inefficient use of memory bandwidth. In this paper, we present a novel approach for centroid update that allows us to efficiently process both phases of k-means on GPUs. We fuse point assignment and centroid update to execute one iteration with a single pass over the points. Our evaluation shows that our k-means approach scales to very large data sets. Overall, we achieve up to 20 × higher throughput compared to the state-of-the-art approach.",
"title": ""
},
{
"docid": "f9cba94dee194cb38923a3ba47b0a2b6",
"text": "We investigate the value of feature engineering and neural network models for predicting successful writing. Similar to previous work, we treat this as a binary classification task and explore new strategies to automatically learn representations from book contents. We evaluate our feature set on two different corpora created from Project Gutenberg books. The first presents a novel approach for generating the gold standard labels for the task and the other is based on prior research. Using a combination of hand-crafted and recurrent neural network learned representations in a dual learning setting, we obtain the best performance of 73.50% weighted F1-score.",
"title": ""
},
{
"docid": "82857fedec78e8317498e3c66268d965",
"text": "In this paper, we provide an improved evolutionary algorithm for bilevel optimization. It is an extension of a recently proposed Bilevel Evolutionary Algorithm based on Quadratic Approximations (BLEAQ). Bilevel optimization problems are known to be difficult and computationally demanding. The recently proposed BLEAQ approach has been able to bring down the computational expense significantly as compared to the contemporary approaches. The strategy proposed in this paper further improves the algorithm by incorporating archiving and local search. Archiving is used to store the feasible members produced during the course of the algorithm that provide a larger pool of members for better quadratic approximations of optimal lower level solutions. Frequent local searches at upper level supported by the quadratic approximations help in faster convergence of the algorithm. The improved results have been demonstrated on two different sets of test problems, and comparison results against the contemporary approaches are also provided.",
"title": ""
},
{
"docid": "5123d52a50b75e37e90ed7224d531a18",
"text": "Tarlov or perineural cysts are nerve root cysts found most commonly at the sacral spine level arising between covering layers of the perineurium and the endoneurium near the dorsal root ganglion. The cysts are relatively rare and most of them are asymptomatic. Some Tarlov cysts can exert pressure on nerve elements resulting in pain, radiculopathy and even multiple radiculopathy of cauda equina. There is no consensus on the appropriate therapeutic options of Tarlov cysts. The authors present a case of two sacral cysts diagnosed with magnetic resonance imaging. The initial symptoms were low back pain and sciatica and progressed to cauda equina syndrome. Surgical treatment was performed by sacral laminectomy and wide cyst fenestration. The neurological deficits were recovered and had not recurred after a follow-up period of nine months. The literature was reviewed and discussed. This is the first reported case in Thailand.",
"title": ""
},
{
"docid": "39710768ed8ec899e412cccae7e7d262",
"text": "Traditional classification algorithms assume that training and test data come from similar distributions. This assumption is violated in adversarial settings, where malicious actors modify instances to evade detection. A number of custom methods have been developed for both adversarial evasion attacks and robust learning. We propose the first systematic and general-purpose retraining framework which can: a) boost robustness of an arbitrary learning algorithm, in the face of b) a broader class of adversarial models than any prior methods. We show that, under natural conditions, the retraining framework minimizes an upper bound on optimal adversarial risk, and show how to extend this result to account for approximations of evasion attacks. Extensive experimental evaluation demonstrates that our retraining methods are nearly indistinguishable from state-of-the-art algorithms for optimizing adversarial risk, but are more general and far more scalable. The experiments also confirm that without retraining, our adversarial framework dramatically reduces the effectiveness of learning. In contrast, retraining significantly boosts robustness to evasion attacks without significantly compromising overall accuracy.",
"title": ""
},
{
"docid": "8474b5b3ed5838e1d038e73579168f40",
"text": "For the first time to the best of our knowledge, this paper provides an overview of millimeter-wave (mmWave) 5G antennas for cellular handsets. Practical design considerations and solutions related to the integration of mmWave phased-array antennas with beam switching capabilities are investigated in detail. To experimentally examine the proposed methodologies, two types of mesh-grid phased-array antennas featuring reconfigurable horizontal and vertical polarizations are designed, fabricated, and measured at the 60 GHz spectrum. Afterward the antennas are integrated with the rest of the 60 GHz RF and digital architecture to create integrated mmWave antenna modules and implemented within fully operating cellular handsets under plausible user scenarios. The effectiveness, current limitations, and required future research areas regarding the presented mmWave 5G antenna design technologies are studied using mmWave 5G system benchmarks.",
"title": ""
},
{
"docid": "a0c3d1bae7b670884afd3e7119fcd095",
"text": "Twitter is a widely-used social networking service which enables its users to post text-based messages, so-called tweets. POI tags on tweets can show more human-readable high-level information about a place rather than just a pair of coordinates. In this paper, we attempt to predict the POI tag of a tweet based on its textual content and time of posting. Potential applications include accurate positioning when GPS devices fail and disambiguating places located near each other. We consider this task as a ranking problem, i.e., we try to rank a set of candidate POIs according to a tweet by using language and time models. To tackle the sparsity of tweets tagged with POIs, we use web pages retrieved by search engines as an additional source of evidence. From our experiments, we find that users indeed leak some information about their accurate locations in their tweets.",
"title": ""
},
{
"docid": "3cdc2052eb37bdbb1f7d38ec90a095c4",
"text": "We present a simple and effective blind image deblurring method based on the dark channel prior. Our work is inspired by the interesting observation that the dark channel of blurred images is less sparse. While most image patches in the clean image contain some dark pixels, these pixels are not dark when averaged with neighboring highintensity pixels during the blur process. This change in the sparsity of the dark channel is an inherent property of the blur process, which we both prove mathematically and validate using training data. Therefore, enforcing the sparsity of the dark channel helps blind deblurring on various scenarios, including natural, face, text, and low-illumination images. However, sparsity of the dark channel introduces a non-convex non-linear optimization problem. We introduce a linear approximation of the min operator to compute the dark channel. Our look-up-table-based method converges fast in practice and can be directly extended to non-uniform deblurring. Extensive experiments show that our method achieves state-of-the-art results on deblurring natural images and compares favorably methods that are well-engineered for specific scenarios.",
"title": ""
},
{
"docid": "35dc1eed6439bae9c74605e75bf8b3a2",
"text": "We propose a new fast algorithm for solving one of the standard approaches to ill-posed linear inverse problems (IPLIP), where a (possibly nonsmooth) regularizer is minimized under the constraint that the solution explains the observations sufficiently well. Although the regularizer and constraint are usually convex, several particular features of these problems (huge dimensionality, nonsmoothness) preclude the use of off-the-shelf optimization tools and have stimulated a considerable amount of research. In this paper, we propose a new efficient algorithm to handle one class of constrained problems (often known as basis pursuit denoising) tailored to image recovery applications. The proposed algorithm, which belongs to the family of augmented Lagrangian methods, can be used to deal with a variety of imaging IPLIP, including deconvolution and reconstruction from compressive observations (such as MRI), using either total-variation or wavelet-based (or, more generally, frame-based) regularization. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence sufficient conditions are known; we show that these conditions are satisfied by the proposed algorithm. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is a strong contender for the state-of-the-art.",
"title": ""
},
{
"docid": "7d74b896764837904019a0abff967065",
"text": "Asymptotic behavior of a recurrent neural network changes qualitatively at certain points in the parameter space, which are known as \\bifurcation points\". At bifurcation points, the output of a network can change discontinuously with the change of parameters and therefore convergence of gradient descent algorithms is not guaranteed. Furthermore, learning equations used for error gradient estimation can be unstable. However, some kinds of bifurcations are inevitable in training a recurrent network as an automaton or an oscillator. Some of the factors underlying successful training of recurrent networks are investigated, such as choice of initial connections, choice of input patterns, teacher forcing, and truncated learning equations.",
"title": ""
}
] | scidocsrr |
9901f05894b9deb977fd2f8ab00096ad | Analysis of the antecedents of knowledge sharing and its implication for SMEs internationalization | [
{
"docid": "d5464818af641aae509549f586c5526d",
"text": "The learning and knowledge that we have, is, at the most, but little compared with that of which we are ignorant. Plato Knowledge management (KM) is a vital and complex topic of current interest to so many in business, government and the community in general, that there is an urgent need to expand the role of empirical research to inform knowledge management practice. However, one of the most striking aspects of knowledge management is the diversity of the field and the lack of universally accepted definitions of the term itself and its derivatives, knowledge and management. As a consequence of the multidisciplinary nature of KM, the terms inevitably hold a difference in meaning and emphasis for different people. The initial chapter of this book addresses the challenges brought about by these differences. This chapter begins with a critical assessment of some diverse frameworks for knowledge management that have been appearing in the international academic literature of many disciplines for some time. Then follows a description of ways that these have led to some holistic and integrated frameworks currently being developed by KM researchers in Australia.",
"title": ""
},
{
"docid": "5e04372f08336da5b8ab4d41d69d3533",
"text": "Purpose – This research aims at investigating the role of certain factors in organizational culture in the success of knowledge sharing. Such factors as interpersonal trust, communication between staff, information systems, rewards and organization structure play an important role in defining the relationships between staff and in turn, providing possibilities to break obstacles to knowledge sharing. This research is intended to contribute in helping businesses understand the essential role of organizational culture in nourishing knowledge and spreading it in order to become leaders in utilizing their know-how and enjoying prosperity thereafter. Design/methodology/approach – The conclusions of this study are based on interpreting the results of a survey and a number of interviews with staff from various organizations in Bahrain from the public and private sectors. Findings – The research findings indicate that trust, communication, information systems, rewards and organization structure are positively related to knowledge sharing in organizations. Research limitations/implications – The authors believe that further research is required to address governmental sector institutions, where organizational politics dominate a role in hoarding knowledge, through such methods as case studies and observation. Originality/value – Previous research indicated that the Bahraini society is influenced by traditions of household, tribe, and especially religion of the Arab and Islamic world. These factors define people’s beliefs and behaviours, and thus exercise strong influence in the performance of business organizations. This study is motivated by the desire to explore the role of the national organizational culture on knowledge sharing, which may be different from previous studies conducted abroad.",
"title": ""
}
] | [
{
"docid": "72e1c5690f20c47a63ebbb1dd3fc7f2c",
"text": "In edge-cloud computing, a set of edge servers are deployed near the mobile devices such that these devices can offload jobs to the servers with low latency. One fundamental and critical problem in edge-cloud systems is how to dispatch and schedule the jobs so that the job response time (defined as the interval between the release of a job and the arrival of the computation result at its device) is minimized. In this paper, we propose a general model for this problem, where the jobs are generated in arbitrary order and times at the mobile devices and offloaded to servers with both upload and download delays. Our goal is to minimize the total weighted response time over all the jobs. The weight is set based on how latency sensitive the job is. We derive the first online job dispatching and scheduling algorithm in edge-clouds, called OnDisc, which is scalable in the speed augmentation model; that is, OnDisc is (1 + ε)-speed O(1/ε)-competitive for any constant ε ∊ (0,1). Moreover, OnDisc can be easily implemented in distributed systems. Extensive simulations on a real-world data-trace from Google show that OnDisc can reduce the total weighted response time dramatically compared with heuristic algorithms.",
"title": ""
},
{
"docid": "affc663476dc4d5299de5f89f67e5f5a",
"text": "Many machine learning algorithms, such as K Nearest Neighbor (KNN), heavily rely on the distance metric for the input data patterns. Distance Metric learning is to learn a distance metric for the input space of data from a given collection of pair of similar/dissimilar points that preserves the distance relation among the training data. In recent years, many studies have demonstrated, both empirically and theoretically, that a learned metric can significantly improve the performance in classification, clustering and retrieval tasks. This paper surveys the field of distance metric learning from a principle perspective, and includes a broad selection of recent work. In particular, distance metric learning is reviewed under different learning conditions: supervised learning versus unsupervised learning, learning in a global sense versus in a local sense; and the distance matrix based on linear kernel versus nonlinear kernel. In addition, this paper discusses a number of techniques that is central to distance metric learning, including convex programming, positive semi-definite programming, kernel learning, dimension reduction, K Nearest Neighbor, large margin classification, and graph-based approaches.",
"title": ""
},
{
"docid": "20a90ed3aa2b428b19e85aceddadce90",
"text": "Deep learning has been a groundbreaking technology in various fields as well as in communications systems. In spite of the notable advancements of deep neural network (DNN) based technologies in recent years, the high computational complexity has been a major obstacle to apply DNN in practical communications systems which require real-time operation. In this sense, challenges regarding practical implementation must be addressed before the proliferation of DNN-based intelligent communications becomes a reality. To the best of the authors’ knowledge, for the first time, this article presents an efficient learning architecture and design strategies including link level verification through digital circuit implementations using hardware description language (HDL) to mitigate this challenge and to deduce feasibility and potential of DNN for communications systems. In particular, DNN is applied for an encoder and a decoder to enable flexible adaptation with respect to the system environments without needing any domain specific information. Extensive investigations and interdisciplinary design considerations including the DNN-based autoencoder structure, learning framework, and low-complexity digital circuit implementations for real-time operation are taken into account by the authors which ascertains the use of DNN-based communications in practice.",
"title": ""
},
{
"docid": "6e848928859248e0597124cee0560e43",
"text": "The scaling of microchip technologies has enabled large scale systems-on-chip (SoC). Network-on-chip (NoC) research addresses global communication in SoC, involving (i) a move from computation-centric to communication-centric design and (ii) the implementation of scalable communication structures. This survey presents a perspective on existing NoC research. We define the following abstractions: system, network adapter, network, and link to explain and structure the fundamental concepts. First, research relating to the actual network design is reviewed. Then system level design and modeling are discussed. We also evaluate performance analysis techniques. The research shows that NoC constitutes a unification of current trends of intrachip communication rather than an explicit new alternative.",
"title": ""
},
{
"docid": "be43b90cce9638b0af1c3143b6d65221",
"text": "Reasoning on provenance information and property propagation is of significant importance in e-science since it helps scientists manage derived metadata in order to understand the source of an object, reproduce results of processes and facilitate quality control of results and processes. In this paper we introduce a simple, yet powerful reasoning mechanism based on property propagation along the transitive part-of and derivation chains, in order to trace the provenance of an object and to carry useful inferences. We apply our reasoning in semantic repositories using the CIDOC-CRM conceptual schema and its extension CRMdig, which has been develop for representing the digital and empirical provenance of digi-",
"title": ""
},
{
"docid": "ea544ffc7eeee772388541d0d01812a7",
"text": "Despite the fact that MRI has evolved to become the standard method for diagnosis and monitoring of patients with brain tumours, conventional MRI sequences have two key limitations: the inability to show the full extent of the tumour and the inability to differentiate neoplastic tissue from nonspecific, treatment-related changes after surgery, radiotherapy, chemotherapy or immunotherapy. In the past decade, PET involving the use of radiolabelled amino acids has developed into an important diagnostic tool to overcome some of the shortcomings of conventional MRI. The Response Assessment in Neuro-Oncology working group — an international effort to develop new standardized response criteria for clinical trials in brain tumours — has recommended the additional use of amino acid PET imaging for brain tumour management. Concurrently, a number of advanced MRI techniques such as magnetic resonance spectroscopic imaging and perfusion weighted imaging are under clinical evaluation to target the same diagnostic problems. This Review summarizes the clinical role of amino acid PET in relation to advanced MRI techniques for differential diagnosis of brain tumours; delineation of tumour extent for treatment planning and biopsy guidance; post-treatment differentiation between tumour progression or recurrence versus treatment-related changes; and monitoring response to therapy. An outlook for future developments in PET and MRI techniques is also presented.",
"title": ""
},
{
"docid": "3ba65ec924fff2d246197bb2302fb86e",
"text": "Guidelines for evaluating the levels of evidence based on quantitative research are well established. However, the same cannot be said for the evaluation of qualitative research. This article discusses a process members of an evidence-based clinical practice guideline development team with the Association of Women's Health, Obstetric and Neonatal Nurses used to create a scoring system to determine the strength of qualitative research evidence. A brief history of evidence-based clinical practice guideline development is provided, followed by discussion of the development of the Nursing Management of the Second Stage of Labor evidence-based clinical practice guideline. The development of the qualitative scoring system is explicated, and implications for nursing are proposed.",
"title": ""
},
{
"docid": "46ff38a51f766cd5849a537cc0632660",
"text": "BACKGROUND\nLinear IgA bullous dermatosis (LABD) is an acquired autoimmune sub-epidermal vesiculobullous disease characterized by continuous linear IgA deposit on the basement membrane zone, as visualized on direct immunofluorescence microscopy. LABD can affect both adults and children. The disease is very uncommon, with a still unknown incidence in the South American population.\n\n\nMATERIALS AND METHODS\nAll confirmed cases of LABD by histological and immunofluorescence in our hospital were studied.\n\n\nRESULTS\nThe confirmed cases were three females and two males, aged from 8 to 87 years. Precipitant events associated with LABD were drug consumption (non-steroid inflammatory agents in two cases) and ulcerative colitis (one case). Most of our patients were treated with dapsone, resulting in remission.\n\n\nDISCUSSION\nOur series confirms the heterogeneous clinical features of this uncommon disease in concordance with a larger series of patients reported in the literature.",
"title": ""
},
{
"docid": "7970ec4bd6e17d70913d88e07a39f82d",
"text": "This thesis deals with Chinese characters (Hanzi): their key characteristics and how they could be used as a kind of knowledge resource in the (Chinese) NLP. Part 1 deals with basic issues. In Chapter 1, the motivation and the reasons for reconsidering the writing system will be presented, and a short introduction to Chinese and its writing system will be given in Chapter 2. Part 2 provides a critical review of the current, ongoing debate about Chinese characters. Chapter 3 outlines some important linguistic insights from the vantage point of indigenous scriptological and Western linguistic traditions, as well as a new theoretical framework in contemporary studies of Chinese characters. The focus of Chapter 4 concerns the search for appropriate mathematical descriptions with regard to the systematic knowledge information hidden in characters. The subject matter of mathematical formalization of the shape structure of Chinese characters is depicted as well. Part 3 illustrates the representation issues. Chapter 5 addresses the design and construction of the HanziNet, an enriched conceptual network of Chinese characters. Topics that are covered in this chapter include the ideas, architecture, methods and ontology design. In Part 4, a case study based on the above mentioned ideas will be launched. Chapter 6 presents an experiment exploring the character-triggered semantic class of Chinese unknown words. Finally, Chapter 7 summarizes the major findings of this thesis. Next, it depicts some potential avenues in the future, and assesses the theoretical implications of these findings for computational linguistic theory.",
"title": ""
},
{
"docid": "09085fc15308a96cd9441bb0e23e6c1a",
"text": "Convolutional neural networks (CNNs) are able to model local stationary structures in natural images in a multi-scale fashion, when learning all model parameters with supervision. While excellent performance was achieved for image classification when large amounts of labeled visual data are available, their success for unsupervised tasks such as image retrieval has been moderate so far.Our paper focuses on this latter setting and explores several methods for learning patch descriptors without supervision with application to matching and instance-level retrieval. To that effect, we propose a new family of patch representations, based on the recently introduced convolutional kernel networks. We show that our descriptor, named Patch-CKN, performs better than SIFT as well as other convolutional networks learned by artificially introducing supervision and is significantly faster to train. To demonstrate its effectiveness, we perform an extensive evaluation on standard benchmarks for patch and image retrieval where we obtain state-of-the-art results. We also introduce a new dataset called RomePatches, which allows to simultaneously study descriptor performance for patch and image retrieval.",
"title": ""
},
{
"docid": "a017ab9f310f9f36f88bf488ac833f05",
"text": "Wireless data communication technology has eliminated wired connections for data transfer to portable devices. Wireless power technology offers the possibility of eliminating the remaining wired connection: the power cord. For ventricular assist devices (VADs), wireless power technology will eliminate the complications and infections caused by the percutaneous wired power connection. Integrating wireless power technology into VADs will enable VAD implants to become a more viable option for heart failure patients (of which there are 80 000 in the United States each year) than heart transplants. Previous transcutaneous energy transfer systems (TETS) have attempted to wirelessly power VADs ; however, TETS-based technologies are limited in range to a few millimeters, do not tolerate angular misalignment, and suffer from poor efficiency. The free-range resonant electrical delivery (FREE-D) wireless power system aims to use magnetically coupled resonators to efficiently transfer power across a distance to a VAD implanted in the human body, and to provide robustness to geometric changes. Multiple resonator configurations are implemented to improve the range and efficiency of wireless power transmission to both a commercially available axial pump and a VentrAssist centrifugal pump [3]. An adaptive frequency tuning method allows for maximum power transfer efficiency for nearly any angular orientation over a range of separation distances. Additionally, laboratory results show the continuous operation of both pumps using the FREE-D system with a wireless power transfer efficiency upwards of 90%.",
"title": ""
},
{
"docid": "819f5df03cebf534a51eb133cd44cb0d",
"text": "Although DBP (di-n-butyl phthalate) is commonly encountered as an artificially-synthesized plasticizer with potential to impair fertility, we confirm that it can also be biosynthesized as microbial secondary metabolites from naturally occurring filamentous fungi strains cultured either in an artificial medium or natural water. Using the excreted crude enzyme from the fungi for catalyzing a variety of substrates, we found that the fungal generation of DBP was largely through shikimic acid pathway, which was assembled by phthalic acid with butyl alcohol through esterification. The DBP production ability of the fungi was primarily influenced by fungal spore density and incubation temperature. This study indicates an important alternative natural waterborne source of DBP in addition to artificial synthesis, which implied fungal contribution must be highlighted for future source control and risk management of DBP.",
"title": ""
},
{
"docid": "225b834e820b616e0ccfed7259499fd6",
"text": "Introduction: Actinic cheilitis (AC) is a lesion potentially malignant that affects the lips after prolonged exposure to solar ultraviolet (UV) radiation. The present study aimed to assess and describe the proliferative cell activity, using silver-stained nucleolar organizer region (AgNOR) quantification proteins, and to investigate the potential associations between AgNORs and the clinical aspects of AC lesions. Materials and methods: Cases diagnosed with AC were selected and reviewed from Center of Histopathological Diagnosis of the Institute of Biological Sciences, Passo Fundo University, Brazil. Clinical data including clinical presentation of the patients affected with AC were collected. The AgNOR techniques were performed in all recovered cases. The different microscopic areas of interest were printed with magnification of *1000, and in each case, 200 epithelial cell nuclei were randomly selected. The mean quantity in each nucleus for NORs was recorded. One-way analysis of variance was used for statistical analysis. Results: A total of 22 cases of AC were diagnosed. The patients were aged between 46 and 75 years (mean age: 55 years). Most of the patients affected were males presenting asymptomatic white plaque lesions in the lower lip. The mean value quantified for AgNORs was 2.4 ± 0.63, ranging between 1.49 and 3.82. No statistically significant difference was observed associating the quantity of AgNORs with the clinical aspects collected from the patients (p > 0.05). Conclusion: The present study reports the lack of association between the proliferative cell activity and the clinical aspects observed in patients affected by AC through the quantification of AgNORs. Clinical significance: Knowing the potential relation between the clinical aspects of AC and the proliferative cell activity quantified by AgNORs could play a significant role toward the early diagnosis of malignant lesions in the clinical practice. Keywords: Actinic cheilitis, Proliferative cell activity, Silver-stained nucleolar organizer regions.",
"title": ""
},
{
"docid": "be41d072e3897506fad111549e7bf862",
"text": "Handing unbalanced data and noise are two important issues in the field of machine learning. This paper proposed a complete framework of fuzzy relevance vector machine by weighting the punishment terms of error in Bayesian inference process of relevance vector machine (RVM). Above problems can be learned within this framework with different kinds of fuzzy membership functions. Experiments on both synthetic data and real world data demonstrate that fuzzy relevance vector machine (FRVM) is effective in dealing with unbalanced data and reducing the effects of noises or outliers. 2008 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "b851cf64be0684f63e63e7317aaada5c",
"text": "With the increasing popularity of cloud-based data services, data owners are highly motivated to store their huge amount of potentially sensitive personal data files on remote servers in encrypted form. Clients later can query over the encrypted database to retrieve files while protecting privacy of both the queries and the database, by allowing some reasonable leakage information. To this end, the notion of searchable symmetric encryption (SSE) was proposed. Meanwhile, recent literature has shown that most dynamic SSE solutions leaking information on updated keywords are vulnerable to devastating file-injection attacks. The only way to thwart these attacks is to design forward-private schemes. In this paper, we investigate new privacy-preserving indexing and query processing protocols which meet a number of desirable properties, including the multi-keyword query processing with conjunction and disjunction logic queries, practically high privacy guarantees with adaptive chosen keyword attack (CKA2) security and forward privacy, the support of dynamic data operations, and so on. Compared with previous schemes, our solutions are highly compact, practical, and flexible. Their performance and security are carefully characterized by rigorous analysis. Experimental evaluations conducted over a large representative data set demonstrate that our solutions can achieve modest search time efficiency, and they are practical for use in large-scale encrypted database systems.",
"title": ""
},
{
"docid": "124729483d5db255b60690e2facbfe45",
"text": "Human social intelligence depends on a diverse array of perceptual, cognitive, and motivational capacities. Some of these capacities depend on neural systems that may have evolved through modification of ancestral systems with non-social or more limited social functions (evolutionary repurposing). Social intelligence, in turn, enables new forms of repurposing within the lifetime of an individual (cultural and instrumental repurposing), which entail innovating over and exploiting pre-existing circuitry to meet problems our brains did not evolve to solve. Considering these repurposing processes can provide insight into the computations that brain regions contribute to social information processing, generate testable predictions that usefully constrain social neuroscience theory, and reveal biologically imposed constraints on cultural inventions and our ability to respond beneficially to contemporary challenges.",
"title": ""
},
{
"docid": "c5e078cb9835db450be894aee477d00c",
"text": "I would like to jump on the blockchain bandwagon. I would like to be able to say that blockchain is the solution to the longstanding problem of secure identity on the Internet. I would like to say that everyone in the world will soon have a digital identity. Put yourself on the blockchain and never again ask yourself, Who am I? - you are your blockchain address.",
"title": ""
},
{
"docid": "762d6e9a8f0061e3a2f1b1c0eeba2802",
"text": "A new prior is proposed for representation learning, which can be combined with other priors in order to help disentangling abstract factors from each other. It is inspired by the phenomenon of consciousness seen as the formation of a low-dimensional combination of a few concepts constituting a conscious thought, i.e., consciousness as awareness at a particular time instant. This provides a powerful constraint on the representation in that such low-dimensional thought vectors can correspond to statements about reality which are true, highly probable, or very useful for taking decisions. The fact that a few elements of the current state can be combined into such a predictive or useful statement is a strong constraint and deviates considerably from the maximum likelihood approaches to modelling data and how states unfold in the future based on an agent's actions. Instead of making predictions in the sensory (e.g. pixel) space, the consciousness prior allows the agent to make predictions in the abstract space, with only a few dimensions of that space being involved in each of these predictions. The consciousness prior also makes it natural to map conscious states to natural language utterances or to express classical AI knowledge in the form of facts and rules, although the conscious states may be richer than what can be expressed easily in the form of a sentence, a fact or a rule.",
"title": ""
},
{
"docid": "57e2adea74edb5eaf5b2af00ab3c625e",
"text": "Although scholars agree that moral emotions are critical for deterring unethical and antisocial behavior, there is disagreement about how 2 prototypical moral emotions--guilt and shame--should be defined, differentiated, and measured. We addressed these issues by developing a new assessment--the Guilt and Shame Proneness scale (GASP)--that measures individual differences in the propensity to experience guilt and shame across a range of personal transgressions. The GASP contains 2 guilt subscales that assess negative behavior-evaluations and repair action tendencies following private transgressions and 2 shame subscales that assess negative self-evaluations (NSEs) and withdrawal action tendencies following publically exposed transgressions. Both guilt subscales were highly correlated with one another and negatively correlated with unethical decision making. Although both shame subscales were associated with relatively poor psychological functioning (e.g., neuroticism, personal distress, low self-esteem), they were only weakly correlated with one another, and their relationships with unethical decision making diverged. Whereas shame-NSE constrained unethical decision making, shame-withdraw did not. Our findings suggest that differentiating the tendency to make NSEs following publically exposed transgressions from the tendency to hide or withdraw from public view is critically important for understanding and measuring dispositional shame proneness. The GASP's ability to distinguish these 2 classes of responses represents an important advantage of the scale over existing assessments. Although further validation research is required, the present studies are promising in that they suggest the GASP has the potential to be an important measurement tool for detecting individuals susceptible to corruption and unethical behavior.",
"title": ""
},
{
"docid": "1d3b2a5906d7db650db042db9ececed1",
"text": "Music consists of precisely patterned sequences of both movement and sound that engage the mind in a multitude of experiences. We move in response to music and we move in order to make music. Because of the intimate coupling between perception and action, music provides a panoramic window through which we can examine the neural organization of complex behaviors that are at the core of human nature. Although the cognitive neuroscience of music is still in its infancy, a considerable behavioral and neuroimaging literature has amassed that pertains to neural mechanisms that underlie musical experience. Here we review neuroimaging studies of explicit sequence learning and temporal production—findings that ultimately lay the groundwork for understanding how more complex musical sequences are represented and produced by the brain. These studies are also brought into an existing framework concerning the interaction of attention and time-keeping mechanisms in perceiving complex patterns of information that are distributed in time, such as those that occur in music.",
"title": ""
}
] | scidocsrr |
2424f6a833428f89922607a490aa2bef | City-scale landmark identification on mobile devices | [
{
"docid": "a7c330c9be1d7673bfff43b0544db4ea",
"text": "The state of the art in visual object retrieval from large databases is achieved by systems that are inspired by text retrieval. A key component of these approaches is that local regions of images are characterized using high-dimensional descriptors which are then mapped to ldquovisual wordsrdquo selected from a discrete vocabulary.This paper explores techniques to map each visual region to a weighted set of words, allowing the inclusion of features which were lost in the quantization stage of previous systems. The set of visual words is obtained by selecting words based on proximity in descriptor space. We describe how this representation may be incorporated into a standard tf-idf architecture, and how spatial verification is modified in the case of this soft-assignment. We evaluate our method on the standard Oxford Buildings dataset, and introduce a new dataset for evaluation. Our results exceed the current state of the art retrieval performance on these datasets, particularly on queries with poor initial recall where techniques like query expansion suffer. Overall we show that soft-assignment is always beneficial for retrieval with large vocabularies, at a cost of increased storage requirements for the index.",
"title": ""
}
] | [
{
"docid": "304f4cb3872780dd54ebe53d43c37bc6",
"text": "Generating high-quality text with sufficient diversity is essential for a wide range of Natural Language Generation (NLG) tasks. Maximum-Likelihood (MLE) models trained with teacher forcing have consistently been reported as weak baselines, where poor performance is attributed to exposure bias; at inference time, the model is fed its own prediction instead of a ground-truth token, which can lead to accumulating errors and poor samples. This line of reasoning has led to an outbreak of adversarial based approaches for NLG, on the account that GANs do not suffer from exposure bias. In this work, we make several surprising observations which contradict common beliefs. First, we revisit the canonical evaluation framework for NLG, and point out fundamental flaws with quality-only evaluation: we show that one can outperform such metrics using a simple, well-known temperature parameter to artificially reduce the entropy of the model’s conditional distributions. Second, we leverage the control over the quality / diversity tradeoff given by this parameter to evaluate models over the whole quality-diversity spectrum, and find MLE models constantly outperform the proposed GAN variants over the whole quality-diversity space. Our results have several implications: 1) The impact of exposure bias on sample quality is less severe than previously thought, 2) temperature tuning provides a better quality / diversity trade off than adversarial training, while being easier to train, easier to cross-validate, and less computationally expensive. 1 Recent Developments in NLG GANs are an instance of generative models based on a competition between a generator network Gθ and a discriminator network Dφ. The generator network Gθ represents a probability distribution pmodel(x). The discriminator Dφ(x) attempts to distinguish whether an input value x is real (came from the training data) or fake (came from the generator). Mathematically, the GAN objective can be formulated as a minimax game min θ max φ Ex∼pdata [logDφ(x)] + Ex∼Gθ [1− logDφ(x)]. Preprint. Work in progress. ar X iv :1 81 1. 02 54 9v 1 [ cs .C L ] 6 N ov 2 01 8 GANs originally were applied on continuous data like images. This is because the training procedure relied on backpropagation through the discriminator into the generator. Discrete (sequential) data require an alternative approach. [Yu et al., 2017] estimate the gradient to the generator via REINFORCE policy gradients [Williams, 1992]. In their formulation, the discriminator evaluates full sequences. Therefore, to provide error attribution earlier for incomplete sequences and to reduce the variance of gradients they perform k Monte-Carlo rollouts until the sentence is completed. [Yu et al., 2017] advertise their model using two tasks which we argue (with hindsight) are flawed. First, they introduce a synthetic evaluation procedure where the underlying data distribution P is known and can be queried. By representing P with an LSTM (referred to as an oracle in the literature) they directly compute the likelihood of samples drawn from a generative model Gθ. The problem is they benchmark models against each other on this likelihood alone, i.e., the diagnostic is completely blind to diversity. For example, a model that always outputs the same highly likely sequence would easily outperform other potentially superior models. For real data, there was no agreed upon metric to evaluate the quality of unconditional NLG at the time. 
This led the authors to propose a new metric, Corpus-level BLEU, which computes the fraction of n-grams in a sample that appear in a reference corpus. Again, this metric is agnostic to diversity. Generating a single good sentence over and over will gives a perfect BLEU score. 0.3 0.2 0.1 Negative BLEU-5 0.2 0.3 0.4 0.5",
"title": ""
},
{
"docid": "74c48ec7adb966fc3024ed87f6102a1a",
"text": "Quantitative accessibility metrics are widely used in accessibility evaluation, which synthesize a summative value to represent the accessibility level of a website. Many of these metrics are the results of a two-step process. The first step is the inspection with regard to potential barriers while different properties are reported, and the second step aggregates these fine-grained reports with varying weights for checkpoints. Existing studies indicate that finding appropriate weights for different checkpoint types is a challenging issue. Although some metrics derive the checkpoint weights from the WCAG priority levels, previous investigations reveal that the correlation between the WCAG priority levels and the user experience is not significant. Moreover, our website accessibility evaluation results also confirm the mismatches between the ranking of websites using existing metrics and the ranking based on user experience. To overcome this limitation, we propose a novel metric called the Web Accessibility Experience Metric (WAEM) that can better match the accessibility evaluation results with the user experience of people with disabilities by aligning the evaluation metric with the partial user experience order (PUEXO), i.e. pairwise comparisons between different websites. A machine learning model is developed to derive the optimal checkpoint weights from the PUEXO. Experiments on real-world web accessibility evaluation data sets validate the effectiveness of WAEM.",
"title": ""
},
{
"docid": "f6783c1f37bb125fd35f4fbfedfde648",
"text": "This paper presents an attributed graph-based approach to an intricate data mining problem of revealing affiliated, interdependent entities that might be at risk of being tempted into fraudulent transfer pricing. We formalize the notions of controlled transactions and interdependent parties in terms of graph theory. We investigate the use of clustering and rule induction techniques to identify candidate groups (hot spots) of suspect entities. Further, we find entities that require special attention with respect to transfer pricing audits using network analysis and visualization techniques in IBM i2 Analyst's Notebook.",
"title": ""
},
{
"docid": "3d862e488798629d633f78260a569468",
"text": "Training workshops and professional meetings are important tools for capacity building and professional development. These social events provide professionals and educators a platform where they can discuss and exchange constructive ideas, and receive feedback. In particular, competition-based training workshops where participants compete on solving similar and common challenging problems are effective tools for stimulating students’ learning and aspirations. This paper reports the results of a two-day training workshop where memory and disk forensics were taught using a competition-based security educational tool. The workshop included training sessions for professionals, educators, and students to learn features of Tracer FIRE, a competition-based digital forensics and assessment tool, developed by Sandia National Laboratories. The results indicate that competitionbased training can be very effective in stimulating students’ motivation to learn. However, extra caution should be taken into account when delivering these types of training workshops. Keywords-component; cyber security, digital forenciscs, partcipatory training workshop, competition-based learning,",
"title": ""
},
{
"docid": "8e2da8870546277443a6da9e4284c0f3",
"text": "Executive functions include abilities of goal formation, planning, carrying out goal-directed plans, and effective performance. This article aims at reviewing some of the current knowledge surrounding executive functioning and presenting the contrasting views regarding this concept. The neural substrates of the executive system are examined as well as the evolution of executive functioning, from development to decline. There is clear evidence of the vulnerability of executive functions to the effects of age over lifespan. The first executive function to emerge in children is the ability to inhibit overlearned behavior and the last to appear is verbal fluency. Inhibition of irrelevant information seems to decline earlier than set shifting and verbal fluency during senescence. The sequential progression and decline of these functions has been paralleled with the anatomical changes of the frontal lobe and its connections with other brain areas. Generalization of the results presented here are limited due to methodological differences across studies. Analysis of these differences is presented and suggestions for future research are offered.",
"title": ""
},
{
"docid": "c36bfde4e2f1cd3a5d6d8c0bcb8806d8",
"text": "A 20/20 vision in ophthalmology implies a perfect view of things that are in front of you. The term is also used to mean a perfect sight of the things to come. Here we focus on a speculative vision of the VLDB in the year 2020. This panel is the follow-up of the one I organised (with S. Navathe) at the Kyoto VLDB in 1986, with the title: \"Anyone for a VLDB in the Year 2000?\". In that panel, the members discussed the major advances made in the database area and conjectured on its future, following a concern of many researchers that the database area was running out of interesting research topics and therefore it might disappear into other research topics, such as software engineering, operating systems and distributed systems. That did not happen.",
"title": ""
},
{
"docid": "c551575e68a8061461dc6c78b76a0386",
"text": "Recently, scene text detection has become an active research topic in computer vision and document analysis, because of its great importance and significant challenge. However, vast majority of the existing methods detect text within local regions, typically through extracting character, word or line level candidates followed by candidate aggregation and false positive elimination, which potentially exclude the effect of wide-scope and long-range contextual cues in the scene. To take full advantage of the rich information available in the whole natural image, we propose to localize text in a holistic manner, by casting scene text detection as a semantic segmentation problem. The proposed algorithm directly runs on full images and produces global, pixel-wise prediction maps, in which detections are subsequently formed. To better make use of the properties of text, three types of information regarding text region, individual characters and their relationship are estimated, with a single Fully Convolutional Network (FCN) model. With such predictions of text properties, the proposed algorithm can simultaneously handle horizontal, multi-oriented and curved text in real-world natural images. The experiments on standard benchmarks, including ICDAR 2013, ICDAR 2015 and MSRA-TD500, demonstrate that the proposed algorithm substantially outperforms previous state-ofthe-art approaches. Moreover, we report the first baseline result on the recently-released, large-scale dataset COCO-Text. Keywords—Scene text detection, fully convolutional network, holistic prediction, natural images.",
"title": ""
},
{
"docid": "a05eb1631da751562fd25913b578032a",
"text": "In this paper, we examine the intergenerational gaming practices of four generations of console gamers, from ages 3 to 83 and, in particular, the roles that gamers of different generations take on when playing together in groups. Our data highlight the extent to which existing gaming technologies support interactions within collocated intergenerational groups, and our analysis reveals a more generationally flexible suite of roles in these computer-mediated interactions than have been documented by previous studies of more traditional collocated, intergenerational interactions. Finally, we offer implications for game designers who wish to make console games more accessible to intergenerational groups.",
"title": ""
},
{
"docid": "ab0f8feac4000464d406369bea87955a",
"text": "Modern operating system kernels employ address space layout randomization (ASLR) to prevent control-flow hijacking attacks and code-injection attacks. While kernel security relies fundamentally on preventing access to address information, recent attacks have shown that the hardware directly leaks this information. Strictly splitting kernel space and user space has recently been proposed as a theoretical concept to close these side channels. However, this is not trivially possible due to architectural restrictions of the x86 platform. In this paper we present KAISER, a system that overcomes limitations of x86 and provides practical kernel address isolation. We implemented our proof-of-concept on top of the Linux kernel, closing all hardware side channels on kernel address information. KAISER enforces a strict kernel and user space isolation such that the hardware does not hold any information about kernel addresses while running in user mode. We show that KAISER protects against double page fault attacks, prefetch side-channel attacks, and TSX-based side-channel attacks. Finally, we demonstrate that KAISER has a runtime overhead of only 0.28%.",
"title": ""
},
{
"docid": "f043acf163d787c4a53924515b509aba",
"text": "A two-wheeled self-balancing robot is a special type of wheeled mobile robot, its balance problem is a hot research topic due to its unstable state for controlling. In this paper, human transporter model has been established. Kinematic and dynamic models are constructed and two control methods: Proportional-integral-derivative (PID) and Linear-quadratic regulator (LQR) are implemented to test the system model in which controls of two subsystems: self-balance (preventing system from falling down when it moves forward or backward) and yaw rotation (steering angle regulation when it turns left or right) are considered. PID is used to control both two subsystems, LQR is used to control self-balancing subsystem only. By using simulation in Matlab, two methods are compared and discussed. The theoretical investigations for controlling the dynamic behavior are meaningful for design and fabrication. Finally, the result shows that LQR has a better performance than PID for self-balancing subsystem control.",
"title": ""
},
{
"docid": "56d3545ec63503b743a7a80db012d7e5",
"text": "Concrete objects used to illustrate mathematical ideas are commonly known as manipulatives. Manipulatives are ubiquitous in North American elementary classrooms in the early years, and although they can be beneficial, they do not guarantee learning. In the present study, the authors examined two factors hypothesized to impact second-graders’ learning of place value and regrouping with manipulatives: (a) the sequencing of concrete (base-ten blocks) and abstract (written symbols) representations of the standard addition algorithm; and (b) the level of instructional guidance on the structural relations between the representations. Results from a classroom experiment with second-grade students (N = 87) indicated that place value knowledge increased from pre-test to post-test when the base-ten blocks were presented before the symbols, but only when no instructional guidance was offered. When guidance was given, only students in the symbols-first condition improved their place value knowledge. Students who received instruction increased their understanding of regrouping, irrespective of representational sequence. No effects were found for iterative sequencing of concrete and abstract representations. Practical implications for teaching mathematics with manipulatives are considered.",
"title": ""
},
{
"docid": "3e5e7e38068da120639c3fcc80227bf8",
"text": "The ferric reducing antioxidant power (FRAP) assay was recently adapted to a microplate format. However, microplate-based FRAP (mFRAP) assays are affected by sample volume and composition. This work describes a calibration process for mFRAP assays which yields data free of volume effects. From the results, the molar absorptivity (ε) for the mFRAP assay was 141,698 M(-1) cm(-1) for gallic acid, 49,328 M(-1) cm(-1) for ascorbic acid, and 21,606 M(-1) cm(-1) for ammonium ferrous sulphate. The significance of ε (M(-1) cm(-1)) is discussed in relation to mFRAP assay sensitivity, minimum detectable concentration, and the dimensionless FRAP-value. Gallic acid showed 6.6 mol of Fe(2+) equivalents compared to 2.3 mol of Fe(+2) equivalents for ascorbic acid. Application of the mFRAP assay to Manuka honey samples (rated 5+, 10+, 15+, and 18+ Unique Manuka Factor; UMF) showed that FRAP values (0.54-0.76 mmol Fe(2+) per 100g honey) were strongly correlated with UMF ratings (R(2)=0.977) and total phenols content (R(2) = 0.982)whilst the UMF rating was correlated with the total phenols (R(2) = 0.999). In conclusion, mFRAP assay results were successfully standardised to yield data corresponding to 1-cm spectrophotometer which is useful for quality assurance purposes. The antioxidant capacity of Manuka honey was found to be directly related to the UMF rating.",
"title": ""
},
{
"docid": "3d01cd221fc0cfadf93d1b7295a22dad",
"text": "The multiplication of a sparse matrix by a dense vector (SpMV) is a centerpiece of scientific computing applications: it is the essential kernel for the solution of sparse linear systems and sparse eigenvalue problems by iterative methods. The efficient implementation of the sparse matrix-vector multiplication is therefore crucial and has been the subject of an immense amount of research, with interest renewed with every major new trend in high-performance computing architectures. The introduction of General-Purpose Graphics Processing Units (GPGPUs) is no exception, and many articles have been devoted to this problem.\n With this article, we provide a review of the techniques for implementing the SpMV kernel on GPGPUs that have appeared in the literature of the last few years. We discuss the issues and tradeoffs that have been encountered by the various researchers, and a list of solutions, organized in categories according to common features. We also provide a performance comparison across different GPGPU models and on a set of test matrices coming from various application domains.",
"title": ""
},
{
"docid": "eb4cac4ac288bc65df70f906b674ceb5",
"text": "LPWAN (Low Power Wide Area Networks) technologies have been attracting attention continuously in IoT (Internet of Things). LoRaWAN is present on the market as a LPWAN technology and it has features such as low power consumption, low transceiver chip cost and wide coverage area. In the LoRaWAN, end devices must perform a join procedure for participating in the network. Attackers could exploit the join procedure because it has vulnerability in terms of security. Replay attack is a method of exploiting the vulnerability in the join procedure. In this paper, we propose a attack scenario and a countermeasure against replay attack that may occur in the join request transfer process.",
"title": ""
},
{
"docid": "5d2c1095a34ee582f490f4b0392a3da0",
"text": "We study the problem of online learning to re-rank, where users provide feedback to improve the quality of displayed lists. Learning to rank has been traditionally studied in two settings. In the offline setting, rankers are typically learned from relevance labels of judges. These approaches have become the industry standard. However, they lack exploration, and thus are limited by the information content of offline data. In the online setting, an algorithm can propose a list and learn from the feedback on it in a sequential fashion. Bandit algorithms developed for this setting actively experiment, and in this way overcome the biases of offline data. But they also tend to ignore offline data, which results in a high initial cost of exploration. We propose BubbleRank, a bandit algorithm for re-ranking that combines the strengths of both settings. The algorithm starts with an initial base list and improves it gradually by swapping higher-ranked less attractive items for lower-ranked more attractive items. We prove an upper bound on the n-step regret of BubbleRank that degrades gracefully with the quality of the initial base list. Our theoretical findings are supported by extensive numerical experiments on a large real-world click dataset.",
"title": ""
},
{
"docid": "2aabe5c6f1ccb8dfd241f0c208609738",
"text": "Exposing the weaknesses of neural models is crucial for improving their performance and robustness in real-world applications. One common approach is to examine how input perturbations affect the output. Our analysis takes this to an extreme on natural language processing tasks by removing as many words as possible from the input without changing the model prediction. For question answering and natural language inference, this often reduces the inputs to just one or two words, while model confidence remains largely unchanged. This is an undesireable behavior: the model gets the Right Answer for the Wrong Reason (RAWR). We introduce a simple training technique that mitigates this problem while maintaining performance on regular examples.",
"title": ""
},
{
"docid": "2bd5ca4cbb8ef7eea1f7b2762918d18b",
"text": "Deep convolutional neural networks continue to advance the state-of-the-art in many domains as they grow bigger and more complex. It has been observed that many of the parameters of a large network are redundant, allowing for the possibility of learning a smaller network that mimics the outputs of the large network through a process called Knowledge Distillation. We show, however, that standard Knowledge Distillation is not effective for learning small models for the task of pedestrian detection. To improve this process, we introduce a higher-dimensional hint layer to increase information flow. We also estimate the uncertainty in the outputs of the large network and propose a loss function to incorporate this uncertainty. Finally, we attempt to boost the complexity of the small network without increasing its size by using as input hand-designed features that have been demonstrated to be effective for pedestrian detection. For only a 2.8% increase in miss rate, we have succeeded in training a student network that is 8 times faster and 21 times smaller than the teacher network.",
"title": ""
},
{
"docid": "c70f8bd719642ed818efc5387ffb6b55",
"text": "In this work, we propose a novel framework for privacy-preserving client-distributed machine learning. It is motivated by the desire to achieve differential privacy guarantees in the local model of privacy in a way that satisfies all systems constraints using asynchronous client-server communication and provides attractive model learning properties. We call it “Draw and Discard” because it relies on random sampling of models for load distribution (scalability), which also provides additional server-side privacy protections and improved model quality through averaging. We present the mechanics of client and server components of “Draw and Discard” and demonstrate how the framework can be applied to learning Generalized Linear models. We then analyze the privacy guarantees provided by our approach against several types of adversaries and showcase experimental results that provide evidence for the framework’s viability in practical deployments. We believe our framework is the first deployed distributed machine learning approach that operates in the local privacy model.",
"title": ""
},
{
"docid": "1fa6e8947e8bac6d0c185b2462eebb51",
"text": "In this study, a compact design of a highly efficient, and a high luminosity light-emitting diode (LED)-based visible light communications system is presented, which is capable of providing standard room illumination levels and also widecoverage Ethernet 10BASE-T optical wireless downlink communications over a distance of 2.3 m using commercial white light phosphor LEDs. The measured signal-to-noise ratio of the designed Ethernet system is >45 dB, thus allowing error-free communications with both on–off keying non-return zero and differential Manchester-coded modulation schemes at 10 Mbps. The uplink has been provided via a wireless infra-red link. A comparative study of a point-to-point wired local area network (LAN) and the optical wireless link confirms no discernible differences between them. The design of the transmitter is also shown to be scalable, with the frequency response for driving 25 LEDs being almost the same as driving a single LED. LED driving units are designed to match with the Ethernet sockets (RJ45) that conform to the existing LAN infrastructures (building and portable devices).",
"title": ""
},
{
"docid": "98e78d8fb047140a73f2a43cbe4a1c74",
"text": "Genomics can transform health-care through precision medicine. Plummeting sequencing costs would soon make genome testing affordable to the masses. Compute efficiency, however, has to improve by orders of magnitude to sequence and analyze the raw genome data. Sequencing software used today can take several hundreds to thousands of CPU hours to align reads to a reference sequence. This paper presents GenAx, an accelerator for read alignment, a time-consuming step in genome sequencing. It consists of a seeding and seed-extension accelerator. The latter is based on an innovative automata design that was designed from the ground-up to enable hardware acceleration. Unlike conventional Levenshtein automata, it is string independent and scales quadratically with edit distance, instead of string length. It supports critical features commonly used in sequencing such as affine gap scoring and traceback. GenAx provides a throughput of 4,058K reads/s for Illumina 101 bp reads. GenAx achieves 31.7× speedup over the standard BWA-MEM sequence aligner running on a 56-thread dualsocket 14-core Xeon E5 server processor, while reducing power consumption by 12× and area by 5.6×.",
"title": ""
}
] | scidocsrr |
f7da70def48ed87aa37b7e169aa4f458 | A Practitioners' Guide to Transfer Learning for Text Classification using Convolutional Neural Networks | [
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
},
{
"docid": "091279f6b95594f9418591264d0d7e3c",
"text": "A great deal of research has focused on algorithms for learning features from unlabeled data. Indeed, much progress has been made on benchmark datasets like NORB and CIFAR by employing increasingly complex unsupervised learning algorithms and deep models. In this paper, however, we show that several simple factors, such as the number of hidden nodes in the model, may be more important to achieving high performance than the learning algorithm or the depth of the model. Specifically, we will apply several offthe-shelf feature learning algorithms (sparse auto-encoders, sparse RBMs, K-means clustering, and Gaussian mixtures) to CIFAR, NORB, and STL datasets using only singlelayer networks. We then present a detailed analysis of the effect of changes in the model setup: the receptive field size, number of hidden nodes (features), the step-size (“stride”) between extracted features, and the effect of whitening. Our results show that large numbers of hidden nodes and dense feature extraction are critical to achieving high performance—so critical, in fact, that when these parameters are pushed to their limits, we achieve state-of-the-art performance on both CIFAR-10 and NORB using only a single layer of features. More surprisingly, our best performance is based on K-means clustering, which is extremely fast, has no hyperparameters to tune beyond the model structure itself, and is very easy to implement. Despite the simplicity of our system, we achieve accuracy beyond all previously published results on the CIFAR-10 and NORB datasets (79.6% and 97.2% respectively). Appearing in Proceedings of the 14 International Conference on Artificial Intelligence and Statistics (AISTATS) 2011, Fort Lauderdale, FL, USA. Volume 15 of JMLR: W&CP 15. Copyright 2011 by the authors.",
"title": ""
},
{
"docid": "a9d0b367d4507bbcee55f4f25071f12e",
"text": "The goal of sentence and document modeling is to accurately represent the meaning of sentences and documents for various Natural Language Processing tasks. In this work, we present Dependency Sensitive Convolutional Neural Networks (DSCNN) as a generalpurpose classification system for both sentences and documents. DSCNN hierarchically builds textual representations by processing pretrained word embeddings via Long ShortTerm Memory networks and subsequently extracting features with convolution operators. Compared with existing recursive neural models with tree structures, DSCNN does not rely on parsers and expensive phrase labeling, and thus is not restricted to sentencelevel tasks. Moreover, unlike other CNNbased models that analyze sentences locally by sliding windows, our system captures both the dependency information within each sentence and relationships across sentences in the same document. Experiment results demonstrate that our approach is achieving state-ofthe-art performance on several tasks, including sentiment analysis, question type classification, and subjectivity classification.",
"title": ""
}
] | [
{
"docid": "b5c7b9f1f57d3d79d3fc8a97eef16331",
"text": "This paper presents an end-to-end convolutional neural network (CNN) for 2D-3D exemplar detection. We demonstrate that the ability to adapt the features of natural images to better align with those of CAD rendered views is critical to the success of our technique. We show that the adaptation can be learned by compositing rendered views of textured object models on natural images. Our approach can be naturally incorporated into a CNN detection pipeline and extends the accuracy and speed benefits from recent advances in deep learning to 2D-3D exemplar detection. We applied our method to two tasks: instance detection, where we evaluated on the IKEA dataset [36], and object category detection, where we out-perform Aubry et al. [3] for \"chair\" detection on a subset of the Pascal VOC dataset.",
"title": ""
},
{
"docid": "22992fe4908ebcf8ae9f22f3ea2d5a27",
"text": "This paper contains a comparison of common, simple thresholding methods. Basic thresholding, two-band thresholding, optimal thresholding (Calvard Riddler), adaptive thresholding, and p-tile thresholding is compared. The different thresholding methods has been implemented in the programming language c, using the image analysis library Xite. The program sources should accompany this paper. 1 Methods of thresholding Basic thresholding. Basic thresholding is done by visiting each pixel site in the image, and set the pixel to maximum value if its value is above or equal to a given threshold value and to the minimum value if the threshold value is below the pixels value. Basic thresholding is often used as a step in other thresholding algorithms. Implemented by the function threshold in thresholding.h Band thresholding. Band thresholding is similar to basic thresholding, but has two threshold values, and set the pixel site to maximum value if the pixels intensity value is between or at the threshold values, else it it set to minimum. Implemented by the function bandthresholding2 in thresholding.h P-tile thresholding. P-tile is a method for choosing the threshold value to input to the “basic thresholding” algorithm. P-tile means “Percentile”, and the threshold is chosen to be the intensity value where the cumulative sum of pixel intensities is closest to the percentile. Implemented by the function ptileThreshold in thresholding.h Optimal thresholding. Optimal thresholding selects a threshold value that is statistically optimal, based on the contents of the image. Algorithm, due to Calvard and Riddler: http://www.ifi.uio.no/forskning/grupper/dsb/Programvare/Xite/",
"title": ""
},
{
"docid": "1b7d2588cfa229aa3b2501a576be8cf2",
"text": "Hedonia (seeking pleasure and comfort) and eudaimonia (seeking to use and develop the best in oneself) are often seen as opposing pursuits, yet each may contribute to well-being in different ways. We conducted four studies (two correlational, one experience-sampling, and one intervention study) to determine outcomes associated with activities motivated by hedonic and eudaimonic aims. Overall, results indicated that: between persons (at the trait level) and within persons (at the momentary state level), hedonic pursuits related more to positive affect and carefreeness, while eudaimonic pursuits related more to meaning; between persons, eudaimonia related more to elevating experience (awe, inspiration, and sense of connection with a greater whole); within persons, hedonia related more negatively to negative affect; between and within persons, both pursuits related equally to vitality; and both pursuits showed some links with life satisfaction, though hedonia’s links were more frequent. People whose lives were high in both eudaimonia and hedonia had: higher degrees of most well-being variables than people whose lives were low in both pursuits (but did not differ in negative affect or carefreeness); higher positive affect and carefreeness than predominantly eudaimonic individuals; and higher meaning, elevating experience, and vitality than predominantly hedonic individuals. In the intervention study, hedonia produced more well-being benefits at short-term follow-up, while eudaimonia produced more at 3-month follow-up. The findings show that hedonia and eudaimonia occupy both overlapping and distinct niches within a complete picture of wellbeing, and their combination may be associated with the greatest well-being.",
"title": ""
},
{
"docid": "e1239202ebf9b2576344116e72e63a1a",
"text": "urgent need to promote Chinese in this paper we will raise the significance of keyword extraction using a new PAT-treebased approach, which is efficient in automatic keyword extraction from a set of relevant Chinese documents. This approach has been successfully applied in several IR researches, such as document classification, book indexing and relevance feedback. Many Chinese language processing applications therefore step ahead from character level to word/phrase level,",
"title": ""
},
{
"docid": "2366ab0736d4d88cd61a578b9287f9f5",
"text": "Scientific curiosity and fascination have played a key role in human research with psychedelics along with the hope that perceptual alterations and heightened insight could benefit well-being and play a role in the treatment of various neuropsychiatric disorders. These motivations need to be tempered by a realistic assessment of the hurdles to be cleared for therapeutic use. Development of a psychedelic drug for treatment of a serious psychiatric disorder presents substantial although not insurmountable challenges. While the varied psychedelic agents described in this chapter share some properties, they have a range of pharmacologic effects that are reflected in the gradation in intensity of hallucinogenic effects from the classical agents to DMT, MDMA, ketamine, dextromethorphan and new drugs with activity in the serotonergic system. The common link seems to be serotonergic effects modulated by NMDA and other neurotransmitter effects. The range of hallucinogens suggest that they are distinct pharmacologic agents and will not be equally safe or effective in therapeutic targets. Newly synthesized specific and selective agents modeled on the legacy agents may be worth considering. Defining therapeutic targets that represent unmet medical need, addressing market and commercial issues, and finding treatment settings to safely test and use such drugs make the human testing of psychedelics not only interesting but also very challenging. This article is part of the Special Issue entitled 'Psychedelics: New Doors, Altered Perceptions'.",
"title": ""
},
{
"docid": "7a945183a38a751052f5bfc80d3d3ff6",
"text": "It is time to reconsider unifying logic and memory. Since most of the transistors on this merged chip will be devoted to memory, it is called 'intelligent RAM'. IRAM is attractive because the gigabit DRAM chip has enough transistors for both a powerful processor and a memory big enough to contain whole programs and data sets. It contains 1024 memory blocks each 1kb wide. It needs more metal layers to accelerate the long lines of 600mm/sup 2/ chips. It may require faster transistors for the high-speed interface of synchronous DRAM. Potential advantages of IRAM include lower memory latency, higher memory bandwidth, lower system power, adjustable memory width and size, and less board space. Challenges for IRAM include high chip yield given processors have not been repairable via redundancy, high memory retention rates given processors usually need higher power than DRAMs, and a fast processor given logic is slower in a DRAM process.",
"title": ""
},
{
"docid": "e2a1ff393ad57ebaa9f3631e7910bab6",
"text": "We apply principles and techniques of recommendation systems to develop a predictive model of customers’ restaurant ratings. Using Yelp’s dataset, we extract collaborative and content based features to identify customer and restaurant profiles. In particular, we implement singular value decomposition, hybrid cascade of K-nearest neighbor clustering, weighted bi-partite graph projection, and several other learning algorithms. Using Root metrics Mean Squared Error and Mean Absolute Error, we then evaluate and compare the algorithms’ performances.",
"title": ""
},
{
"docid": "905027f065ca2efac792e4ec37e8e07b",
"text": "This case, written on the basis of published sources, concerns the decision facing management of Starbucks Canada about how to implement mobile payments. While Starbucks has currently been using a mobile app to accept payments through their proprietary Starbucks card, rival Tim Hortons has recently introduced a more advanced mobile payments solution and the company now has to consider its next moves. The case reviews various aspects of mobile payments technology and platforms that must be understood to make a decision about the best direction for Starbucks Canada.",
"title": ""
},
{
"docid": "046f15ecf1037477b10bfb4fa315c9c9",
"text": "With the rapid proliferation of camera-equipped smart devices (e.g., smartphones, pads, tablets), visible light communication (VLC) over screen-camera links emerges as a novel form of near-field communication. Such communication via smart devices is highly competitive for its user-friendliness, security, and infrastructure-less (i.e., no dependency on WiFi or cellular infrastructure). However, existing approaches mostly focus on improving the transmission speed and ignore the transmission reliability. Considering the interplay between the transmission speed and reliability towards effective end-to-end communication, in this paper, we aim to boost the throughput over screen-camera links by enhancing the transmission reliability. To this end, we propose RDCode, a robust dynamic barcode which enables a novel packet-frame-block structure. Based on the layered structure, we design different error correction schemes at three levels: intra-blocks, inter-blocks and inter-frames, in order to verify and recover the lost blocks and frames. Finally, we implement RDCode and experimentally show that RDCode reaches a high level of transmission reliability (e.g., reducing the error rate to 10%) and yields a at least doubled transmission rate, compared with the existing state-of-the-art approach COBRA.",
"title": ""
},
{
"docid": "30fb0e394f6c4bf079642cd492229b67",
"text": "Although modern communications services are susceptible to third-party eavesdropping via a wide range of possible techniques, law enforcement agencies in the US and other countries generally use one of two technologies when they conduct legally-authorized interception of telephones and other communications traffic. The most common of these, designed to comply with the 1994 Communications Assistance for Law Enforcement Act(CALEA), use a standard interface provided in network switches.\n This paper analyzes the security properties of these interfaces. We demonstrate that the standard CALEA interfaces are vulnerable to a range of unilateral attacks by the intercept target. In particular, because of poor design choices in the interception architecture and protocols, our experiments show it is practical for a CALEA-tapped target to overwhelm the link to law enforcement with spurious signaling messages without degrading her own traffic, effectively preventing call records as well as content from being monitored or recorded. We also identify stop-gap mitigation strategies that partially mitigate some of our identified attacks.",
"title": ""
},
{
"docid": "8b4fbc7fd8f41200731562a92a0c80ce",
"text": "The problem of recognizing mathematical expressions differs significantly from the recognition of standard prose. While in prose significant constraints can be put on the interpretation of a character by the characters immediately preceding and following it, few such simple constraints are present in a mathematical expression. In order to make the problem tractable, effective methods of recognizing mathematical expressions will need to put intelligent constraints on the possible interpretations. The authors present preliminary results on a system for the recognition of both handwritten and typeset mathematical expressions. While previous systems perform character recognition out of context, the current system maintains ambiguity of the characters until context can be used to disambiguate the interpretatiom In addition, the system limits the number of potentially valid interpretations by decomposing the expressions into a sequence of compatible convex regions. The system uses A-star to search for the best possible interpretation of an expression. We provide a new lower bound estimate on the cost to goal that improves performance significantly.",
"title": ""
},
{
"docid": "eb96cd38e634ddb298063dbc26163f52",
"text": "A good representation for arbitrarily complicated data should have the capability of semantic generation, clustering and reconstruction. Previous research has already achieved impressive performance on either one. This paper aims at learning a disentangled representation effective for all of them in an unsupervised way. To achieve all the three tasks together, we learn the forward and inverse mapping between data and representation on the basis of a symmetric adversarial process. In theory, we minimize the upper bound of the two conditional entropy loss between the latent variables and the observations together to achieve the cycle consistency. The newly proposed RepGAN is tested on MNIST, fashionMNIST, CelebA, and SVHN datasets to perform unsupervised or semi-supervised classification, generation and reconstruction tasks. The result demonstrates that RepGAN is able to learn a useful and competitive representation. To the author’s knowledge, our work is the first one to achieve both a high unsupervised classification accuracy and low reconstruction error on MNIST.",
"title": ""
},
{
"docid": "c3df0da617368c2472c76a6c95366338",
"text": "The infinitary propositional logic of here-and-there is important for the theory of answer set programming in view of its relation to strongly equivalent transformations of logic programs. We know a formal system axiomatizing this logic exists, but a proof in that system may include infinitely many formulas. In this note we describe a relationship between the validity of infinitary formulas in the logic of here-and-there and the provability of formulas in some finite deductive systems. This relationship allows us to use finite proofs to justify the validity of infinitary formulas.",
"title": ""
},
{
"docid": "f77107a84778699e088b94c1a75bfd78",
"text": "Nathaniel Kleitman was the first to observe that sleep deprivation in humans did not eliminate the ability to perform neurobehavioral functions, but it did make it difficult to maintain stable performance for more than a few minutes. To investigate variability in performance as a function of sleep deprivation, n = 13 subjects were tested every 2 hours on a 10-minute, sustained-attention, psychomotor vigilance task (PVT) throughout 88 hours of total sleep deprivation (TSD condition), and compared to a control group of n = 15 subjects who were permitted a 2-hour nap every 12 hours (NAP condition) throughout the 88-hour period. PVT reaction time means and standard deviations increased markedly among subjects and within each individual subject in the TSD condition relative to the NAP condition. TSD subjects also had increasingly greater performance variability as a function of time on task after 18 hours of wakefulness. During sleep deprivation, variability in PVT performance reflected a combination of normal timely responses, errors of omission (i.e., lapses), and errors of commission (i.e., responding when no stimulus was present). Errors of omission and errors of commission were highly intercorrelated across deprivation in the TSD condition (r = 0.85, p = 0.0001), suggesting that performance instability is more likely to include compensatory effort than a lack of motivation. The marked increases in PVT performance variability as sleep loss continued supports the \"state instability\" hypothesis, which posits that performance during sleep deprivation is increasingly variable due to the influence of sleep initiating mechanisms on the endogenous capacity to maintain attention and alertness, thereby creating an unstable state that fluctuates within seconds and that cannot be characterized as either fully awake or asleep.",
"title": ""
},
{
"docid": "ba0481ae973970f96f7bf7b1a5461f16",
"text": "WEP is a protocol for securing wireless networks. In the past years, many attacks on WEP have been published, totally breaking WEP’s security. This thesis summarizes all major attacks on WEP. Additionally a new attack, the PTW attack, is introduced, which was partially developed by the author of this document. Some advanced versions of the PTW attack which are more suiteable in certain environments are described as well. Currently, the PTW attack is fastest publicly known key recovery attack against WEP protected networks.",
"title": ""
},
{
"docid": "e79db51ac85ceafba66dddd5c038fbdf",
"text": "Machine learning based anti-phishing techniques are based on various features extracted from different sources. These features differentiate a phishing website from a legitimate one. Features are taken from various sources like URL, page content, search engine, digital certificate, website traffic, etc, of a website to detect it as a phishing or non-phishing. The websites are declared as phishing sites if the heuristic design of the websites matches with the predefined rules. The accuracy of the anti-phishing solution depends on features set, training data and machine learning algorithm. This paper presents a comprehensive analysis of Phishing attacks, their exploitation, some of the recent machine learning based approaches for phishing detection and their comparative study. It provides a better understanding of the phishing problem, current solution space in machine learning domain, and scope of future research to deal with Phishing attacks efficiently using machine learning based approaches.",
"title": ""
},
{
"docid": "ff8fd8bebb7e86b8d636ae528901b57f",
"text": "The ICH quality vision introduced the concept of quality by design (QbD), which requires a greater understanding of the raw material attributes, of process parameters, of their variability and their interactions. Microcrystalline cellulose (MCC) is one of the most important tableting excipients thanks to its outstanding dry binding properties, enabling the manufacture of tablets by direct compression (DC). DC remains the most economical technique to produce large batches of tablets, however its efficacy is directly impacted by the raw material attributes. Therefore excipients' variability and their impact on drug product performance need to be thoroughly understood. To help with this process, this review article gathers prior knowledge on MCC, focuses on its use in DC and lists some of its potential critical material attributes (CMAs).",
"title": ""
},
{
"docid": "22bb6af742b845dea702453b6b14ef3a",
"text": "Errors are prevalent in data sequences, such as GPS trajectories or sensor readings. Existing methods on cleaning sequential data employ a constraint on value changing speeds and perform constraint-based repairing. While such speed constraints are effective in identifying large spike errors, the small errors that do not significantly deviate from the truth and indeed satisfy the speed constraints can hardly be identified and repaired. To handle such small errors, in this paper, we propose a statistical based cleaning method. Rather than declaring a broad constraint of max/min speeds, we model the probability distribution of speed changes. The repairing problem is thus to maximize the likelihood of the sequence w.r.t. the probability of speed changes. We formalize the likelihood-based cleaning problem, show its NP-hardness, devise exact algorithms, and propose several approximate/heuristic methods to trade off effectiveness for efficiency. Experiments on real data sets (in various applications) demonstrate the superiority of our proposal.",
"title": ""
},
{
"docid": "a87e49bd4a49f35099171b89d278c4d9",
"text": "Due to its versatility, copositive optimization receives increasing interest in the Operational Research community, and is a rapidly expanding and fertile field of research. It is a special case of conic optimization, which consists of minimizing a linear function over a cone subject to linear constraints. The diversity of copositive formulations in different domains of optimization is impressive, since problem classes both in the continuous and discrete world, as well as both deterministic and stochastic models are covered. Copositivity appears in local and global optimality conditions for quadratic optimization, but can also yield tighter bounds for NP-hard combinatorial optimization problems. Here some of the recent success stories are told, along with principles, algorithms and applications.",
"title": ""
},
{
"docid": "878d0072a8881fe010f403a30f758725",
"text": "This paper reviews the current status of Learning Analytics with special focus on their application in Serious Games. After presenting the advantages of incorporating Learning Analytics into game-based learning applications, different aspects regarding the integration process including modeling, tracing, aggregation, visualisation, analysis and employment of gameplay data are discussed. Associated challenges in this field as well as examples of best practices are also examined.",
"title": ""
}
] | scidocsrr |
0d8518b00f13804a3447f760d6226cf5 | Net generation students: agency and choice and the new technologies | [
{
"docid": "9d6a0b31bf2b64f1ec624222a2222e2a",
"text": "This is the translation of a paper by Marc Prensky, the originator of the famous metaphor digital natives digital immigrants. Here, ten years after the birth of that successful metaphor, Prensky outlines that, while the distinction between digital natives and immigrants will progressively become less important, new concepts will be needed to represent the continuous evolution of the relationship between man and digital technologies. In this paper Prensky introduces the concept of digital wisdom, a human quality which develops as a result of the empowerment that the natural human skills can receive through a creative and clever use of digital technologies. KEY-WORDS Digital natives, digital immigrants, digital wisdom, digital empowerment. Prensky M. (2010). H. Sapiens Digitale: dagli Immigrati digitali e nativi digitali alla saggezza digitale. TD-Tecnologie Didattiche, 50, pp. 17-24 17 I problemi del mondo d’oggi non possono essere risolti facendo ricorso allo stesso tipo di pensiero che li ha creati",
"title": ""
}
] | [
{
"docid": "d02aa6e16a8d9d4fd0592b9c4c7fbad5",
"text": "This paper proposes a novel neural network (NN) training method that employs the hybrid exponential smoothing method and the Levenberg-Marquardt (LM) algorithm, which aims to improve the generalization capabilities of previously used methods for training NNs for short-term traffic flow forecasting. The approach uses exponential smoothing to preprocess traffic flow data by removing the lumpiness from collected traffic flow data, before employing a variant of the LM algorithm to train the NN weights of an NN model. This approach aids NN training, as the preprocessed traffic flow data are more smooth and continuous than the original unprocessed traffic flow data. The proposed method was evaluated by forecasting short-term traffic flow conditions on the Mitchell freeway in Western Australia. With regard to the generalization capabilities for short-term traffic flow forecasting, the NN models developed using the proposed approach outperform those that are developed based on the alternative tested algorithms, which are particularly designed either for short-term traffic flow forecasting or for enhancing generalization capabilities of NNs.",
"title": ""
},
{
"docid": "84a258c59b5f4e576763c0c90426c475",
"text": "Analysis of gene and protein name synonyms in Entrez Gene and UniProtKB resources",
"title": ""
},
{
"docid": "ddccad7ce01cad45413e0bcc06ba6668",
"text": "This article highlights the thus far unexplained social and professional effects raised by robotization in surgical applications, and further develops an understanding of social acceptance among professional users of robots in the healthcare sector. It presents findings from ethnographic workplace research on human-robot interactions (HRI) in a population of twenty-three professionals. When considering all the findings, the latest da Vinci system equipped with four robotic arms substitutes two table-side surgical assistants, in contrast to the single-arm AESOP robot that only substitutes one surgical assistant. The adoption of robots and the replacement of surgical assistants provide clear evidence that robots are well-accepted among operating surgeons. Because HRI decrease the operating surgeon’s dependence on social assistance and since they replace the work tasks of surgical assistants, the robot is considered a surrogate artificial work partner and worker. This finding is consistent with prior HRI research indicating that users, through their cooperation with robots, often become less reliant on supportive social actions. This research relates to societal issues and provides the first indication that highly educated knowledge workers are beginning to be replaced by robot technology in working life and therefore points towards a paradigm shift in the service sector.",
"title": ""
},
{
"docid": "e487efba10df1b548d897d95b348bed2",
"text": "Threats of distributed denial of service (DDoS) attacks have been increasing day-by-day due to rapid development of computer networks and associated infrastructure, and millions of software applications, large and small, addressing all varieties of tasks. Botnets pose a major threat to network security as they are widely used for many Internet crimes such as DDoS attacks, identity theft, email spamming, and click fraud. Botnet based DDoS attacks are catastrophic to the victim network as they can exhaust both network bandwidth and resources of the victim machine. This survey presents a comprehensive overview of DDoS attacks, their causes, types with a taxonomy, and technical details of various attack launching tools. A detailed discussion of several botnet architectures, tools developed using botnet architectures, and pros and cons analysis are also included. Furthermore, a list of important issues and research challenges is also reported.",
"title": ""
},
{
"docid": "fc1b3f7da0812465b7ff57a65e36bf3c",
"text": "We describe N–body networks, a neural network architecture for learning the behavior and properties of complex many body physical systems. Our specific application is to learn atomic potential energy surfaces for use in molecular dynamics simulations. Our architecture is novel in that (a) it is based on a hierarchical decomposition of the many body system into subsytems (b) the activations of the network correspond to the internal state of each subsystem (c) the “neurons” in the network are constructed explicitly so as to guarantee that each of the activations is covariant to rotations (d) the neurons operate entirely in Fourier space, and the nonlinearities are realized by tensor products followed by Clebsch–Gordan decompositions. As part of the description of our network, we give a characterization of what way the weights of the network may interact with the activations so as to ensure that the covariance property is maintained.",
"title": ""
},
{
"docid": "40c8f5a117fbd7ec7197621b08b78caf",
"text": "In this thesis I will investigate the potential use of pre-existing Volumetric Variational Auto-Encoder architectures for object in-filling and de-noising. From the experiments presetned here it can be seen that even with relatively simple architectures, complex and varied noises can be repaired by learning generative latent spaces from training with data augmentation. For further improving the VAE's predictive abilities, I propose two novel redefinition of the Variational Bayes Auto-Encoder architecture for management of partial, semantically scaled input samples. The Located-VAE (LVAE) and Prior-VAE (PVAE) are extensions of variational reconstruction networks that attempt to connect real-world sliding window object segments to a latent space of known 3D objects for classification and prediction. Their predictive abilities are shown visually through use of the Classification and Prediction through Auto-Encoder Network (CaPtAEN) application for basic reconstruction tasks, as well as reconstruction with varying noise qualities at input. The classification abilities are demonstrated empirically through comparison of latent space representations of segments taken from the same object. Finally, we argue that although voxel models are visually interesting to work with, the computational complexity and massive sparsity are prohibitive for working with high-resolution models and prevent learning of structured high-level 3D filters. The lack of filter descriptiveness is visually explained using the application presented in this work.",
"title": ""
},
{
"docid": "75ce2ccca2afcae56101e141a42ac2a2",
"text": "Hip disarticulation is an amputation through the hip joint capsule, removing the entire lower extremity, with closure of the remaining musculature over the exposed acetabulum. Tumors of the distal and proximal femur were treated by total femur resection; a hip disarticulation sometimes is performance for massive trauma with crush injuries to the lower extremity. This article discusses the design a system for rehabilitation of a patient with bilateral hip disarticulations. The prosthetics designed allowed the patient to do natural gait suspended between parallel articulate crutches with the body weight support between the crutches. The care of this patient was a challenge due to bilateral amputations at such a high level and the special needs of a patient mobility. Keywords— Amputation, prosthesis, mobility,",
"title": ""
},
{
"docid": "4c05d5add4bd2130787fd894ce74323a",
"text": "Although semi-supervised model can extract the event mentions matching frequent event patterns, it suffers much from those event mentions, which match infrequent patterns or have no matching pattern. To solve this issue, this paper introduces various kinds of linguistic knowledge-driven event inference mechanisms to semi-supervised Chinese event extraction. These event inference mechanisms can capture linguistic knowledge from four aspects, i.e. semantics of argument role, compositional semantics of trigger, consistency on coreference events and relevant events, to further recover missing event mentions from unlabeled texts. Evaluation on the ACE 2005 Chinese corpus shows that our event inference mechanisms significantly outperform the refined state-of-the-art semi-supervised Chinese event extraction system in F1-score by 8.5%.",
"title": ""
},
{
"docid": "e82cd7c22668b0c9ed62b4afdf49d1f4",
"text": "This paper presents a tutorial on delta-sigma fractional-N PLLs for frequency synthesis. The presentation assumes the reader has a working knowledge of integer-N PLLs. It builds on this knowledge by introducing the additional concepts required to understand ΔΣ fractional-N PLLs. After explaining the limitations of integerN PLLs with respect to tuning resolution, the paper introduces the delta-sigma fractional-N PLL as a means of avoiding these limitations. It then presents a selfcontained explanation of the relevant aspects of deltasigma modulation, an extension of the well known integerN PLL linearized model to delta-sigma fractional-N PLLs, a design example, and techniques for wideband digital modulation of the VCO within a delta-sigma fractional-N PLL.",
"title": ""
},
{
"docid": "09f2f2184cb064851238a10d1d661b9e",
"text": "The rapid proliferation of information technologies especially the web 2.0 techniques have changed the fundamental ways how things can be done in many areas, including how researchers could communicate and collaborate with each other. The presence of the sheer volume of researcher and topical research information on the Web has led to the problem of information overload. There is a pressing need to develop researcher recommender systems such that users can be provided with personalized recommendations of the researchers they can potentially collaborate with for mutual research benefits. In an academic context, recommending suitable research partners to researchers can facilitate knowledge discovery and exchange, and ultimately improve the research productivity of both sides. Existing expertise recommendation research usually investigates into the expert finding problem from two independent dimensions, namely, the social relations and the common expertise. The main contribution of this paper is that we propose a novel researcher recommendation approach which combines the two dimensions of social relations and common expertise in a unified framework to improve the effectiveness of personalized researcher recommendation. Moreover, how our proposed framework can be applied to the real-world academic contexts is explained based on two case studies.",
"title": ""
},
{
"docid": "1e176f66a29b6bd3dfce649da1a4db9d",
"text": "In just a few years, crowdsourcing markets like Mechanical Turk have become the dominant mechanism for for building \"gold standard\" datasets in areas of computer science ranging from natural language processing to audio transcription. The assumption behind this sea change - an assumption that is central to the approaches taken in hundreds of research projects - is that crowdsourced markets can accurately replicate the judgments of the general population for knowledge-oriented tasks. Focusing on the important domain of semantic relatedness algorithms and leveraging Clark's theory of common ground as a framework, we demonstrate that this assumption can be highly problematic. Using 7,921 semantic relatedness judgements from 72 scholars and 39 crowdworkers, we show that crowdworkers on Mechanical Turk produce significantly different semantic relatedness gold standard judgements than people from other communities. We also show that algorithms that perform well against Mechanical Turk gold standard datasets do significantly worse when evaluated against other communities' gold standards. Our results call into question the broad use of Mechanical Turk for the development of gold standard datasets and demonstrate the importance of understanding these datasets from a human-centered point-of-view. More generally, our findings problematize the notion that a universal gold standard dataset exists for all knowledge tasks.",
"title": ""
},
{
"docid": "1bdc03ef96e5a6f7e947fd6d5a6721a8",
"text": "Semi-supervised bootstrapping techniques for relationship extraction from text iteratively expand a set of initial seed relationships while limiting the semantic drift. We research bootstrapping for relationship extraction using word embeddings to find similar relationships. Experimental results show that relying on word embeddings achieves a better performance on the task of extracting four types of relationships from a collection of newswire documents when compared with a baseline using TFIDF to find similar relationships.",
"title": ""
},
{
"docid": "1ba6f0efdac239fa2cb32064bb743d29",
"text": "This paper presents a new method for determining efficient spatial distributions of police patrol areas. This method employs a traditional maximal covering formulation and an innovative backup covering formulation to provide alternative optimal solutions to police decision makers, and to address the lack of objective quantitative methods for police area design in the literature or in practice. This research demonstrates that operations research methods can be used in police decision making, presents a new backup coverage model that is appropriate for patrol area design, and encourages the integration of geographic information systems and optimal solution procedures. The models and methods are tested with the police geography of Dallas, TX. The optimal solutions are compared with the existing police geography, showing substantial improvement in number of incidents covered as well as total distance traveled.",
"title": ""
},
{
"docid": "e5304e89e53b05b26f144ae5b2859512",
"text": "This paper describes an agent based simulation used to model human actions in belief space, a high-dimensional subset of information space associated with opinions. Using insights from animal collective behavior, we are able to simulate and identify behavior patterns that are similar to nomadic, flocking and stampeding patterns of animal groups. These behaviors have analogous manifestations in human interaction, emerging as solitary explorers, the fashion-conscious, and echo chambers, whose members are only aware of each other. We demonstrate that a small portion of nomadic agents that widely traverse belief space can disrupt a larger population of stampeding agents. We then model the concept of Adversarial Herding, where trolls, adversaries or other bad actors can exploit properties of technologically mediated communication to artificially create self sustaining runaway polarization. We call this condition the Pishkin Effect as it recalls the large scale buffalo stampedes that could be created by native Americans hunters. We then discuss opportunities for system design that could leverage the ability to recognize these negative patterns, and discuss affordances that may disrupt the formation of natural and deliberate echo chambers.",
"title": ""
},
{
"docid": "e4c493697d9bece8daec6b2dd583e6bb",
"text": "High dimensionality of the feature space is one of the most important concerns in text classification problems due to processing time and accuracy considerations. Selection of distinctive features is therefore essential for text classification. This study proposes a novel filter based probabilistic feature selection method, namely distinguishing feature selector (DFS), for text classification. The proposed method is compared with well-known filter approaches including chi square, information gain, Gini index and deviation from Poisson distribution. The comparison is carried out for different datasets, classification algorithms, and success measures. Experimental results explicitly indicate that DFS offers a competitive performance with respect to the abovementioned approaches in terms of classification accuracy, dimension reduction rate and processing time.",
"title": ""
},
{
"docid": "d0a854f4994695fbf521e94f82bd1201",
"text": "S 2018 PLATFORM & POSTER PRESENTATIONS",
"title": ""
},
{
"docid": "16546193b0096392d4f5ebf6ad7d35a8",
"text": "According to the ways to see the real environments, mirror metaphor augmented reality systems can be classified into video see-through virtual mirror displays and reflective half-mirror displays. The two systems have distinctive characteristics and application fields with different types of complexity. In this paper, we introduce a system configuration to implement a prototype of a reflective half-mirror display-based augmented reality system. We also present a two-phase calibration method using an extra camera for the system. Finally, we describe three error sources in the proposed system and show the result of analysis of these errors with several experiments.",
"title": ""
},
{
"docid": "acdd0043b764fe8bb9904ea6ca71e5cf",
"text": "We investigate the task of 2D articulated human pose estimation in unconstrained still images. This is extremely challenging because of variation in pose, anatomy, clothing, and imaging conditions. Current methods use simple models of body part appearance and plausible configurations due to limitations of available training data and constraints on computational expense. We show that such models severely limit accuracy. Building on the successful pictorial structure model (PSM) we propose richer models of both appearance and pose, using state-of-the-art discriminative classifiers without introducing unacceptable computational expense. We introduce a new annotated database of challenging consumer images, an order of magnitude larger than currently available datasets, and demonstrate over 50% relative improvement in pose estimation accuracy over a stateof-the-art method.",
"title": ""
},
{
"docid": "b8d1190ca313019386ed0ffd539a5a93",
"text": "A charge pump that generates positive and negative high voltages with low power-supply voltage and low power consumption was developed. By controlling the body and gate voltage of each transfer HVNMOS, high output voltage can be obtained from a low power-supply voltage. For low power consumption, the clock frequency of the charge pump is varied according to its output voltage. Output voltages of a seven-stage negative charge pump and a five-stage positive charge pump, fabricated with a 0.15- µ m CMOS process, were measured. These measurements show that the developed charge pump achieves the target regulation positive high voltage (+ 6.5 V) and negative high voltage (− 6 V) at low power-supply voltage Vdd of 1.5 V while also achieving low power consumption.",
"title": ""
}
] | scidocsrr |
eef6fdb81d07ee3c02cb0d082b02b290 | A multiple-camera system calibration toolbox using a feature descriptor-based calibration pattern | [
{
"docid": "641f8ac3567d543dd5df40a21629fbd7",
"text": "Virtual immersive environments or telepresence setups often consist of multiple cameras that have to be calibrated. We present a convenient method for doing this. The minimum is three cameras, but there is no upper limit. The method is fully automatic and a freely moving bright spot is the only calibration object. A set of virtual 3D points is made by waving the bright spot through the working volume. Its projections are found with subpixel precision and verified by a robust RANSAC analysis. The cameras do not have to see all points; only reasonable overlap between camera subgroups is necessary. Projective structures are computed via rank-4 factorization and the Euclidean stratification is done by imposing geometric constraints. This linear estimate initializes a postprocessing computation of nonlinear distortion, which is also fully automatic. We suggest a trick on how to use a very ordinary laser pointer as the calibration object. We show that it is possible to calibrate an immersive virtual environment with 16 cameras in less than 60 minutes reaching about 1/5 pixel reprojection error. The method has been successfully tested on numerous multicamera environments using varying numbers of cameras of varying quality.",
"title": ""
}
] | [
{
"docid": "668e72cfb7f1dca5b097ba7df01008b0",
"text": "Detecting PE malware files is now commonly approached using statistical and machine learning models. While these models commonly use features extracted from the structure of PE files, we propose that icons from these files can also help better predict malware. We propose a new machine learning approach to extract information from icons. Our proposed approach consists of two steps: 1) extracting icon features using summary statics, a histogram of gradients (HOG), and a convolutional autoencoder, 2) clustering icons based on the extracted icon features. Using publicly available data and by using machine learning experiments, we show our proposed icon clusters significantly boost the efficacy of malware prediction models. In particular, our experiments show an average accuracy increase of 10 percent when icon clusters are used in the prediction model.",
"title": ""
},
{
"docid": "c4f706ff9ceb514e101641a816ba7662",
"text": "Open set recognition problems exist in many domains. For example in security, new malware classes emerge regularly; therefore malware classication systems need to identify instances from unknown classes in addition to discriminating between known classes. In this paper we present a neural network based representation for addressing the open set recognition problem. In this representation instances from the same class are close to each other while instances from dierent classes are further apart, resulting in statistically signicant improvement when compared to other approaches on three datasets from two dierent domains.",
"title": ""
},
{
"docid": "613f0bf05fb9467facd2e58b70d2b09e",
"text": "The gold standard for improving sensory, motor and or cognitive abilities is long-term training and practicing. Recent work, however, suggests that intensive training may not be necessary. Improved performance can be effectively acquired by a complementary approach in which the learning occurs in response to mere exposure to repetitive sensory stimulation. Such training-independent sensory learning (TISL), which has been intensively studied in the somatosensory system, induces in humans lasting changes in perception and neural processing, without any explicit task training. It has been suggested that the effectiveness of this form of learning stems from the fact that the stimulation protocols used are optimized to alter synaptic transmission and efficacy. TISL provides novel ways to investigate in humans the relation between learning processes and underlying cellular and molecular mechanisms, and to explore alternative strategies for intervention and therapy.",
"title": ""
},
{
"docid": "7190c91917d1e1280010c66139837568",
"text": "GPUs and accelerators have become ubiquitous in modern supercomputing systems. Scientific applications from a wide range of fields are being modified to take advantage of their compute power. However, data movement continues to be a critical bottleneck in harnessing the full potential of a GPU. Data in the GPU memory has to be moved into the host memory before it can be sent over the network. MPI libraries like MVAPICH2 have provided solutions to alleviate this bottleneck using techniques like pipelining. GPUDirect RDMA is a feature introduced in CUDA 5.0, that allows third party devices like network adapters to directly access data in GPU device memory, over the PCIe bus. NVIDIA has partnered with Mellanox to make this solution available for InfiniBand clusters. In this paper, we evaluate the first version of GPUDirect RDMA for InfiniBand and propose designs in MVAPICH2 MPI library to efficiently take advantage of this feature. We highlight the limitations posed by current generation architectures in effectively using GPUDirect RDMA and address these issues through novel designs in MVAPICH2. To the best of our knowledge, this is the first work to demonstrate a solution for internode GPU-to-GPU MPI communication using GPUDirect RDMA. Results show that the proposed designs improve the latency of internode GPU-to-GPU communication using MPI Send/MPI Recv by 69% and 32% for 4Byte and 128KByte messages, respectively. The designs boost the uni-directional bandwidth achieved using 4KByte and 64KByte messages by 2x and 35%, respectively. We demonstrate the impact of the proposed designs using two end-applications: LBMGPU and AWP-ODC. They improve the communication times in these applications by up to 35% and 40%, respectively.",
"title": ""
},
{
"docid": "64cefd949f61afe81fbbb9ca1159dd4a",
"text": "Single carrier frequency division multiple access (SC-FDMA), which utilizes single carrier modulation and frequency domain equalization is a technique that has similar performance and essentially the same overall complexity as those of OFDM, in which high peak-to-average power ratio (PAPR) is a major drawback. An outstanding advantage of SC-FDMA is its lower PAPR due to its single carrier structure. In this paper, we analyze the PAPR of SC-FDMA signals with pulse shaping. We analytically derive the time domain SC-FDMA signals and numerically compare PAPR characteristics using the complementary cumulative distribution function (CCDF) of PAPR. The results show that SC-FDMA signals indeed have lower PAPR compared to those of OFDMA. Comparing the two forms of SC-FDMA, we find that localized FDMA (LFDMA) has higher PAPR than interleaved FDMA (IFDMA) but somewhat lower PAPR than OFDMA. Also noticeable is the fact that pulse shaping increases PAPR",
"title": ""
},
{
"docid": "419f6e534c04e169a998865f71ee9488",
"text": "Stroma in the tumor microenvironment plays a critical role in cancer progression, but how it promotes metastasis is poorly understood. Exosomes are small vesicles secreted by many cell types and enable a potent mode of intercellular communication. Here, we report that fibroblast-secreted exosomes promote breast cancer cell (BCC) protrusive activity and motility via Wnt-planar cell polarity (PCP) signaling. We show that exosome-stimulated BCC protrusions display mutually exclusive localization of the core PCP complexes, Fzd-Dvl and Vangl-Pk. In orthotopic mouse models of breast cancer, coinjection of BCCs with fibroblasts dramatically enhances metastasis that is dependent on PCP signaling in BCCs and the exosome component, Cd81 in fibroblasts. Moreover, we demonstrate that trafficking in BCCs promotes tethering of autocrine Wnt11 to fibroblast-derived exosomes. This work reveals an intercellular communication pathway whereby fibroblast exosomes mobilize autocrine Wnt-PCP signaling to drive BCC invasive behavior.",
"title": ""
},
{
"docid": "b6303ae2b77ac5c187694d5320ef65ff",
"text": "Mechanisms for continuously changing or shifting a system's attack surface are emerging as game-changers in cyber security. In this paper, we propose a novel defense mechanism for protecting the identity of nodes in Mobile Ad Hoc Networks and defeat the attacker's reconnaissance efforts. The proposed mechanism turns a classical attack mechanism - Sybil - into an effective defense mechanism, with legitimate nodes periodically changing their virtual identity in order to increase the uncertainty for the attacker. To preserve communication among legitimate nodes, we modify the network layer by introducing (i) a translation service for mapping virtual identities to real identities; (ii) a protocol for propagating updates of a node's virtual identity to all legitimate nodes; and (iii) a mechanism for legitimate nodes to securely join the network. We show that the proposed approach is robust to different types of attacks, and also show that the overhead introduced by the update protocol can be controlled by tuning the update frequency.",
"title": ""
},
{
"docid": "7a8979f96411ef37c079d85c77c03bac",
"text": "Ankle-foot orthoses (AFOs) are orthotic devices that support the movement of the ankles of disabled people, for example, those suffering from hemiplegia or peroneal nerve palsy. We have developed an intelligently controllable AFO (i-AFO) in which the ankle torque is controlled by a compact magnetorheological fluid brake. Gait-control tests with the i-AFO were performed for a patient with flaccid paralysis of the ankles, who has difficulty in voluntary movement of the peripheral part of the inferior limb, and physical limitations on his ankles. By using the i-AFO, his gait control was improved by prevention of drop foot in the swing phase and by forward promotion in the stance phase.",
"title": ""
},
{
"docid": "2c69eb4be7bc2bed32cfbbbe3bc41a5d",
"text": "The Sapienza University Networking framework for underwater Simulation Emulation and real-life Testing (SUNSET) is a toolkit for the implementation and testing of protocols for underwater sensor networks. SUNSET enables a radical new way of performing experimental research on underwater communications. It allows protocol designers and implementors to easily realize their solutions and to evaluate their performance through simulation, in-lab emulation and trials at sea in a direct and transparent way, and independently of specific underwater hardware platforms. SUNSET provides a complete toolchain of predeployment and deployment time tools able to identify risks, malfunctioning and under-performing solutions before incurring the expense of going to sea. Novel underwater systems can therefore be rapidly and easily investigated. Heterogeneous underwater communication technologies from different vendors can be used, allowing the evaluation of the impact of different combinations of hardware and software on the overall system performance. Using SUNSET, underwater devices can be reconfigured and controlled remotely in real time, using acoustic links. This allows the performance investigation of underwater systems under different settings and configurations and significantly reduces the cost and complexity of at-sea trials. This paper describes the architectural concept of SUNSET and presents some exemplary results of its use in the field. The SUNSET framework has been extensively validated during more than fifteen at-sea experimental campaigns in the past four years. Several of these have been conducted jointly with the NATO STO Centre for Maritime Research and Experimentation (CMRE) under a collaboration between the University of Rome and CMRE.",
"title": ""
},
{
"docid": "2dc261ab24914dd3f865b8ede5b71be9",
"text": "Twitter has become as much of a news media as a social network, and much research has turned to analyzing its content for tracking real-world events, from politics to sports and natural disasters. This paper describes the techniques we employed for the SNOW Data Challenge 2014, described in [16]. We show that aggressive filtering of tweets based on length and structure, combined with hierarchical clustering of tweets and ranking of the resulting clusters, achieves encouraging results. We present empirical results and discussion for two different Twitter streams focusing on the US presidential elections in 2012 and the recent events about Ukraine, Syria and the Bitcoin, in February 2014.",
"title": ""
},
{
"docid": "4804b3e0b8c2633ab0949bd98f900bb5",
"text": "Secure Simple Pairing (SSP), a characteristic of the Bluetooth Core Version 2.1 specification was build to address two foremost concerns amongst the Bluetooth user community: security and simplicity of the pairing process. It utilizes Elliptic Curve Diffie-Hellmen (ECDH) protocol for generating keys for the first time in Bluetooth pairing. It provides the security properties known session key security, forward security, resistance to key-compromise impersonation attack and to unknown key-share attack, key control. This paper presents the simulation and security analysis of Bluetooth pairing protocol for numeric comparison using ECDH in NS2. The protocol also employs SAGEMATH for cryptographic functions.",
"title": ""
},
{
"docid": "1499fd10ee703afd1d5b3ec35defa26b",
"text": "It is challenging to analyze the aerial locomotion of bats because of the complicated and intricate relationship between their morphology and flight capabilities. Developing a biologically inspired bat robot would yield insight into how bats control their body attitude and position through the complex interaction of nonlinear forces (e.g., aerodynamic) and their intricate musculoskeletal mechanism. The current work introduces a biologically inspired soft robot called Bat Bot (B2). The overall system is a flapping machine with 5 Degrees of Actuation (DoA). This work reports on some of the preliminary untethered flights of B2. B2 has a nontrivial morphology and it has been designed after examining several biological bats. Key DoAs, which contribute significantly to bat flight, are picked and incorporated in B2's flight mechanism design. These DoAs are: 1) forelimb flapping motion, 2) forelimb mediolateral motion (folding and unfolding) and 3) hindlimb dorsoventral motion (upward and downward movement).",
"title": ""
},
{
"docid": "f9ee82dcf1cce6d41a7f106436ee3a7d",
"text": "The Automatic Identification System (AIS) is based on VHF radio transmissions of ships' identity, position, speed and heading, in addition to other key parameters. In 2004, the Norwegian Defence Research Establishment (FFI) undertook studies to evaluate if the AIS signals could be detected in low Earth orbit. Since then, the interest in Space-Based AIS reception has grown significantly, and both public and private sector organizations have established programs to study the issue, and demonstrate such a capability in orbit. FFI is conducting two such programs. The objective of the first program was to launch a nano-satellite equipped with an AIS receiver into a near polar orbit, to demonstrate Space-Based AIS reception at high latitudes. The satellite was launched from India 12th July 2010. Even though the satellite has not finished commissioning, the receiver is operated with real-time transmission of received AIS data to the Norwegian Coastal Administration. The second program is an ESA-funded project to operate an AIS receiver on the European Columbus module of the International Space Station. Mounting of the equipment, the NORAIS receiver, was completed in April 2010. Currently, the AIS receiver has operated for more than three months, picking up several million AIS messages from more than 60 000 ship identities. In this paper, we will present experience gained with the space-based AIS systems, highlight aspects of tracking ships throughout their voyage, and comment on possible contributions to port security.",
"title": ""
},
{
"docid": "b954fa908229bdc0e514b2e21246b064",
"text": "The study of small-size animal models, such as the roundworm C. elegans, has provided great insight into several in vivo biological processes, extending from cell apoptosis to neural network computing. The physical manipulation of this micron-sized worm has always been a challenging task. Here, we discuss the applications, capabilities and future directions of a new family of worm manipulation tools, the 'worm chips'. Worm chips are microfabricated devices capable of precisely manipulating single worms or a population of worms and their environment. Worm chips pose a paradigm shift in current methodologies as they are capable of handling live worms in an automated fashion, opening up a new direction in in vivo small-size organism studies.",
"title": ""
},
{
"docid": "94c47638f35abc67c366ceb871898b86",
"text": "The past few years have seen a growing interest in the application\" of three-dimensional image processing. With the increasing demand for 3-D spatial information for tasks of passive navigation[7,12], automatic surveillance[9], aerial cartography\\l0,l3], and inspection in industrial automation, the importance of effective stereo analysis has been made quite clear. A particular challenge is to provide reliable and accurate depth data for input to object or terrain modelling systems (such as [5]. This paper describes an algorithm for such stereo sensing It uses an edge-based line-by-line stereo correlation scheme, and appears to be fast, robust, and parallel implementable. The processing consists of extracting edge descriptions for a stereo pair of images, linking these edges to their nearest neighbors to obtain the edge connectivity structure, correlating the edge descriptions on the basis of local edge properties, then cooperatively removmg those edge correspondences determined to be in error those which violate the connectivity structure of the two images. A further correlation process, using a technique similar to that used for the edges, is applied to the image intensity values over intervals defined by the previous correlation The result of the processing is a full image array disparity map of the scene viewed. Mechanism and Constraints Edge-based stereo uses operators to reduce an image to a depiction of its intensity boundaries, which are then correlated. Area-based stereo uses area windowing mechanisms to measure local statistical properties of the intensities, which can then be correlated. The system described here deals, initially, with the former, edges, because of the: a) reduced combinatorics (there are fewer edges than pixels), b) greater accuracy (edges can be positioned to sub-pixel precision, while area positioning precision is inversely proportional to window size, and considerably poorer), and c) more realistic in variance assumptions (area-based analysis presupposes that the photometric properties of a scene arc invariant to viewing position, while edge-based analysis works with the assumption that it is the geometric properties that are invariant to viewing position). Edges are found by a convolution operator They are located at positions in the image where a change in sign of second difference in intensity occurs. A particular operator, the one described here being 1 by 7 pixels in size, measures the directional first difference in intensity at each pixel' Second differences are computed from these, and changes in sign of these second differences are used to interpolate sero crossings (i.e. peaks in first difference). Certain local properties other than position are measured and associated with each edge contrast, image slope, and intensity to either side and links are kept to nearest neighbours above, below, and to the sides. It is these properties that define an edge and provide the basis for the correlation (see the discussions in [1,2]). The correlation is & search for edge correspondence between images Fig. 2 shows the edges found in the two images of fig. 
1 with the second difference operator (note, all stereo pairs in this paper are drawn for cross-eyed viewing) Although the operator works in both horizontal and vertical directions, it only allows correlation on edges whose horizontal gradient lies above the noise one standard deviation of the first difference in intensity With no prior knowledge of the viewing situation, one could have any edge in one image matching any edge in the other. By constraining the geometry of the cameras during picture taking one can vastly limit the computation that is required in determining corresponding edges in the two images. Consider fig. 3. If two balanced, equal focal length cameras are arranged with axes parallel, then they can be conceived of as sharing a single common image plane. Any point in the scene will project to two points on that joint image plane (one through each of the two lens centers), the connection of which will produce a line parallel to the baseline between the cameras. Thus corresponding edges in the two images must lie along the tame line in the joint image plane This line is termed an epipolar line. If the baseline between the two cameras happens to be parallel to an axis of the cameras, then the correlation only need consider edges lying along corresponding lines parallel to that axis in the two images. Fig. 3 indicates this camera geometry a geometry which produces rectified The edge operator is simple, basically one dimensional, and is noteworthy only in that it it fast and fairly effective.",
"title": ""
},
{
"docid": "26fef7add5f873aa7ec08bff979ef77c",
"text": "Citation: Nermin Kamal., et al. “Restorability of Teeth: A Numerical Simplified Restorative Decision-Making Chart”. EC Dental Science 17.6 (2018): 961-967. Abstract A decision to extract or to keep a tooth was always a debatable matter in dentistry. Each dental specialty has its own perspective in that regards. Although, real life in the dental clinic showed that the decision is always multi-disciplinary, and that full awareness of all aspects should be there in order to reach to a reliable outcome. This article presents a simple evidence-based clinical chart for the judgment of restorability of teeth for better treatment planning.",
"title": ""
},
{
"docid": "8cff1a60fd0eeb60924333be5641ca83",
"text": "Since Wireless Sensor Networks (WSNs) are composed of a set of sensor nodes that limit resource constraints such as energy constraints, energy consumption in WSNs is one of the challenges of these networks. One of the solutions to reduce energy consumption in WSNs is to use clustering. In clustering, cluster members send their data to their Cluster Head (CH), and the CH after collecting the data, sends them to the Base Station (BS). In clustering, choosing CHs is very important; so many methods have proposed to choose the CH. In this study, a hesitant fuzzy method with three input parameters namely, remaining energy, distance to the BS, distance to the center of cluster is proposed for efficient cluster head selection in WSNs. We define different scenarios and simulate them, then investigate the results of simulation.",
"title": ""
},
{
"docid": "9c74b77e79217602bb21a36a5787ed59",
"text": "Ship detection on spaceborne images has attracted great interest in the applications of maritime security and traffic control. Optical images stand out from other remote sensing images in object detection due to their higher resolution and more visualized contents. However, most of the popular techniques for ship detection from optical spaceborne images have two shortcomings: 1) Compared with infrared and synthetic aperture radar images, their results are affected by weather conditions, like clouds and ocean waves, and 2) the higher resolution results in larger data volume, which makes processing more difficult. Most of the previous works mainly focus on solving the first problem by improving segmentation or classification with complicated algorithms. These methods face difficulty in efficiently balancing performance and complexity. In this paper, we propose a ship detection approach to solving the aforementioned two issues using wavelet coefficients extracted from JPEG2000 compressed domain combined with deep neural network (DNN) and extreme learning machine (ELM). Compressed domain is adopted for fast ship candidate extraction, DNN is exploited for high-level feature representation and classification, and ELM is used for efficient feature pooling and decision making. Extensive experiments demonstrate that, in comparison with the existing relevant state-of-the-art approaches, the proposed method requires less detection time and achieves higher detection accuracy.",
"title": ""
},
{
"docid": "1e25480ef6bd5974fcd806aac7169298",
"text": "Alphabetical ciphers are being used since centuries for inducing confusion in messages, but there are some drawbacks that are associated with Classical alphabetic techniques like concealment of key and plaintext. Here in this paper we will suggest an encryption technique that is a blend of both classical encryption as well as modern technique, this hybrid technique will be superior in terms of security than average Classical ciphers.",
"title": ""
},
{
"docid": "e0eded1237c635af3c762f6bbe5d1b26",
"text": "Locating boundaries between coherent and/or repetitive segments of a time series is a challenging problem pervading many scientific domains. In this paper we propose an unsupervised method for boundary detection, combining three basic principles: novelty, homogeneity, and repetition. In particular, the method uses what we call structure features, a representation encapsulating both local and global properties of a time series. We demonstrate the usefulness of our approach in detecting music structure boundaries, a task that has received much attention in recent years and for which exist several benchmark datasets and publicly available annotations. We find our method to significantly outperform the best accuracies published so far. Importantly, our boundary approach is generic, thus being applicable to a wide range of time series beyond the music and audio domains.",
"title": ""
}
] | scidocsrr |
6e3a1a74ece7e0c49866c42f870f1d8d | Data Integration: The Current Status and the Way Forward | [
{
"docid": "d95cd76008dd65d5d7f00c82bad013d3",
"text": "Though data analysis tools continue to improve, analysts still expend an inordinate amount of time and effort manipulating data and assessing data quality issues. Such \"data wrangling\" regularly involves reformatting data values or layout, correcting erroneous or missing values, and integrating multiple data sources. These transforms are often difficult to specify and difficult to reuse across analysis tasks, teams, and tools. In response, we introduce Wrangler, an interactive system for creating data transformations. Wrangler combines direct manipulation of visualized data with automatic inference of relevant transforms, enabling analysts to iteratively explore the space of applicable operations and preview their effects. Wrangler leverages semantic data types (e.g., geographic locations, dates, classification codes) to aid validation and type conversion. Interactive histories support review, refinement, and annotation of transformation scripts. User study results show that Wrangler significantly reduces specification time and promotes the use of robust, auditable transforms instead of manual editing.",
"title": ""
},
{
"docid": "c6abeae6e9287f04b472595a47e974ad",
"text": "Data curation is the act of discovering a data source(s) of interest, cleaning and transforming the new data, semantically integrating it with other local data sources, and deduplicating the resulting composite. There has been much research on the various components of curation (especially data integration and deduplication). However, there has been little work on collecting all of the curation components into an integrated end-to-end system. In addition, most of the previous work will not scale to the sizes of problems that we are finding in the field. For example, one web aggregator requires the curation of 80,000 URLs and a second biotech company has the problem of curating 8000 spreadsheets. At this scale, data curation cannot be a manual (human) effort, but must entail machine learning approaches with a human assist only when necessary. This paper describes Data Tamer, an end-to-end curation system we have built at M.I.T. Brandeis, and Qatar Computing Research Institute (QCRI). It expects as input a sequence of data sources to add to a composite being constructed over time. A new source is subjected to machine learning algorithms to perform attribute identification, grouping of attributes into tables, transformation of incoming data and deduplication. When necessary, a human can be asked for guidance. Also, Data Tamer includes a data visualization component so a human can examine a data source at will and specify manual transformations. We have run Data Tamer on three real world enterprise curation problems, and it has been shown to lower curation cost by about 90%, relative to the currently deployed production software. This article is published under a Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits distribution and reproduction in any medium as well allowing derivative works, provided that you attribute the original work to the author(s) and CIDR 2013. 6th Biennial Conference on Innovative Data Systems Research (CIDR ’13) January 6-9, 2013, Asilomar, California, USA.",
"title": ""
}
] | [
{
"docid": "0f3cad05c9c267f11c4cebd634a12c59",
"text": "The recent, exponential rise in adoption of the most disparate Internet of Things (IoT) devices and technologies has reached also Agriculture and Food (Agri-Food) supply chains, drumming up substantial research and innovation interest towards developing reliable, auditable and transparent traceability systems. Current IoT-based traceability and provenance systems for Agri-Food supply chains are built on top of centralized infrastructures and this leaves room for unsolved issues and major concerns, including data integrity, tampering and single points of failure. Blockchains, the distributed ledger technology underpinning cryptocurrencies such as Bitcoin, represent a new and innovative technological approach to realizing decentralized trustless systems. Indeed, the inherent properties of this digital technology provide fault-tolerance, immutability, transparency and full traceability of the stored transaction records, as well as coherent digital representations of physical assets and autonomous transaction executions. This paper presents AgriBlockIoT, a fully decentralized, blockchain-based traceability solution for Agri-Food supply chain management, able to seamless integrate IoT devices producing and consuming digital data along the chain. To effectively assess AgriBlockIoT, first, we defined a classical use-case within the given vertical domain, namely from-farm-to-fork. Then, we developed and deployed such use-case, achieving traceability using two different blockchain implementations, namely Ethereum and Hyperledger Sawtooth. Finally, we evaluated and compared the performance of both the deployments, in terms of latency, CPU, and network usage, also highlighting their main pros and cons.",
"title": ""
},
{
"docid": "6858c559b78c6f2b5000c22e2fef892b",
"text": "Graph clustering is one of the key techniques for understanding the structures present in graphs. Besides cluster detection, identifying hubs and outliers is also a key task, since they have important roles to play in graph data mining. The structural clustering algorithm SCAN, proposed by Xu et al., is successfully used in many application because it not only detects densely connected nodes as clusters but also identifies sparsely connected nodes as hubs or outliers. However, it is difficult to apply SCAN to large-scale graphs due to its high time complexity. This is because it evaluates the density for all adjacent nodes included in the given graphs. In this paper, we propose a novel graph clustering algorithm named SCAN++. In order to reduce time complexity, we introduce new data structure of directly two-hop-away reachable node set (DTAR). DTAR is the set of two-hop-away nodes from a given node that are likely to be in the same cluster as the given node. SCAN++ employs two approaches for efficient clustering by using DTARs without sacrificing clustering quality. First, it reduces the number of the density evaluations by computing the density only for the adjacent nodes such as indicated by DTARs. Second, by sharing a part of the density evaluations for DTARs, it offers efficient density evaluations of adjacent nodes. As a result, SCAN++ detects exactly the same clusters, hubs, and outliers from large-scale graphs as SCAN with much shorter computation time. Extensive experiments on both real-world and synthetic graphs demonstrate the performance superiority of SCAN++ over existing approaches.",
"title": ""
},
{
"docid": "86ededf9b452bbc51117f5a117247b51",
"text": "An approach to high field control, particularly in the areas near the high voltage (HV) and ground terminals of an outdoor insulator, is proposed using a nonlinear grading material; Zinc Oxide (ZnO) microvaristors compounded with other polymeric materials to obtain the required properties and allow easy application. The electrical properties of the microvaristor compounds are characterised by a nonlinear field-dependent conductivity. This paper describes the principles of the proposed field-control solution and demonstrates the effectiveness of the proposed approach in controlling the electric field along insulator profiles. A case study is carried out for a typical 11 kV polymeric insulator design to highlight the merits of the grading approach. Analysis of electric potential and field distributions on the insulator surface is described under dry clean and uniformly contaminated surface conditions for both standard and microvaristor-graded insulators. The grading and optimisation principles to allow better performance are investigated to improve the performance of the insulator both under steady state operation and under surge conditions. Furthermore, the dissipated power and associated heat are derived to examine surface heating and losses in the grading regions and for the complete insulator. Preliminary tests on inhouse prototype insulators have confirmed better flashover performance of the proposed graded insulator with a 21 % increase in flashover voltage.",
"title": ""
},
{
"docid": "831b153045d9afc8f92336b3ba8019c6",
"text": "The progress in the field of electronics and technology as well as the processing of signals coupled with advance in the use of computer technology has given the opportunity to record and analyze the bio-electric signals from the human body in real time that requires dealing with many challenges according to the nature of the signal and its frequency. This could be up to 1 kHz, in addition to the need to transfer data from more than one channel at the same time. Moreover, another challenge is a high sensitivity and low noise measurements of the acquired bio-electric signals which may be tens of micro volts in amplitude. For these reasons, a low power wireless Electromyography (EMG) data transfer system is designed in order to meet these challenging demands. In this work, we are able to develop an EMG analogue signal processing hardware, along with computer based supporting software. In the development of the EMG analogue signal processing hardware, many important issues have been addressed. Some of these issues include noise and artifact problems, as well as the bias DC current. The computer based software enables the user to analyze the collected EMG data and plot them on graphs for visual decision making. The work accomplished in this study enables users to use the surface EMG device for recording EMG signals for various purposes in movement analysis in medical diagnosis, rehabilitation sports medicine and ergonomics. Results revealed that the proposed system transmit and receive the signal without any losing in the information of signals.",
"title": ""
},
{
"docid": "835b7a2b3d9c457a962e6b432665c7ce",
"text": "In this paper we investigate the feasibility of using synthetic data to augment face datasets. In particular, we propose a novel generative adversarial network (GAN) that can disentangle identity-related attributes from non-identity-related attributes. This is done by training an embedding network that maps discrete identity labels to an identity latent space that follows a simple prior distribution, and training a GAN conditioned on samples from that distribution. Our proposed GAN allows us to augment face datasets by generating both synthetic images of subjects in the training set and synthetic images of new subjects not in the training set. By using recent advances in GAN training, we show that the synthetic images generated by our model are photo-realistic, and that training with augmented datasets can indeed increase the accuracy of face recognition models as compared with models trained with real images alone.",
"title": ""
},
{
"docid": "495be81dda82d3e4d90a34b6716acf39",
"text": "Botnets such as Conficker and Torpig utilize high entropy domains for fluxing and evasion. Bots may query a large number of domains, some of which may fail. In this paper, we present techniques where the failed domain queries (NXDOMAIN) may be utilized for: (i) Speeding up the present detection strategies which rely only on successful DNS domains. (ii) Detecting Command and Control (C&C) server addresses through features such as temporal correlation and information entropy of both successful and failed domains. We apply our technique to a Tier-1 ISP dataset obtained from South Asia, and a campus DNS trace, and thus validate our methods by detecting Conficker botnet IPs and other anomalies with a false positive rate as low as 0.02%. Our technique can be applied at the edge of an autonomous system for real-time detection.",
"title": ""
},
{
"docid": "6fdeeea1714d484c596468aea053848f",
"text": "Standard slow start does not work well under large bandwidthdelay product (BDP) networks. We find two causes of this problem in existing three popular operating systems, Linux, FreeBSD and Windows XP. The first cause is that because of the exponential increase of cwnd during standard slow start, heavy packet losses occur. Recovering from heavy packet losses puts extremely high load on end systems which renders the end systems completely unresponsive for a long time, resulting in a long blackout period of no transmission. This problem commonly occurs with the three operating systems. The second cause is that some of proprietary protocol optimizations applied for slow start by these operating systems to relieve the system load happen to slow down the loss recovery followed by slow start. To remedy this problem, we propose a new slow start algorithm, called Hybrid Start (HyStart) that finds a “safe” exit point of slow start at which slow start can finish and safely move to congestion avoidance without causing any heavy packet losses. HyStart uses ACK trains and RTT delay samples to detect whether (1) the forward path is congested or (2) the current size of congestion window has reached the available capacity of the forward path. HyStart is a plug-in to the TCP sender and does not require any change in TCP receivers. We implemented HyStart for TCP-NewReno and TCP-SACK in Linux and compare its performance with five different slow start schemes with the TCP receivers of the three different operating systems in the Internet and also in the lab testbeds. Our results indicate that HyStart works consistently well under diverse network environments including asymmetric links and high and low BDP networks. Especially with different operating system receivers (Windows XP and FreeBSD), HyStart improves the start-up throughput of TCP more than 2 to 3 times.",
"title": ""
},
{
"docid": "4e85039497c60f8241d598628790f543",
"text": "Knowledge management (KM) is a dominant theme in the behavior of contemporary organizations. While KM has been extensively studied in developed economies, it is much less well understood in developing economies, notably those that are characterized by different social and cultural traditions to the mainstream of Western societies. This is notably the case in China. This chapter develops and tests a theoretical model that explains the impact of leadership style and interpersonal trust on the intention of information and knowledge workers in China to share their knowledge with their peers. All the hypotheses are supported, showing that both initiating structure and consideration have a significant effect on employees’ intention to share knowledge through trust building: 28.2% of the variance in employees’ intention to share knowledge is explained. The authors discuss the theoretical contributions of the chapter, identify future research opportunities, and highlight the implications for practicing managers. DOI: 10.4018/978-1-60566-920-5.ch009",
"title": ""
},
{
"docid": "da45568bf2ec4bfe32f927eb54e78816",
"text": "We explore controller input mappings for games using a deformable prototype that combines deformation gestures with standard button input. In study one, we tested discrete gestures using three simple games. We categorized the control schemes as binary (button only), action, and navigation, the latter two named based on the game mechanics mapped to the gestures. We found that the binary scheme performed the best, but gesture-based control schemes are stimulating and appealing. Results also suggest that the deformation gestures are best mapped to simple and natural tasks. In study two, we tested continuous gestures in a 3D racing game using the same control scheme categorization. Results were mostly consistent with study one but showed an improvement in performance and preference for the action control scheme.",
"title": ""
},
{
"docid": "0df2ca944dcdf79369ef5a7424bf3ffe",
"text": "This article first presents two theories representing distinct approaches to the field of stress research: Selye's theory of `systemic stress' based in physiology and psychobiology, and the `psychological stress' model developed by Lazarus. In the second part, the concept of coping is described. Coping theories may be classified according to two independent parameters: traitoriented versus state-oriented, and microanalytic versus macroanalytic approaches. The multitude of theoretical conceptions is based on the macroanalytic, trait-oriented approach. Examples of this approach that are presented in this article are `repression–sensitization,' `monitoringblunting,' and the `model of coping modes.' The article closes with a brief outline of future perspectives in stress and coping research.",
"title": ""
},
{
"docid": "375766c4ae473312c73e0487ab57acc8",
"text": "There are three reasons why the asymmetric crooked nose is one of the greatest challenges in rhinoplasty surgery. First, the complexity of the problem is not appreciated by the patient nor understood by the surgeon. Patients often see the obvious deviation of the nose, but not the distinct differences between the right and left sides. Surgeons fail to understand and to emphasize to the patient that each component of the nose is asymmetric. Second, these deformities can be improved, but rarely made flawless. For this reason, patients are told that the result will be all \"-er words,\" better, straighter, cuter, but no \"t-words,\" there is no perfect nor straight. Most surgeons fail to realize that these cases represent asymmetric noses on asymmetric faces with the variable of ipsilateral and contralateral deviations. Third, these cases demand a wide range of sophisticated surgical techniques, some of which have a minimal margin of error. This article offers an in-depth look at analysis, preoperative planning, and surgical techniques available for dealing with the asymmetric crooked nose.",
"title": ""
},
{
"docid": "5e6175d56150485d559d0c1a963e12b8",
"text": "High-resolution depth map can be inferred from a lowresolution one with the guidance of an additional highresolution texture map of the same scene. Recently, deep neural networks with large receptive fields are shown to benefit applications such as image completion. Our insight is that super resolution is similar to image completion, where only parts of the depth values are precisely known. In this paper, we present a joint convolutional neural pyramid model with large receptive fields for joint depth map super-resolution. Our model consists of three sub-networks, two convolutional neural pyramids concatenated by a normal convolutional neural network. The convolutional neural pyramids extract information from large receptive fields of the depth map and guidance map, while the convolutional neural network effectively transfers useful structures of the guidance image to the depth image. Experimental results show that our model outperforms existing state-of-the-art algorithms not only on data pairs of RGB/depth images, but also on other data pairs like color/saliency and color-scribbles/colorized images.",
"title": ""
},
{
"docid": "571a4de4ac93b26d55252dab86e2a0d3",
"text": "Amnestic mild cognitive impairment (MCI) is a degenerative neurological disorder at the early stage of Alzheimer’s disease (AD). This work is a pilot study aimed at developing a simple scalp-EEG-based method for screening and monitoring MCI and AD. Specifically, the use of graphical analysis of inter-channel coherence of resting EEG for the detection of MCI and AD at early stages is explored. Resting EEG records from 48 age-matched subjects (mean age 75.7 years)—15 normal controls (NC), 16 with early-stage MCI, and 17 with early-stage AD—are examined. Network graphs are constructed using pairwise inter-channel coherence measures for delta–theta, alpha, beta, and gamma band frequencies. Network features are computed and used in a support vector machine model to discriminate among the three groups. Leave-one-out cross-validation discrimination accuracies of 93.6% for MCI vs. NC (p < 0.0003), 93.8% for AD vs. NC (p < 0.0003), and 97.0% for MCI vs. AD (p < 0.0003) are achieved. These results suggest the potential for graphical analysis of resting EEG inter-channel coherence as an efficacious method for noninvasive screening for MCI and early AD.",
"title": ""
},
{
"docid": "97b212bb8fde4859e368941a4e84ba90",
"text": "What appears to be a simple pattern of results—distributed-study opportunities usually produce bettermemory thanmassed-study opportunities—turns out to be quite complicated.Many ‘‘impostor’’ effects such as rehearsal borrowing, strategy changes during study, recency effects, and item skipping complicate the interpretation of spacing experiments. We suggest some best practices for future experiments that diverge from the typical spacing experiments in the literature. Next, we outline themajor theories that have been advanced to account for spacing studies while highlighting the critical experimental evidence that a theory of spacingmust explain. We then propose a tentative verbal theory based on the SAM/REMmodel that utilizes contextual variability and study-phase retrieval to explain the major findings, as well as predict some novel results. Next, we outline the major phenomena supporting testing as superior to restudy on long-term retention tests, and review theories of the testing phenomenon, along with some possible boundary conditions. Finally, we suggest some ways that spacing and testing can be integrated into the classroom, and ask to what extent educators already capitalize on these phenomena. Along the way, we present several new experiments that shed light on various facets of the spacing and testing effects.",
"title": ""
},
{
"docid": "af0df66f001ffd9601ac3c89edf6af0f",
"text": "State-of-the-art speech recognition systems rely on fixed, handcrafted features such as mel-filterbanks to preprocess the waveform before the training pipeline. In this paper, we study end-toend systems trained directly from the raw waveform, building on two alternatives for trainable replacements of mel-filterbanks that use a convolutional architecture. The first one is inspired by gammatone filterbanks (Hoshen et al., 2015; Sainath et al, 2015), and the second one by the scattering transform (Zeghidour et al., 2017). We propose two modifications to these architectures and systematically compare them to mel-filterbanks, on the Wall Street Journal dataset. The first modification is the addition of an instance normalization layer, which greatly improves on the gammatone-based trainable filterbanks and speeds up the training of the scattering-based filterbanks. The second one relates to the low-pass filter used in these approaches. These modifications consistently improve performances for both approaches, and remove the need for a careful initialization in scattering-based trainable filterbanks. In particular, we show a consistent improvement in word error rate of the trainable filterbanks relatively to comparable mel-filterbanks. It is the first time end-to-end models trained from the raw signal significantly outperform mel-filterbanks on a large vocabulary task under clean recording conditions.",
"title": ""
},
{
"docid": "a2f4005c681554cc422b11a6f5087793",
"text": "Emerged as salient in the recent home appliance consumer market is a new generation of home cleaning robot featuring the capability of Simultaneous Localization and Mapping (SLAM). SLAM allows a cleaning robot not only to selfoptimize its work paths for efficiency but also to self-recover from kidnappings for user convenience. By kidnapping, we mean that a robot is displaced, in the middle of cleaning, without its SLAM aware of where it moves to. This paper presents a vision-based kidnap recovery with SLAM for home cleaning robots, the first of its kind, using a wheel drop switch and an upwardlooking camera for low-cost applications. In particular, a camera with a wide-angle lens is adopted for a kidnapped robot to be able to recover its pose on a global map with only a single image. First, the kidnapping situation is effectively detected based on a wheel drop switch. Then, for S. Lee · S. Lee (B) School of Information and Communication Engineering and Department of Interaction Science, Sungkyunkwan University, Suwon, South Korea e-mail: lsh@ece.skku.ac.kr S. Lee e-mail: seongsu.lee@lge.com S. Lee · S. Baek Future IT Laboratory, LG Electronics Inc., Seoul, South Korea e-mail: seungmin2.baek@lge.com an efficient kidnap recovery, a coarse-to-fine approach to matching the image features detected with those associated with a large number of robot poses or nodes, built as a map in graph representation, is adopted. The pose ambiguity, e.g., due to symmetry is taken care of, if any. The final robot pose is obtained with high accuracy from the fine level of the coarse-to-fine hierarchy by fusing poses estimated from a chosen set of matching nodes. The proposed method was implemented as an embedded system with an ARM11 processor on a real commercial home cleaning robot and tested extensively. Experimental results show that the proposed method works well even in the situation in which the cleaning robot is suddenly kidnapped during the map building process.",
"title": ""
},
{
"docid": "b5b7bef8ec2d38bb2821dc380a3a49bf",
"text": "Maternal uniparental disomy (UPD) 7 is found in approximately 5% of patients with Silver-Russell syndrome. By a descriptive and comparative clinical analysis of all published cases (more than 60 to date) their phenotype is updated and compared with the clinical findings in patients with Sliver-Russell syndrome (SRS) of either unexplained etiology or epimutations of the imprinting center region 1 (ICR1) on 11p15. The higher frequency of relative macrocephaly and high forehead/frontal bossing makes the face of patients with epimutations of the ICR1 on 11p15 more distinctive than the face of cases with SRS of unexplained etiology or maternal UPD 7. Because of the distinct micrognathia in the latter, their triangular facial gestalt is more pronounced than in the other groups. However, solely by clinical findings patients with maternal UPD 7 cannot be discriminated unambiguously from patients with epimutations of the ICR1 on 11p15 or SRS of unexplained etiology. Therefore, both loss of methylation of the ICR1 on 11p15 and maternal UPD 7 should be investigated for if SRS is suspected.",
"title": ""
},
{
"docid": "cf8cdd70dde3f55ed097972be1d2fde7",
"text": "BACKGROUND\nText-based patient medical records are a vital resource in medical research. In order to preserve patient confidentiality, however, the U.S. Health Insurance Portability and Accountability Act (HIPAA) requires that protected health information (PHI) be removed from medical records before they can be disseminated. Manual de-identification of large medical record databases is prohibitively expensive, time-consuming and prone to error, necessitating automatic methods for large-scale, automated de-identification.\n\n\nMETHODS\nWe describe an automated Perl-based de-identification software package that is generally usable on most free-text medical records, e.g., nursing notes, discharge summaries, X-ray reports, etc. The software uses lexical look-up tables, regular expressions, and simple heuristics to locate both HIPAA PHI, and an extended PHI set that includes doctors' names and years of dates. To develop the de-identification approach, we assembled a gold standard corpus of re-identified nursing notes with real PHI replaced by realistic surrogate information. This corpus consists of 2,434 nursing notes containing 334,000 words and a total of 1,779 instances of PHI taken from 163 randomly selected patient records. This gold standard corpus was used to refine the algorithm and measure its sensitivity. To test the algorithm on data not used in its development, we constructed a second test corpus of 1,836 nursing notes containing 296,400 words. The algorithm's false negative rate was evaluated using this test corpus.\n\n\nRESULTS\nPerformance evaluation of the de-identification software on the development corpus yielded an overall recall of 0.967, precision value of 0.749, and fallout value of approximately 0.002. On the test corpus, a total of 90 instances of false negatives were found, or 27 per 100,000 word count, with an estimated recall of 0.943. Only one full date and one age over 89 were missed. No patient names were missed in either corpus.\n\n\nCONCLUSION\nWe have developed a pattern-matching de-identification system based on dictionary look-ups, regular expressions, and heuristics. Evaluation based on two different sets of nursing notes collected from a U.S. hospital suggests that, in terms of recall, the software out-performs a single human de-identifier (0.81) and performs at least as well as a consensus of two human de-identifiers (0.94). The system is currently tuned to de-identify PHI in nursing notes and discharge summaries but is sufficiently generalized and can be customized to handle text files of any format. Although the accuracy of the algorithm is high, it is probably insufficient to be used to publicly disseminate medical data. The open-source de-identification software and the gold standard re-identified corpus of medical records have therefore been made available to researchers via the PhysioNet website to encourage improvements in the algorithm.",
"title": ""
},
{
"docid": "1b647a09085a41e66f8c1e3031793fed",
"text": "In this paper we apply distributional semantic information to document-level machine translation. We train monolingual and bilingual word vector models on large corpora and we evaluate them first in a cross-lingual lexical substitution task and then on the final translation task. For translation, we incorporate the semantic information in a statistical document-level decoder (Docent), by enforcing translation choices that are semantically similar to the context. As expected, the bilingual word vector models are more appropriate for the purpose of translation. The final document-level translator incorporating the semantic model outperforms the basic Docent (without semantics) and also performs slightly over a standard sentencelevel SMT system in terms of ULC (the average of a set of standard automatic evaluation metrics for MT). Finally, we also present some manual analysis of the translations of some concrete documents.",
"title": ""
},
{
"docid": "7f2403a849690fb12a184ec67b0a2872",
"text": "Deep reinforcement learning achieves superhuman performance in a range of video game environments, but requires that a designer manually specify a reward function. It is often easier to provide demonstrations of a target behavior than to design a reward function describing that behavior. Inverse reinforcement learning (IRL) algorithms can infer a reward from demonstrations in low-dimensional continuous control environments, but there has been little work on applying IRL to high-dimensional video games. In our CNN-AIRL baseline, we modify the state-of-the-art adversarial IRL (AIRL) algorithm to use CNNs for the generator and discriminator. To stabilize training, we normalize the reward and increase the size of the discriminator training dataset. We additionally learn a low-dimensional state representation using a novel autoencoder architecture tuned for video game environments. This embedding is used as input to the reward network, improving the sample efficiency of expert demonstrations. Our method achieves high-level performance on the simple Catcher video game, substantially outperforming the CNN-AIRL baseline. We also score points on the Enduro Atari racing game, but do not match expert performance, highlighting the need for further work.",
"title": ""
}
] | scidocsrr |
bbc4986971a6a5b4daf955c0991530a2 | A Survey on Deep Learning Toolkits and Libraries for Intelligent User Interfaces | [
{
"docid": "d5faccc7187a185f6e287a7cc29f0878",
"text": "The revival of deep neural networks and the availability of ImageNet laid the foundation for recent success in highly complex recognition tasks. However, ImageNet does not cover all visual concepts of all possible application scenarios. Hence, application experts still record new data constantly and expect the data to be used upon its availability. In this paper, we follow this observation and apply the classical concept of fine-tuning deep neural networks to scenarios where data from known or completely new classes is continuously added. Besides a straightforward realization of continuous fine-tuning, we empirically analyze how computational burdens of training can be further reduced. Finally, we visualize how the network’s attention maps evolve over time which allows for visually investigating what the network learned during continuous fine-tuning.",
"title": ""
},
{
"docid": "d053f8b728f94679cd73bc91193f0ba6",
"text": "Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.",
"title": ""
}
] | [
{
"docid": "e602ab2a2d93a8912869ae8af0925299",
"text": "Software-based MMU emulation lies at the heart of outof-VM live memory introspection, an important technique in the cloud setting that applications such as live forensics and intrusion detection depend on. Due to the emulation, the software-based approach is much slower compared to native memory access by the guest VM. The slowness not only results in undetected transient malicious behavior, but also inconsistent memory view with the guest; both undermine the effectiveness of introspection. We propose the immersive execution environment (ImEE) with which the guest memory is accessed at native speed without any emulation. Meanwhile, the address mappings used within the ImEE are ensured to be consistent with the guest throughout the introspection session. We have implemented a prototype of the ImEE on Linux KVM. The experiment results show that ImEE-based introspection enjoys a remarkable speed up, performing several hundred times faster than the legacy method. Hence, this design is especially useful for realtime monitoring, incident response and high-intensity introspection.",
"title": ""
},
{
"docid": "902a60b23d65c644877b350c63b86ba8",
"text": "The Internet of Things (IoT) is set to occupy a substantial component of future Internet. The IoT connects sensors and devices that record physical observations to applications and services of the Internet[1]. As a successor to technologies such as RFID and Wireless Sensor Networks (WSN), the IoT has stumbled into vertical silos of proprietary systems, providing little or no interoperability with similar systems. As the IoT represents future state of the Internet, an intelligent and scalable architecture is required to provide connectivity between these silos, enabling discovery of physical sensors and interpretation of messages between the things. This paper proposes a gateway and Semantic Web enabled IoT architecture to provide interoperability between systems, which utilizes established communication and data standards. The Semantic Gateway as Service (SGS) allows translation between messaging protocols such as XMPP, CoAP and MQTT via a multi-protocol proxy architecture. Utilization of broadly accepted specifications such as W3Cs Semantic Sensor Network (SSN) ontology for semantic annotations of sensor data provide semantic interoperability between messages and support semantic reasoning to obtain higher-level actionable knowledge from low-level sensor data.",
"title": ""
},
{
"docid": "e6f8fcdf69ccde7528a3dc60ee0b9907",
"text": "This work provides a forensic analysis method for a directory index in NTFS file system. NTFS employed B-tree indexing for providing efficient storage of many files and fast lookups, which changes in a structure of the directory index when files are operated. As a forensic view point, we observe behaviors of the B-tree to analyze files that once existed in the directory. However, it is difficult to analyze the allocated index entry when the file commands are executed. So, this work treats a forensic method for a directory index, especially when there are a large number of files in the directory. The index entry records are naturally expanded, then we examine how the index entry records are configured in the index tree. And we provide information that how the directory index nodes are changed and how the index entries remain traces in the index entry record with a computer forensic point of view when the files are deleted.",
"title": ""
},
{
"docid": "51dcb89aa02a09a15d41d10a2af0315e",
"text": "In order to combat a variety of pests, pesticides are widely used in fruits. Several extraction procedures (liquid extraction, single drop microextraction, microwave-assisted extraction, pressurized liquid extraction, supercritical fluid extraction, solid-phase extraction, solid-phase microextraction, matrix solid-phase dispersion, and stir bar sorptive extraction) have been reported to determine pesticide residues in fruits and fruit juices. The significant change in recent years is the introduction of the Quick, Easy, Cheap, Effective, Rugged, and Safe (QuEChERS) methods in these matrices analysis. A combination of techniques reported the use of new extraction methods and chromatography to provide better quantitative recoveries at low levels. The use of mass spectrometric detectors in combination with liquid and gas chromatography has played a vital role to solve many problems related to food safety. The main attention in this review is on the achievements that have been possible because of the progress in extraction methods and the latest advances and novelties in mass spectrometry, and how these progresses have influenced the best control of food, allowing for an increase in the food safety and quality standards.",
"title": ""
},
{
"docid": "114e2a9d3b502164ad06cbde59b682b6",
"text": "As the emerging field of machine learning, deep learning shows excellent ability in solving complex learning problems. However, the size of the networks becomes increasingly large scale due to the demands of the practical applications, which poses significant challenge to construct a high performance implementations of deep learning neural networks. In order to improve the performance as well as to maintain the low power cost, in this paper we design deep learning accelerator unit (DLAU), which is a scalable accelerator architecture for large-scale deep learning networks using field-programmable gate array (FPGA) as the hardware prototype. The DLAU accelerator employs three pipelined processing units to improve the throughput and utilizes tile techniques to explore locality for deep learning applications. Experimental results on the state-of-the-art Xilinx FPGA board demonstrate that the DLAU accelerator is able to achieve up to $36.1 {\\times }$ speedup comparing to the Intel Core2 processors, with the power consumption at 234 mW.",
"title": ""
},
{
"docid": "233427420d0ff900736ca0692b281ed5",
"text": "Machine learning is useful for grid-based crime prediction. Many previous studies have examined factors including time, space, and type of crime, but the geographic characteristics of the grid are rarely discussed, leaving prediction models unable to predict crime displacement. This study incorporates the concept of a criminal environment in grid-based crime prediction modeling, and establishes a range of spatial-temporal features based on 84 types of geographic information by applying the Google Places API to theft data for Taoyuan City, Taiwan. The best model was found to be Deep Neural Networks, which outperforms the popular Random Decision Forest, Support Vector Machine, and K-Near Neighbor algorithms. After tuning, compared to our design’s baseline 11-month moving average, the F1 score improves about 7% on 100-by-100 grids. Experiments demonstrate the importance of the geographic feature design for improving performance and explanatory ability. In addition, testing for crime displacement also shows that our model design outperforms the baseline.",
"title": ""
},
{
"docid": "f1212fec5368307451fc6513eadb43ba",
"text": "The finding that very large networks can be trained efficiently and reliably has led to a paradigm shift in computer vision from engineered solutions to learning formulations. As a result, the research challenge shifts from devising algorithms to creating suitable and abundant training data for supervised learning. How to efficiently create such training data? The dominant data acquisition method in visual recognition is based on web data and manual annotation. Yet, for many computer vision problems, such as stereo or optical flow estimation, this approach is not feasible because humans cannot manually enter a pixel-accurate flow field. In this paper, we promote the use of synthetically generated data for the purpose of training deep networks on such tasks. We suggest multiple ways to generate such data and evaluate the influence of dataset properties on the performance and generalization properties of the resulting networks. We also demonstrate the benefit of learning schedules that use different types of data at selected stages of the training process.",
"title": ""
},
{
"docid": "eadba0f4aa52b20b0a512cc3d869146d",
"text": "This paper first describes the phenomenon of Gaussian pulse spread due to numerical dispersion in the finite-difference time-domain (FDTD) method for electromagnetic computation. This effect is undesired, as it reduces the precision with which multipath pulses can be resolved in the time domain. The quantification of the pulse spread is thus useful to evaluate the accuracy of pulsed FDTD simulations. Then, using a linear approximation to the numerical phase delay, a formula to predict the pulse duration is developed. Later, this formula is used to design a Gaussian source that keeps the spread of numerical pulses bounded in wideband FDTD. Finally, the developed model and the approximation are validated via simulations.",
"title": ""
},
{
"docid": "d4f15a40e12d823a943097e08368fec1",
"text": "Wearable or attachable health monitoring smart systems are considered to be the next generation of personal portable devices for remote medicine practices. Smart flexible sensing electronics are components crucial in endowing health monitoring systems with the capability of real-time tracking of physiological signals. These signals are closely associated with body conditions, such as heart rate, wrist pulse, body temperature, blood/intraocular pressure and blood/sweat bio-information. Monitoring such physiological signals provides a convenient and non-invasive way for disease diagnoses and health assessments. This Review summarizes the recent progress of flexible sensing electronics for their use in wearable/attachable health monitoring systems. Meanwhile, we present an overview of different materials and configurations for flexible sensors, including piezo-resistive, piezo-electrical, capacitive, and field effect transistor based devices, and analyze the working principles in monitoring physiological signals. In addition, the future perspectives of wearable healthcare systems and the technical demands on their commercialization are briefly discussed.",
"title": ""
},
{
"docid": "461d0b9ca1d0f1395d98cb18b2f45a0f",
"text": "Semantic maps augment metric-topological maps with meta-information, i.e. semantic knowledge aimed at the planning and execution of high-level robotic tasks. Semantic knowledge typically encodes human-like concepts, like types of objects and rooms, which are connected to sensory data when symbolic representations of percepts from the robot workspace are grounded to those concepts. This symbol grounding is usually carried out by algorithms that individually categorize each symbol and provide a crispy outcome – a symbol is either a member of a category or not. Such approach is valid for a variety of tasks, but it fails at: (i) dealing with the uncertainty inherent to the grounding process, and (ii) jointly exploiting the contextual relations among concepts (e.g. microwaves are usually in kitchens). This work provides a solution for probabilistic symbol grounding that overcomes these limitations. Concretely, we rely on Conditional Random Fields (CRFs) to model and exploit contextual relations, and to provide measurements about the uncertainty coming from the possible groundings in the form of beliefs (e.g. an object can be categorized (grounded) as a microwave or as a nightstand with beliefs 0.6 and 0.4, respectively). Our solution is integrated into a novel semantic map representation called Multiversal Semantic Map (MvSmap ), which keeps the different groundings, or universes, as instances of ontologies annotated with the obtained beliefs for their posterior exploitation. The suitability of our proposal has been proven with the Robot@Home dataset, a repository that contains challenging multi-modal sensory information gathered by a mobile robot in home environments.",
"title": ""
},
{
"docid": "5b0e088e2bddd0535bc9d2dfbfeb0298",
"text": "We had previously shown that regularization principles lead to approximation schemes that are equivalent to networks with one layer of hidden units, called regularization networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known radial basis functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends radial basis functions (RBF) to hyper basis functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of projection pursuit regression, and several types of neural networks. We propose to use the term generalized regularization networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In summary, different multilayer networks with one hidden layer, which we collectively call generalized regularization networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are (1) radial basis functions that can be generalized to hyper basis functions, (2) some tensor product splines, and (3) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions, and several perceptron-like neural networks with one hidden layer.",
"title": ""
},
{
"docid": "3130e666076d119983ac77c5d77d0aed",
"text": "of Ph.D. dissertation, University of Haifa, Israel.",
"title": ""
},
{
"docid": "057a6fc7c761006d49cceea9a05e35e5",
"text": "For large global enterprises, providing adequate resources for organizational acculturation, the process in which employees learn about an organization's culture, remains a challenge. We present results from a survey of 802 users from an enterprise social networking site that identifies two groups of employees (new to the company and geographically distant from headquarters) that perceive higher benefit from using a SNS to learn about the organization's values and beliefs. In addition, we observe regional differences in viewing behaviors between two groups of new employees. These results suggest that a SNS can also potentially contribute to the information-seeking and sense-making activities that underlie organization acculturation.",
"title": ""
},
{
"docid": "357e09114978fc0ac1fb5838b700e6ca",
"text": "Instance level video object segmentation is an important technique for video editing and compression. To capture the temporal coherence, in this paper, we develop MaskRNN, a recurrent neural net approach which fuses in each frame the output of two deep nets for each object instance — a binary segmentation net providing a mask and a localization net providing a bounding box. Due to the recurrent component and the localization component, our method is able to take advantage of long-term temporal structures of the video data as well as rejecting outliers. We validate the proposed algorithm on three challenging benchmark datasets, the DAVIS-2016 dataset, the DAVIS-2017 dataset, and the Segtrack v2 dataset, achieving state-of-the-art performance on all of them.",
"title": ""
},
{
"docid": "a7456ecf7af7e447cdde61f371128965",
"text": "For most deep learning practitioners, sequence modeling is synonymous with recurrent networks. Yet recent results indicate that convolutional architectures can outperform recurrent networks on tasks such as audio synthesis and machine translation. Given a new sequence modeling task or dataset, which architecture should one use? We conduct a systematic evaluation of generic convolutional and recurrent architectures for sequence modeling. The models are evaluated across a broad range of standard tasks that are commonly used to benchmark recurrent networks. Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory. We conclude that the common association between sequence modeling and recurrent networks should be reconsidered, and convolutional networks should be regarded as a natural starting point for sequence modeling tasks. To assist related work, we have made code available at http://github.com/locuslab/TCN.",
"title": ""
},
{
"docid": "8aca118a1171c2c3fd7057468adc84b2",
"text": "Automatically constructing a complete documentary or educational film from scattered pieces of images and knowledge is a significant challenge. Even when this information is provided in an annotated format, the problems of ordering, structuring and animating sequences of images, and producing natural language descriptions that correspond to those images within multiple constraints, are each individually difficult tasks. This paper describes an approach for tackling these problems through a combination of rhetorical structures with narrative and film theory to produce movie-like visual animations from still images along with natural language generation techniques needed to produce text descriptions of what is being seen in the animations. The use of rhetorical structures from NLG is used to integrate separate components for video creation and script generation. We further describe an implementation, named GLAMOUR, that produces actual, short video documentaries, focusing on a cultural heritage domain, and that have been evaluated by professional filmmakers. 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "10ca113b333bf891beff38bd84914324",
"text": "In multi-agent, multi-user environments, users as well as agents should have a means of establishing who is talking to whom. In this paper, we present an experiment aimed at evaluating whether gaze directional cues of users could be used for this purpose. Using an eye tracker, we measured subject gaze at the faces of conversational partners during four-person conversations. Results indicate that when someone is listening or speaking to individuals, there is indeed a high probability that the person looked at is the person listened (p=88%) or spoken to (p=77%). We conclude that gaze is an excellent predictor of conversational attention in multiparty conversations. As such, it may form a reliable source of input for conversational systems that need to establish whom the user is speaking or listening to. We implemented our findings in FRED, a multi-agent conversational system that uses eye input to gauge which agent the user is listening or speaking to.",
"title": ""
},
{
"docid": "9d7e520928aa2fdeab7fbfe4fe2258ed",
"text": "Psychomotor stimulants and neuroleptics exert multiple effects on dopaminergic signaling and produce the dopamine (DA)-related behaviors of motor activation and catalepsy, respectively. However, a clear relationship between dopaminergic activity and behavior has been very difficult to demonstrate in the awake animal, thus challenging existing notions about the mechanism of these drugs. The present study examined whether the drug-induced behaviors are linked to a presynaptic site of action, the DA transporter (DAT) for psychomotor stimulants and the DA autoreceptor for neuroleptics. Doses of nomifensine (7 mg/kg i.p.), a DA uptake inhibitor, and haloperidol (0.5 mg/kg i.p.), a dopaminergic antagonist, were selected to examine characteristic behavioral patterns for each drug: stimulant-induced motor activation in the case of nomifensine and neuroleptic-induced catalepsy in the case of haloperidol. Presynaptic mechanisms were quantified in situ from extracellular DA dynamics evoked by electrical stimulation and recorded by voltammetry in the freely moving animal. In the first experiment, the maximal concentration of electrically evoked DA ([DA](max)) measured in the caudate-putamen was found to reflect the local, instantaneous change in presynaptic DAT or DA autoreceptor activity according to the ascribed action of the drug injected. A positive temporal association was found between [DA](max) and motor activation following nomifensine (r=0.99) and a negative correlation was found between [DA](max) and catalepsy following haloperidol (r=-0.96) in the second experiment. Taken together, the results suggest that a dopaminergic presynaptic site is a target of systemically applied psychomotor stimulants and regulates the postsynaptic action of neuroleptics during behavior. This finding was made possible by a voltammetric microprobe with millisecond temporal resolution and its use in the awake animal to assess release and uptake, two key mechanisms of dopaminergic neurotransmission. Moreover, the results indicate that presynaptic mechanisms may play a more important role in DA-behavior relationships than is currently thought.",
"title": ""
},
{
"docid": "8f494ce7965747ab0f90c1543dd3c02e",
"text": "The world is becoming urban. The UN predicts that the world's urban population will almost double from 3·3 billion in 2007 to 6·3 billion in 2050. Most of this increase will be in developing countries. Exponential urban growth is having a profound effect on global health. Because of international travel and migration, cities are becoming important hubs for the transmission of infectious diseases, as shown by recent pandemics. Physicians in urban environments in developing and developed countries need to be aware of the changes in infectious diseases associated with urbanisation. Furthermore, health should be a major consideration in town planning to ensure urbanisation works to reduce the burden of infectious diseases in the future.",
"title": ""
},
{
"docid": "b008f4477ec7bdb80bc88290a57e5883",
"text": "Artificial Neural networks purport to be biomimetic, but are by definition acyclic computational graphs. As a corollary, neurons in artificial nets fire only once and have no time-dynamics. Both these properties contrast with what neuroscience has taught us about human brain connectivity, especially with regards to object recognition. We therefore propose a way to simulate feedback loops in the brain by unrolling loopy neural networks several timesteps, and investigate the properties of these networks. We compare different variants of loops, including multiplicative composition of inputs and additive composition of inputs. We demonstrate that loopy networks outperform deep feedforward networks with the same number of parameters on the CIFAR-10 dataset, as well as nonloopy versions of the same network, and perform equally well on the MNIST dataset. In order to further understand our models, we visualize neurons in loop layers with guided backprop, demonstrating that the same filters behave increasingly nonlinearly at higher unrolling levels. Furthermore, we interpret loops as attention mechanisms, and demonstrate that the composition of the loop output with the input image produces images that look qualitatively like attention maps.",
"title": ""
}
] | scidocsrr |
b41ab3023e56f4e02ba43c74f2495827 | Crystallize: An Immersive, Collaborative Game for Second Language Learning | [
{
"docid": "ae9d14cfbc20eff358ff71322f4cc8eb",
"text": "One of the key challenges of video game design is teaching new players how to play. Although game developers frequently use tutorials to teach game mechanics, little is known about how tutorials affect game learnability and player engagement. Seeking to estimate this value, we implemented eight tutorial designs in three video games of varying complexity and evaluated their effects on player engagement and retention. The results of our multivariate study of over 45,000 players show that the usefulness of tutorials depends greatly on game complexity. Although tutorials increased play time by as much as 29% in the most complex game, they did not significantly improve player engagement in the two simpler games. Our results suggest that investment in tutorials may not be justified for games with mechanics that can be discovered through experimentation.",
"title": ""
}
] | [
{
"docid": "f66d26379c676880ed23e6eb580c3609",
"text": "Molecular mechanics force fields are widely used in computer-aided drug design for the study of drug candidates interacting with biological systems. In these simulations, the biological part is typically represented by a specialized biomolecular force field, while the drug is represented by a matching general (organic) force field. In order to apply these general force fields to an arbitrary drug-like molecule, functionality for assignment of atom types, parameters, and partial atomic charges is required. In the present article, algorithms for the assignment of parameters and charges for the CHARMM General Force Field (CGenFF) are presented. These algorithms rely on the existing parameters and charges that were determined as part of the parametrization of the force field. Bonded parameters are assigned based on the similarity between the atom types that define said parameters, while charges are determined using an extended bond-charge increment scheme. Charge increments were optimized to reproduce the charges on model compounds that were part of the parametrization of the force field. A \"penalty score\" is returned for every bonded parameter and charge, allowing the user to quickly and conveniently assess the quality of the force field representation of different parts of the compound of interest. Case studies are presented to clarify the functioning of the algorithms and the significance of their output data.",
"title": ""
},
{
"docid": "0c42c99a4d80edf11386909a2582459a",
"text": "Robustness or stability of feature selection techniques is a topic of recent interest, and is an important issue when selected feature subsets are subsequently analysed by domain experts to gain more insight into the problem modelled. In this work, we investigate the use of ensemble feature selection techniques, where multiple feature selection methods are combined to yield more robust results. We show that these techniques show great promise for high-dimensional domains with small sample sizes, and provide more robust feature subsets than a single feature selection technique. In addition, we also investigate the effect of ensemble feature selection techniques on classification performance, giving rise to a new model selection strategy.",
"title": ""
},
{
"docid": "52844cb9280029d5ddec869945b28be2",
"text": "In this work, a new fast dynamic community detection algorithm for large scale networks is presented. Most of the previous community detection algorithms are designed for static networks. However, large scale social networks are dynamic and evolve frequently over time. To quickly detect communities in dynamic large scale networks, we proposed dynamic modularity optimizer framework (DMO) that is constructed by modifying well-known static modularity based community detection algorithm. The proposed framework is tested using several different datasets. According to our results, community detection algorithms in the proposed framework perform better than static algorithms when large scale dynamic networks are considered.",
"title": ""
},
{
"docid": "7605ae0f6c5148195caa33c54e8e7a1b",
"text": "Recently Dutch government, as well as many other governments around the world, has digitized a major portion of its public services. With this development electronic services finally arrive at the transaction level. The risks of electronic services on the transactional level are more profound than at the informational level. The public needs to trust the integrity and ‘information management capacities’ of the government or other involved organizations, as well as trust the infrastructure and those managing the infrastructure. In this process, the individual citizen will have to decide to adopt the new electronic government services by weighing its benefits and risks. In this paper, we present a study which aims to identify the role of risk perception and trust in the intention to adopt government e-services. In January 2003, a sample of 238 persons completed a questionnaire. The questionnaire tapped people’s intention to adopt e-government electronic services. Based on previous research and theories on technology acceptance, the questionnaire measured perceived usefulness of e-services, risk perception, worry, perceived behavioural control, subjective norm, trust and experience with e-services. Structural equation modelling was used to further analyze the data (Amos) and to design a theoretical model predicting the individual’s intention to adopt e-services. This analysis showed that the perceived usefulness of electronic services in general is the main determinant of the intention to use e-government services. Risk perception, personal experience, perceived behavioural control and subjective norm were found to significantly predict the perceived usefulness of electronic services in general, while trust in e-government was the main determinant of the perceived usefulness of e-government services. 2006 Elsevier Ltd. All rights reserved. 0747-5632/$ see front matter 2006 Elsevier Ltd. All rights reserved. doi:10.1016/j.chb.2005.11.003 * Corresponding author. E-mail addresses: Margot.Kuttschreuter@utwente.nl (M. Kuttschreuter), J.M.Gutteling@utwente.nl (J.M. Gutteling). M. Horst et al. / Computers in Human Behavior 23 (2007) 1838–1852 1839",
"title": ""
},
{
"docid": "102ed07783d46a8ebadcad4b30ccb3c8",
"text": "Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.",
"title": ""
},
{
"docid": "7abad18b2ddc66b07267ef76b109d1c9",
"text": "Modern applications for distributed publish/subscribe systems often require stream aggregation capabilities along with rich data filtering. When compared to other distributed systems, aggregation in pub/sub differentiates itself as a complex problem which involves dynamic dissemination paths that are difficult to predict and optimize for a priori, temporal fluctuations in publication rates, and the mixed presence of aggregated and non-aggregated workloads. In this paper, we propose a formalization for the problem of minimizing communication traffic in the context of aggregation in pub/sub. We present a solution to this minimization problem by using a reduction to the well-known problem of minimum vertex cover in a bipartite graph. This solution is optimal under the strong assumption of complete knowledge of future publications. We call the resulting algorithm \"Aggregation Decision, Optimal with Complete Knowledge\" (ADOCK). We also show that under a dynamic setting without full knowledge, ADOCK can still be applied to produce a low, yet not necessarily optimal, communication cost. We also devise a computationally cheaper dynamic approach called \"Aggregation Decision with Weighted Publication\" (WAD). We compare our solutions experimentally using two real datasets and explore the trade-offs with respect to communication and computation costs.",
"title": ""
},
{
"docid": "e9b7eba9f15440ec7112a1938fad1473",
"text": "Recovery is not a new concept within mental health, although in recent times, it has come to the forefront of the policy agenda. However, there is no universal definition of recovery, and it is a contested concept. The aim of this study was to examine the British literature relating to recovery in mental health. Three contributing groups are identified: service users, health care providers and policy makers. A review of the literature was conducted by accessing all relevant published texts. A search was conducted using these terms: 'recovery', 'schizophrenia', 'psychosis', 'mental illness' and 'mental health'. Over 170 papers were reviewed. A thematic analysis was conducted. Six main themes emerged, which were examined from the perspective of the stakeholder groups. The dominant themes were identity, the service provision agenda, the social domain, power and control, hope and optimism, risk and responsibility. Consensus was found around the belief that good quality care should be made available to service users to promote recovery both as inpatient or in the community. However, the manner in which recovery was defined and delivered differed between the groups.",
"title": ""
},
{
"docid": "31bd49d9287ceaead298c4543c5b3c53",
"text": "In this paper, an experimental self-teaching system capable of superimposing audio-visual information to support the process of learning to play the guitar is proposed. Different learning scenarios have been carefully designed according to diverse levels of experience and understanding and are presented in a simple way. Learners can select between representative numbers of scenarios and physically interact with the audio-visual information in a natural way. Audio-visual information can be placed anywhere on a physical space and multiple sound sources can be mixed to experiment with compositions and compilations. To assess the effectiveness of the system some initial evaluation is conducted. Finally conclusions and future work of the system are summarized. Categories: augmented reality, information visualisation, human-computer interaction, learning.",
"title": ""
},
{
"docid": "37d2671c9d89ce5a1c1957bd1490f944",
"text": "In some of object recognition problems, labeled data may not be available for all categories. Zero-shot learning utilizes auxiliary information (also called signatures) d escribing each category in order to find a classifier that can recognize samples from categories with no labeled instance . In this paper, we propose a novel semi-supervised zero-shot learning method that works on an embedding space corresponding to abstract deep visual features. We seek a linear transformation on signatures to map them onto the visual features, such that the mapped signatures of the seen classe s are close to labeled samples of the corresponding classes and unlabeled data are also close to the mapped signatures of one of the unseen classes. We use the idea that the rich deep visual features provide a representation space in whic h samples of each class are usually condensed in a cluster. The effectiveness of the proposed method is demonstrated through extensive experiments on four public benchmarks improving the state-of-the-art prediction accuracy on thr ee of them.",
"title": ""
},
{
"docid": "35f6a4ee2364aea9861b7606c8cb7d40",
"text": "The research on robust principal component analysis (RPCA) has been attracting much attention recently. The original RPCA model assumes sparse noise, and use the L1-norm to characterize the error term. In practice, however, the noise is much more complex and it is not appropriate to simply use a certainLp-norm for noise modeling. We propose a generative RPCA model under the Bayesian framework by modeling data noise as a mixture of Gaussians (MoG). The MoG is a universal approximator to continuous distributions and thus our model is able to fit a wide range of noises such as Laplacian, Gaussian, sparse noises and any combinations of them. A variational Bayes algorithm is presented to infer the posterior of the proposed model. All involved parameters can be recursively updated in closed form. The advantage of our method is demonstrated by extensive experiments on synthetic data, face modeling and background subtraction.",
"title": ""
},
{
"docid": "c7f6a99df60e96c98862e366c4bc3646",
"text": "Doppio is a reconfigurable smartwatch with two touch sensitive display faces. The orientation of the top relative to the base and how the top is attached to the base, creates a very large interaction space. We define and enumerate possible configurations, transitions, and manipulations in this space. Using a passive prototype, we conduct an exploratory study to probe how people might use this style of smartwatch interaction. With an instrumented prototype, we conduct a controlled experiment to evaluate the transition times between configurations and subjective preferences. We use the combined results of these two studies to generate a set of characteristics and design considerations for applying this interaction space to smartwatch applications. These considerations are illustrated with a proof-of-concept hardware prototype demonstrating how Doppio interactions can be used for notifications, private viewing, task switching, temporary information access, application launching, application modes, input, and sharing the top.",
"title": ""
},
{
"docid": "6bbc32ecaf54b9a51442f92edbc2604a",
"text": "Artificial bee colony (ABC), an optimization algorithm is a recent addition to the family of population based search algorithm. ABC has taken its inspiration from the collective intelligent foraging behavior of honey bees. In this study we have incorporated golden section search mechanism in the structure of basic ABC to improve the global convergence and prevent to stick on a local solution. The proposed variant is termed as ILS-ABC. Comparative numerical results with the state-of-art algorithms show the performance of the proposal when applied to the set of unconstrained engineering design problems. The simulated results show that the proposed variant can be successfully applied to solve real life problems.",
"title": ""
},
{
"docid": "2dc23ce5b1773f12905ebace6ef221a5",
"text": "With the increasing demand for higher data rates and more reliable service capabilities for wireless devices, wireless service providers are facing an unprecedented challenge to overcome a global bandwidth shortage. Early global activities on beyond fourth-generation (B4G) and fifth-generation (5G) wireless communication systems suggest that millimeter-wave (mmWave) frequencies are very promising for future wireless communication networks due to the massive amount of raw bandwidth and potential multigigabit-per-second (Gb/s) data rates [1]?[3]. Both industry and academia have begun the exploration of the untapped mmWave frequency spectrum for future broadband mobile communication networks. In April 2014, the Brooklyn 5G Summit [4], sponsored by Nokia and the New York University (NYU) WIRELESS research center, drew global attention to mmWave communications and channel modeling. In July 2014, the IEEE 802.11 next-generation 60-GHz study group was formed to increase the data rates to over 20 Gb/s in the unlicensed 60-GHz frequency band while maintaining backward compatibility with the emerging IEEE 802.11ad wireless local area network (WLAN) standard [5].",
"title": ""
},
{
"docid": "34989468dace8410e9b7b68f0fd78a96",
"text": "A novel coplanar waveguide (CPW)-fed triband planar monopole antenna is presented for WLAN/WiMAX applications. The monopole antenna is printed on a substrate with two rectangular corners cut off. The radiator of the antenna is very compact with an area of only 3.5 × 17 mm2, on which two inverted-L slots are etched to achieve three radiating elements so as to produce three resonant modes for triband operation. With simple structure and small size, the measured and simulated results show that the proposed antenna has 10-dB impedance bandwidths of 120 MHz (2.39-2.51 GHz), 340 MHz (3.38-3.72 GHz), and 1450 MHz (4.79-6.24 GHz) to cover all the 2.4/5.2/5.8-GHz WLAN and the 3.5/5.5-GHz WiMAX bands, and good dipole-like radiation characteristics are obtained over the operating bands.",
"title": ""
},
{
"docid": "5a58ab9fe86a4d0693faacfc238fb35c",
"text": "Mobile Cloud Computing (MCC) bridges the gap between limited capabilities of mobile devices and the increasing complexity of mobile applications, by offloading the computational workloads from local devices to the cloud. Current research supports workload offloading through appropriate application partitioning and remote method execution, but generally ignores the impact of wireless network characteristics on such offloading. Wireless data transmissions incurred by remote method execution consume a large amount of additional energy during transmission intervals when the network interface stays in the high-power state, and deferring these transmissions increases the response delay of mobile applications. In this paper, we adaptively balance the tradeoff between energy efficiency and responsiveness of mobile applications by developing application-aware wireless transmission scheduling algorithms. We take both causality and run-time dynamics of application method executions into account when deferring wireless transmissions, so as to minimize the wireless energy cost and satisfy the application delay constraint with respect to the practical system contexts. Systematic evaluations show that our scheme significantly improves the energy efficiency of workload offloading over realistic smartphone applications.",
"title": ""
},
{
"docid": "2f23d51ffd54a6502eea07883709d016",
"text": "Named entity recognition (NER) is a popular domain of natural language processing. For this reason, many tools exist to perform this task. Amongst other points, they differ in the processing method they rely upon, the entity types they can detect, the nature of the text they can handle, and their input/output formats. This makes it difficult for a user to select an appropriate NER tool for a specific situation. In this article, we try to answer this question in the context of biographic texts. For this matter, we first constitute a new corpus by annotating 247 Wikipedia articles. We then select 4 publicly available, well known and free for research NER tools for comparison: Stanford NER, Illinois NET, OpenCalais NER WS and Alias-i LingPipe. We apply them to our corpus, assess their performances and compare them. When considering overall performances, a clear hierarchy emerges: Stanford has the best results, followed by LingPipe, Illionois and OpenCalais. However, a more detailed evaluation performed relatively to entity types and article categories highlights the fact their performances are diversely influenced by those factors. This complementarity opens an interesting perspective regarding the combination of these individual tools in order to improve performance.",
"title": ""
},
{
"docid": "02d8ad18b07d08084764d124dc74a94c",
"text": "The large number of potential applications from bridging web data with knowledge bases have led to an increase in the entity linking research. Entity linking is the task to link entity mentions in text with their corresponding entities in a knowledge base. Potential applications include information extraction, information retrieval, and knowledge base population. However, this task is challenging due to name variations and entity ambiguity. In this survey, we present a thorough overview and analysis of the main approaches to entity linking, and discuss various applications, the evaluation of entity linking systems, and future directions.",
"title": ""
},
{
"docid": "46563eaa34dd45c861c774bd9f13d1b6",
"text": "The energy constraint is one of the inherent defects of the Wireless Sensor Networks (WSNs). How to prolong the lifespan of the network has attracted more and more attention. Numerous achievements have emerged successively recently. Among these mechanisms designing routing protocols is one of the most promising ones owing to the large amount of energy consumed for data transmission. The background and related works are described firstly in detail in this paper. Then a game model for selecting the Cluster Head is presented. Subsequently, a novel routing protocol named Game theory based Energy Efficient Clustering routing protocol (GEEC) is proposed. GEEC, which belongs to a kind of clustering routing protocols, adopts evolutionary game theory mechanism to achieve energy exhaust equilibrium as well as lifetime extension at the same time. Finally, extensive simulation experiments are conducted. The experimental results indicate that a significant improvement in energy balance as well as in energy conservation compared with other two kinds of well-known clustering routing protocols is achieved.",
"title": ""
},
{
"docid": "027a5da45d41ce5df40f6b342a9e4485",
"text": "GPipe is a scalable pipeline parallelism library that enables learning of giant deep neural networks. It partitions network layers across accelerators and pipelines execution to achieve high hardware utilization. It leverages recomputation to minimize activation memory usage. For example, using partitions over 8 accelerators, it is able to train networks that are 25× larger, demonstrating its scalability. It also guarantees that the computed gradients remain consistent regardless of the number of partitions. It achieves an almost linear speedup without any changes in the model parameters: when using 4× more accelerators, training the same model is up to 3.5× faster. We train a 557 million parameters AmoebaNet model and achieve a new state-ofthe-art 84.3% top-1 / 97.0% top-5 accuracy on ImageNet 2012 dataset. Finally, we use this learned model to finetune multiple popular image classification datasets and obtain competitive results, including pushing the CIFAR-10 accuracy to 99% and CIFAR-100 accuracy to 91.3%.",
"title": ""
},
{
"docid": "a1d0bf0d28bbe3dd568e7e01bc9d59c3",
"text": "A novel coupling technique for circularly polarized annular-ring patch antenna is developed and discussed. The circular polarization (CP) radiation of the annular-ring patch antenna is achieved by a simple microstrip feed line through the coupling of a fan-shaped patch on the same plane of the antenna. Proper positioning of the coupling fan-shaped patch excites two orthogonal resonant modes with 90 phase difference, and a pure circular polarization is obtained. The dielectric material is a cylindrical block of ceramic with a permittivity of 25 and that reduces the size of the antenna. The prototype has been designed and fabricated and found to have an impedance bandwidth of 2.3% and a 3 dB axial-ratio bandwidth of about 0.6% at the center frequency of 2700 MHz. The characteristics of the proposed antenna have been by simulation software HFSS and experiment. The measured and simulated results are in good agreement.",
"title": ""
}
] | scidocsrr |
f3ffbaafd9085526f906a7fb90ac3558 | Fast camera calibration for the analysis of sport sequences | [
{
"docid": "cfadde3d2e6e1d6004e6440df8f12b5a",
"text": "We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses the line markings of the court for calibration and it can be applied to a variety of different sports since the geometric model of the court can be specified by the user. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a-priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture restrictions. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the following input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.",
"title": ""
}
] | [
{
"docid": "36759b5da620f3b1c870c65e16aa2b44",
"text": "Frama-C is a source code analysis platform that aims at conducting verification of industrial-size C programs. It provides its users with a collection of plug-ins that perform static analysis, deductive verification, and testing, for safety- and security-critical software. Collaborative verification across cooperating plug-ins is enabled by their integration on top of a shared kernel and datastructures, and their compliance to a common specification language. This foundational article presents a consolidated view of the platform, its main and composite analyses, and some of its industrial achievements.",
"title": ""
},
{
"docid": "76cedf5536bd886b5838c2a5e027de79",
"text": "This article reports a meta-analysis of personality-academic performance relationships, based on the 5-factor model, in which cumulative sample sizes ranged to over 70,000. Most analyzed studies came from the tertiary level of education, but there were similar aggregate samples from secondary and tertiary education. There was a comparatively smaller sample derived from studies at the primary level. Academic performance was found to correlate significantly with Agreeableness, Conscientiousness, and Openness. Where tested, correlations between Conscientiousness and academic performance were largely independent of intelligence. When secondary academic performance was controlled for, Conscientiousness added as much to the prediction of tertiary academic performance as did intelligence. Strong evidence was found for moderators of correlations. Academic level (primary, secondary, or tertiary), average age of participant, and the interaction between academic level and age significantly moderated correlations with academic performance. Possible explanations for these moderator effects are discussed, and recommendations for future research are provided.",
"title": ""
},
{
"docid": "d5f43b7405e08627b7f0930cc1ddd99e",
"text": "Source code duplication, commonly known as code cloning, is considered an obstacle to software maintenance because changes to a cloned region often require consistent changes to other regions of the source code. Research has provided evidence that the elimination of clones may not always be practical, feasible, or cost-effective. We present a clone management approach that describes clone regions in a robust way that is independent from the exact text of clone regions or their location in a file, and that provides support for tracking clones in evolving software. Our technique relies on the concept of abstract clone region descriptors (CRDs), which describe clone regions using a combination of their syntactic, structural, and lexical information. We present our definition of CRDs, and describe a clone tracking system capable of producing CRDs from the output of different clone detection tools, notifying developers of modifications to clone regions, and supporting updates to the documented clone relationships. We evaluated the performance and usefulness of our approach across three clone detection tools and five subject systems, and the results indicate that CRDs are a practical and robust representation for tracking code clones in evolving software.",
"title": ""
},
{
"docid": "c2334008c6a07cbd3b3d89dc01ddc02d",
"text": "Four Cucumber mosaic virus (CMV) (CMV-HM 1–4) and nine Tomato mosaic virus (ToMV) (ToMV AH 1–9) isolates detected in tomato samples collected from different governorates in Egypt during 2014, were here characterized. According to the coat protein gene sequence and to the complete nucleotide sequence of total genomic RNA1, RNA2 and RNA3 of CMV-HM3 the new Egyptian isolates are related to members of the CMV subgroup IB. The nine ToMV Egyptian isolates were characterized by sequence analysis of the coat protein and the movement protein genes. All isolates were grouped within the same branch and showed high relatedness to all considered isolates (98–99%). Complete nucleotide sequence of total genomic RNA of ToMV AH4 isolate was obtained and its comparison showed a closer degree of relatedness to isolate 99–1 from the USA (99%). To our knowledge, this is the first report of CMV isolates from subgroup IB in Egypt and the first full length sequencing of an ToMV Egyptian isolate.",
"title": ""
},
{
"docid": "0ce9e025b0728adc245759580330e7f5",
"text": "We present a unified framework for dense correspondence estimation, called Homography flow, to handle large photometric and geometric deformations in an efficient manner. Our algorithm is inspired by recent successes of the sparse to dense framework. The main intuition is that dense flows located in same plane can be represented as a single geometric transform. Tailored to dense correspondence task, the Homography flow differs from previous methods in the flow domain clustering and the trilateral interpolation. By estimating and propagating sparsely estimated transforms, dense flow field is estimated with very low computation time. The Homography flow highly improves the performance of dense correspondences, especially in flow discontinuous area. Experimental results on challenging image pairs show that our approach suppresses the state-of-the-art algorithms in both accuracy and computation time.",
"title": ""
},
{
"docid": "d94a4f07939c0f420787b099336f426b",
"text": "A next generation of AESA antennas will be challenged with the need for lower size, weight, power and cost (SWAP-C). This leads to enhanced demands especially with regard to the integration density of the RF-part inside a T/R module. The semiconductor material GaN has proven its capacity for high power amplifiers, robust receive components as well as switch components for separation of transmit and receive mode. This paper will describe the design and measurement results of a GaN-based single-chip T/R module frontend (HPA, LNA and SPDT) using UMS GH25 technology and covering the frequency range from 8 GHz to 12 GHz. Key performance parameters of the frontend are 13 W minimum transmit (TX) output power over the whole frequency range with peak power up to 17 W. The frontend in receive (RX) mode has a noise figure below 3.2 dB over the whole frequency range, and can survive more than 5 W input power. The large signal insertion loss of the used SPDT is below 0.9 dB at 43 dBm input power level.",
"title": ""
},
{
"docid": "13cfc33bd8611b3baaa9be37ea9d627e",
"text": "Some of the more difficult to define aspects of the therapeutic process (empathy, compassion, presence) remain some of the most important. Teaching them presents a challenge for therapist trainees and educators alike. In this study, we examine our beginning practicum students' experience of learning mindfulness meditation as a way to help them develop therapeutic presence. Through thematic analysis of their journal entries a variety of themes emerged, including the effects of meditation practice, the ability to be present, balancing being and doing modes in therapy, and the development of acceptance and compassion for themselves and for their clients. Our findings suggest that mindfulness meditation may be a useful addition to clinical training.",
"title": ""
},
{
"docid": "03625364ccde0155f2c061b47e3a00b8",
"text": "The computation of selectional preferences, the admissible argument values for a relation, is a well-known NLP task with broad applicability. We present LDA-SP, which utilizes LinkLDA (Erosheva et al., 2004) to model selectional preferences. By simultaneously inferring latent topics and topic distributions over relations, LDA-SP combines the benefits of previous approaches: like traditional classbased approaches, it produces humaninterpretable classes describing each relation’s preferences, but it is competitive with non-class-based methods in predictive power. We compare LDA-SP to several state-ofthe-art methods achieving an 85% increase in recall at 0.9 precision over mutual information (Erk, 2007). We also evaluate LDA-SP’s effectiveness at filtering improper applications of inference rules, where we show substantial improvement over Pantel et al.’s system (Pantel et al., 2007).",
"title": ""
},
{
"docid": "779fba8ff7f59d3571cfe4c1803671e3",
"text": "This paper describes the design of an indirect current feedback Instrumentation Amplifier (IA). Transistor sizing plays a major role in achieving the desired gain, the Common Mode Rejection Ratio (CMRR) and the bandwidth of the Instrumentation Amplifier. A gm/ID based design methodology is employed to design the functional blocks of the IA. It links the design variables of each functional block to its target specifications and is used to develop design charts that are used to accurately size the transistors. The IA thus designed achieves a voltage gain of 31dB with a bandwidth 1.2MHz and a CMRR of 87dB at 1MHz. The circuit design is carried out using 0.18μm CMOS process.",
"title": ""
},
{
"docid": "b1a508ecaa6fef0583b430fc0074af74",
"text": "Recent past has seen a lot of developments in the field of image-based dietary assessment. Food image classification and recognition are crucial steps for dietary assessment. In the last couple of years, advancements in the deep learning and convolutional neural networks proved to be a boon for the image classification and recognition tasks, specifically for food recognition because of the wide variety of food items. In this paper, we report experiments on food/non-food classification and food recognition using a GoogLeNet model based on deep convolutional neural network. The experiments were conducted on two image datasets created by our own, where the images were collected from existing image datasets, social media, and imaging devices such as smart phone and wearable cameras. Experimental results show a high accuracy of 99.2% on the food/non-food classification and 83.6% on the food category recognition.",
"title": ""
},
{
"docid": "755820a345dea56c4631ee14467e2e41",
"text": "This paper presents a novel six-axis force/torque (F/T) sensor for robotic applications that is self-contained, rugged, and inexpensive. Six capacitive sensor cells are adopted to detect three normal and three shear forces. Six sensor cell readings are converted to F/T information via calibrations and transformation. To simplify the manufacturing processes, a sensor design with parallel and orthogonal arrangements of sensing cells is proposed, which achieves the large improvement of the sensitivity. Also, the signal processing is realized with a single printed circuit board and a ground plate, and thus, we make it possible to build a lightweight six-axis F/T sensor with simple manufacturing processes at extremely low cost. The sensor is manufactured and its performances are validated by comparing them with a commercial six-axis F/T sensor.",
"title": ""
},
{
"docid": "a07338beeb3246954815e0389c59ae29",
"text": "We have proposed gate-all-around Silicon nanowire MOSFET (SNWFET) on bulk Si as an ultimate transistor. Well controlled processes are used to achieve gate length (LG) of sub-10nm and narrow nanowire widths. Excellent performance with reasonable VTH and short channel immunity are achieved owing to thin nanowire channel, self-aligned gate, and GAA structure. Transistor performance with gate length of 10nm has been demonstrated and nanowire size (DNW) dependency of various electrical characteristics has been investigated. Random telegraph noise (RTN) in SNWFET is studied as well.",
"title": ""
},
{
"docid": "17f0fbd3ab3b773b5ef9d636700b5af6",
"text": "Motor sequence learning is a process whereby a series of elementary movements is re-coded into an efficient representation for the entire sequence. Here we show that human subjects learn a visuomotor sequence by spontaneously chunking the elementary movements, while each chunk acts as a single memory unit. The subjects learned to press a sequence of 10 sets of two buttons through trial and error. By examining the temporal patterns with which subjects performed a visuomotor sequence, we found that the subjects performed the 10 sets as several clusters of sets, which were separated by long time gaps. While the overall performance time decreased by repeating the same sequence, the clusters became clearer and more consistent. The cluster pattern was uncorrelated with the distance of hand movements and was different across subjects who learned the same sequence. We then split a learned sequence into three segments, while preserving or destroying the clusters in the learned sequence, and shuffled the segments. The performance on the shuffled sequence was more accurate and quicker when the clusters in the original sequence were preserved than when they were destroyed. The results suggest that each cluster is processed as a single memory unit, a chunk, and is necessary for efficient sequence processing. A learned visuomotor sequence is hierarchically represented as chunks that contain several elementary movements. We also found that the temporal patterns of sequence performance transferred from the nondominant to dominant hand, but not vice versa. This may suggest a role of the dominant hemisphere in storage of learned chunks. Together with our previous unit-recording and imaging studies that used the same learning paradigm, we predict specific roles of the dominant parietal area, basal ganglia, and presupplementary motor area in the chunking.",
"title": ""
},
{
"docid": "2ab8c692ef55d2501ff61f487f91da9c",
"text": "A common discussion subject for the male part of the population in particular, is the prediction of next weekend’s soccer matches, especially for the local team. Knowledge of offensive and defensive skills is valuable in the decision process before making a bet at a bookmaker. In this article we take an applied statistician’s approach to the problem, suggesting a Bayesian dynamic generalised linear model to estimate the time dependent skills of all teams in a league, and to predict next weekend’s soccer matches. The problem is more intricate than it may appear at first glance, as we need to estimate the skills of all teams simultaneously as they are dependent. It is now possible to deal with such inference problems using the iterative simulation technique known as Markov Chain Monte Carlo. We will show various applications of the proposed model based on the English Premier League and Division 1 1997-98; Prediction with application to betting, retrospective analysis of the final ranking, detection of surprising matches and how each team’s properties vary during the season.",
"title": ""
},
{
"docid": "84e71d32b1f40eb59d63a0ec6324d79b",
"text": "Typically a classifier trained on a given dataset (source domain) does not performs well if it is tested on data acquired in a different setting (target domain). This is the problem that domain adaptation (DA) tries to overcome and, while it is a well explored topic in computer vision, it is largely ignored in robotic vision where usually visual classification methods are trained and tested in the same domain. Robots should be able to deal with unknown environments, recognize objects and use them in the correct way, so it is important to explore the domain adaptation scenario also in this context. The goal of the project is to define a benchmark and a protocol for multimodal domain adaptation that is valuable for the robot vision community. With this purpose some of the state-of-the-art DA methods are selected: Deep Adaptation Network (DAN), Domain Adversarial Training of Neural Network (DANN), Automatic Domain Alignment Layers (AutoDIAL) and Adversarial Discriminative Domain Adaptation (ADDA). Evaluations have been done using different data types: RGB only, depth only and RGB-D over the following datasets, designed for the robotic community: RGB-D Object Dataset (ROD), Web Object Dataset (WOD), Autonomous Robot Indoor Dataset (ARID), Big Berkeley Instance Recognition Dataset (BigBIRD) and Active Vision Dataset. Although progresses have been made on the formulation of effective adaptation algorithms and more realistic object datasets are available, the results obtained show that, training a sufficiently good object classifier, especially in the domain adaptation scenario, is still an unsolved problem. Also the best way to combine depth with RGB informations to improve the performance is a point that needs to be investigated more.",
"title": ""
},
{
"docid": "37d353f5b8f0034209f75a3848580642",
"text": "(NR) is the first interactive data repository with a web-based platform for visual interactive analytics. Unlike other data repositories (e.g., UCI ML Data Repository, and SNAP), the network data repository (networkrepository.com) allows users to not only download, but to interactively analyze and visualize such data using our web-based interactive graph analytics platform. Users can in real-time analyze, visualize, compare, and explore data along many different dimensions. The aim of NR is to make it easy to discover key insights into the data extremely fast with little effort while also providing a medium for users to share data, visualizations, and insights. Other key factors that differentiate NR from the current data repositories is the number of graph datasets, their size, and variety. While other data repositories are static, they also lack a means for users to collaboratively discuss a particular dataset, corrections, or challenges with using the data for certain applications. In contrast, NR incorporates many social and collaborative aspects that facilitate scientific research, e.g., users can discuss each graph, post observations, and visualizations.",
"title": ""
},
{
"docid": "4c9313e27c290ccc41f3874108593bf6",
"text": "Very few standards exist for fitting products to people. Footwear is a noteworthy example. This study is an attempt to evaluate the quality of footwear fit using two-dimensional foot outlines. Twenty Hong Kong Chinese students participated in an experiment that involved three pairs of dress shoes and one pair of athletic shoes. The participants' feet were scanned using a commercial laser scanner, and each participant wore and rated the fit of each region of each shoe. The shoe lasts were also scanned and were used to match the foot scans with the last scans. The ANOVA showed significant (p < 0.05) differences among the four pairs of shoes for the overall, fore-foot and rear-foot fit ratings. There were no significant differences among shoes for mid-foot fit rating. These perceived differences were further analysed after matching the 2D outlines of both last and feet. The point-wise dimensional difference between foot and shoe outlines were computed and analysed after normalizing with foot perimeter. The dimensional difference (DD) plots along the foot perimeter showed that fore-foot fit was strongly correlated (R(2) > 0.8) with two of the minimums in the DD-plot while mid-foot fit was strongly correlated (R(2) > 0.9) with the dimensional difference around the arch region and a point on the lateral side of the foot. The DD-plots allow the designer to determine the critical locations that may affect footwear fit in addition to quantifying the nature of misfit so that design changes to shape and material may be possible.",
"title": ""
},
{
"docid": "e2bdc37afbe20e8281aaae302ed4cd7e",
"text": "Some obtained results related to an ongoing project which aims at providing a comprehensive approach for implementation of Internet of Things concept into the military domain are presented. A comprehensive approach to fault diagnosis within the Internet of Military Things was outlined. Particularly a method of fault detection which is based on a network partitioning into clusters was proposed. Also, some solutions proposed for the experimentally constructed network called EFTSN was conducted.",
"title": ""
},
{
"docid": "112931102c7c68e6e1e056f18593dbbc",
"text": "Graphical passwords were proposed as an alternative to overcome the inherent limitations of text-based passwords, inspired by research that shows that the graphical memory of humans is particularly well developed. A graphical password scheme that has been widely adopted is the Android Unlock Pattern, a special case of the Pass-Go scheme with grid size restricted to 3x3 points and restricted stroke count.\n In this paper, we study the security of Android unlock patterns. By performing a large-scale user study, we measure actual user choices of patterns instead of theoretical considerations on password spaces. From this data we construct a model based on Markov chains that enables us to quantify the strength of Android unlock patterns. We found empirically that there is a high bias in the pattern selection process, e.g., the upper left corner and three-point long straight lines are very typical selection strategies. Consequently, the entropy of patterns is rather low, and our results indicate that the security offered by the scheme is less than the security of only three digit randomly-assigned PINs for guessing 20% of all passwords (i.e., we estimate a partial guessing entropy G_0.2 of 9.10 bit).\n Based on these insights, we systematically improve the scheme by finding a small, but still effective change in the pattern layout that makes graphical user logins substantially more secure. By means of another user study, we show that some changes improve the security by more than doubling the space of actually used passwords (i.e., increasing the partial guessing entropy G_0.2 to 10.81 bit).",
"title": ""
},
{
"docid": "ef3598b448179b7a788444193bc77d62",
"text": "The human visual system has the remarkably ability to be able to effortlessly learn novel concepts from only a few examples. Mimicking the same behavior on machine learning vision systems is an interesting and very challenging research problem with many practical advantages on real world vision applications. In this context, the goal of our work is to devise a few-shot visual learning system that during test time it will be able to efficiently learn novel categories from only a few training data while at the same time it will not forget the initial categories on which it was trained (here called base categories). To achieve that goal we propose (a) to extend an object recognition system with an attention based few-shot classification weight generator, and (b) to redesign the classifier of a ConvNet model as the cosine similarity function between feature representations and classification weight vectors. The latter, apart from unifying the recognition of both novel and base categories, it also leads to feature representations that generalize better on \"unseen\" categories. We extensively evaluate our approach on Mini-ImageNet where we manage to improve the prior state-of-the-art on few-shot recognition (i.e., we achieve 56.20% and 73.00% on the 1-shot and 5-shot settings respectively) while at the same time we do not sacrifice any accuracy on the base categories, which is a characteristic that most prior approaches lack. Finally, we apply our approach on the recently introduced few-shot benchmark of Bharath and Girshick [4] where we also achieve state-of-the-art results.",
"title": ""
}
] | scidocsrr |
f8ff4af53146346ade9faab31db52040 | A comparative study of control techniques for three phase PWM rectifier | [
{
"docid": "714641a148e9a5f02bb13d5485203d70",
"text": "The aim of this paper is to present a review of recently used current control techniques for three-phase voltagesource pulsewidth modulated converters. Various techniques, different in concept, have been described in two main groups: linear and nonlinear. The first includes proportional integral stationary and synchronous) and state feedback controllers, and predictive techniques with constant switching frequency. The second comprises bang-bang (hysteresis, delta modulation) controllers and predictive controllers with on-line optimization. New trends in the current control—neural networks and fuzzy-logicbased controllers—are discussed, as well. Selected oscillograms accompany the presentation in order to illustrate properties of the described controller groups.",
"title": ""
}
] | [
{
"docid": "08affba6a0b34574e9532bb75b79c74f",
"text": "In general, the position control of electro-hydraulic actuator (EHA) systems is difficult because of system uncertainties such as Coulomb friction, viscous friction, and pump leakage coefficient. Even if the exact values of the friction and pump leakage coefficient may be obtained through experiment, the identification procedure is very complicated and requires much effort. In addition, the identified values may not guarantee the reliability of systems because of the variation of the operating condition. Therefore, in this paper, an adaptive backstepping control (ABSC) scheme is proposed to overcome the problem of system uncertainties effectively and to improve the tracking performance of EHA systems. In order to implement the proposed control scheme, the system uncertainties in EHA systems are considered as only one term. In addition, in order to obtain the virtual controls for stabilizing the closed-loop system, the update rule for the system uncertainty term is induced by the Lyapunov control function (LCF). To verify the performance and robustness of the proposed control system, computer simulation of the proposed control system is executed first and the proposed control scheme is implemented for an EHA system by experiment. From the computer simulation and experimental results, it was found that the ABSC system produces the desired tracking performance and has robustness to the system uncertainties of EHA systems.",
"title": ""
},
{
"docid": "e9e11d96e26708c380362847094113db",
"text": "Orthogonal frequency-division multiplexing (OFDM) is a modulation technology that has been widely adopted in many new and emerging broadband wireless and wireline communication systems. Due to its capability to transmit a high-speed data stream using multiple spectral-overlapped lower-speed subcarriers, OFDM technology offers superior advantages of high spectrum efficiency, robustness against inter-carrier and inter-symbol interference, adaptability to server channel conditions, etc. In recent years, there have been intensive studies on optical OFDM (O-OFDM) transmission technologies, and it is considered a promising technology for future ultra-high-speed optical transmission. Based on O-OFDM technology, a novel elastic optical network architecture with immense flexibility and scalability in spectrum allocation and data rate accommodation could be built to support diverse services and the rapid growth of Internet traffic in the future. In this paper, we present a comprehensive survey on OFDM-based elastic optical network technologies, including basic principles of OFDM, O-OFDM technologies, the architectures of OFDM-based elastic core optical networks, and related key enabling technologies. The main advantages and issues of OFDM-based elastic core optical networks that are under research are also discussed.",
"title": ""
},
{
"docid": "97dfc67c63e7e162dd06d5cb2959912a",
"text": "To examine the pattern of injuries in cases of fatal shark attack in South Australian waters, the authors examined the files of their institution for all cases of shark attack in which full autopsies had been performed over the past 25 years, from 1974 to 1998. Of the seven deaths attributed to shark attack during this period, full autopsies were performed in only two cases. In the remaining five cases, bodies either had not been found or were incomplete. Case 1 was a 27-year-old male surfer who had been attacked by a shark. At autopsy, the main areas of injury involved the right thigh, which displayed characteristic teeth marks, extensive soft tissue damage, and incision of the femoral artery. There were also incised wounds of the right wrist. Bony injury was minimal, and no shark teeth were recovered. Case 2 was a 26-year-old male diver who had been attacked by a shark. At autopsy, the main areas of injury involved the left thigh and lower leg, which displayed characteristic teeth marks, extensive soft tissue damage, and incised wounds of the femoral artery and vein. There was also soft tissue trauma to the left wrist, with transection of the radial artery and vein. Bony injury was minimal, and no shark teeth were recovered. In both cases, death resulted from exsanguination following a similar pattern of soft tissue and vascular damage to a leg and arm. This type of injury is in keeping with predator attack from underneath or behind, with the most severe injuries involving one leg. Less severe injuries to the arms may have occurred during the ensuing struggle. Reconstruction of the damaged limb in case 2 by sewing together skin, soft tissue, and muscle bundles not only revealed that no soft tissue was missing but also gave a clearer picture of the pattern of teeth marks, direction of the attack, and species of predator.",
"title": ""
},
{
"docid": "d12d475dc72f695d3aecfb016229da19",
"text": "Following the increasing popularity of the mobile ecosystem, cybercriminals have increasingly targeted mobile ecosystems, designing and distributing malicious apps that steal information or cause harm to the device's owner. Aiming to counter them, detection techniques based on either static or dynamic analysis that model Android malware, have been proposed. While the pros and cons of these analysis techniques are known, they are usually compared in the context of their limitations e.g., static analysis is not able to capture runtime behaviors, full code coverage is usually not achieved during dynamic analysis, etc. Whereas, in this paper, we analyze the performance of static and dynamic analysis methods in the detection of Android malware and attempt to compare them in terms of their detection performance, using the same modeling approach.To this end, we build on MAMADROID, a state-of-the-art detection system that relies on static analysis to create a behavioral model from the sequences of abstracted API calls. Then, aiming to apply the same technique in a dynamic analysis setting, we modify CHIMP, a platform recently proposed to crowdsource human inputs for app testing, in order to extract API calls' sequences from the traces produced while executing the app on a CHIMP virtual device. We call this system AUNTIEDROID and instantiate it by using both automated (Monkey) and usergenerated inputs. We find that combining both static and dynamic analysis yields the best performance, with $F -$measure reaching 0.92. We also show that static analysis is at least as effective as dynamic analysis, depending on how apps are stimulated during execution, and investigate the reasons for inconsistent misclassifications across methods.",
"title": ""
},
{
"docid": "c906d026937ebea3525f5dee5d923335",
"text": "VGGNets have turned out to be effective for object recognition in still images. However, it is unable to yield good performance by directly adapting the VGGNet models trained on the ImageNet dataset for scene recognition. This report describes our implementation of training the VGGNets on the large-scale Places205 dataset. Specifically, we train three VGGNet models, namely VGGNet-11, VGGNet-13, and VGGNet-16, by using a Multi-GPU extension of Caffe toolbox with high computational efficiency. We verify the performance of trained Places205-VGGNet models on three datasets: MIT67, SUN397, and Places205. Our trained models achieve the state-of-the-art performance o n these datasets and are made public available 1.",
"title": ""
},
{
"docid": "71c94681f64ad6b697a9370691db9e9e",
"text": "The construction of a depression rating scale designed to be particularly sensitive to treatment effects is described. Ratings of 54 English and 52 Swedish patients on a 65 item comprehensive psychopathology scale were used to identify the 17 most commonly occurring symptoms in primary depressive illness in the combined sample. Ratings on these 17 items for 64 patients participating in studies of four different antidepressant drugs were used to create a depression scale consisting of the 10 items which showed the largest changes with treatment and the highest correlation to overall change. The inner-rater reliability of the new depression scale was high. Scores on the scale correlated significantly with scores on a standard rating scale for depression, the Hamilton Rating Scale (HRS), indicating its validity as a general severity estimate. Its capacity to differentiate between responders and non-responders to antidepressant treatment was better than the HRS, indicating greater sensitivity to change. The practical and ethical implications in terms of smaller sample sizes in clinical trials are discussed.",
"title": ""
},
{
"docid": "d6039a3f998b33c08b07696dfb1c2ca9",
"text": "In this paper, we propose a platform surveillance monitoring system using image processing technology for passenger safety in railway station. The proposed system monitors almost entire length of the track line in the platform by using multiple cameras, and determines in real-time whether a human or dangerous obstacle is in the preset monitoring area by using image processing technology. According to the experimental results, we verity system performance in real condition. Detection of train state and object is conducted robustly by using proposed image processing algorithm. Moreover, to deal with the accident immediately, the system provides local station, central control room and train with the video information and alarm message.",
"title": ""
},
{
"docid": "dc6fe019c28ed63f435f295534f944a1",
"text": "Research on integrated neural-symbolic systems has made significant progress in the recent past. In particular the understanding of ways to deal with symbolic knowledge within connectionist systems (also called artificial neural networks) has reached a critical mass which enables the community to strive for applicable implementations and use cases. Recent work has covered a great variety of logics used in artificial intelligence and provides a multitude of techniques for dealing with them within the context of artificial neural networks. Already in the pioneering days of computational models of neural cognition, the question was raised how symbolic knowledge can be represented and dealt with within neural networks. The landmark paper [McCulloch and Pitts, 1943] provides fundamental insights how propositional logic can be processed using simple artificial neural networks. Within the following decades, however, the topic did not receive much attention as research in artificial intelligence initially focused on purely symbolic approaches. The power of machine learning using artificial neural networking was not recognized until the 80s, when in particular the backpropagation algorithm [Rumelhart et al., 1986] made connectionist learning feasible and applicable in practice. These advances indicated a breakthrough in machine learning which quickly led to industrial-strength applications in areas such as image analysis, speech and pattern recognition, investment analysis, engine monitoring, fault diagnosis, etc. During a training process from raw data, artificial neural networks acquire expert knowledge about the problem domain, and the ability to generalize this knowledge to similar but previously unencountered situations in a way which often surpasses the abilities of human experts. The knowledge obtained during the training process, however, is hidden within",
"title": ""
},
{
"docid": "6286480f676c75e1cac4af9329227258",
"text": "Understanding physical phenomena is a key competence that enables humans and animals to act and interact under uncertain perception in previously unseen environments containing novel object and their configurations. Developmental psychology has shown that such skills are acquired by infants from observations at a very early stage. In this paper, we contrast a more traditional approach of taking a modelbased route with explicit 3D representations and physical simulation by an end-to-end approach that directly predicts stability and related quantities from appearance. We ask the question if and to what extent and quality such a skill can directly be acquired in a data-driven way— bypassing the need for an explicit simulation. We present a learning-based approach based on simulated data that predicts stability of towers comprised of wooden blocks under different conditions and quantities related to the potential fall of the towers. The evaluation is carried out on synthetic data and compared to human judgments on the same stimuli.",
"title": ""
},
{
"docid": "0e2fdb9fc054e47a3f0b817f68de68b1",
"text": "Recent regulatory guidance suggests that drug metabolites identified in human plasma should be present at equal or greater levels in at least one of the animal species used in safety assessments (MIST). Often synthetic standards for the metabolites do not exist, thus this has introduced multiple challenges regarding the quantitative comparison of metabolites between human and animals. Various bioanalytical approaches are described to evaluate the exposure of metabolites in animal vs. human. A simple LC/MS/MS peak area ratio comparison approach is the most facile and applicable approach to make a first assessment of whether metabolite exposures in animals exceed that in humans. In most cases, this measurement is sufficient to demonstrate that an animal toxicology study of the parent drug has covered the safety of the human metabolites. Methods whereby quantitation of metabolites can be done in the absence of chemically synthesized authentic standards are also described. Only in rare cases, where an actual exposure measurement of a metabolite is needed, will a validated or qualified method requiring a synthetic standard be needed. The rigor of the bioanalysis is increased accordingly based on the results of animal:human ratio measurements. This data driven bioanalysis strategy to address MIST issues within standard drug development processes is described.",
"title": ""
},
{
"docid": "035696f6f2e79cb226c6bc45991cbb5a",
"text": "The vast amount of research over the past decades has significantly added to our knowledge of phantom limb pain. Multiple factors including site of amputation or presence of preamputation pain have been found to have a positive correlation with the development of phantom limb pain. The paradigms of proposed mechanisms have shifted over the past years from the psychogenic theory to peripheral and central neural changes involving cortical reorganization. More recently, the role of mirror neurons in the brain has been proposed in the generation of phantom pain. A wide variety of treatment approaches have been employed, but mechanism-based specific treatment guidelines are yet to evolve. Phantom limb pain is considered a neuropathic pain, and most treatment recommendations are based on recommendations for neuropathic pain syndromes. Mirror therapy, a relatively recently proposed therapy for phantom limb pain, has mixed results in randomized controlled trials. Most successful treatment outcomes include multidisciplinary measures. This paper attempts to review and summarize recent research relative to the proposed mechanisms of and treatments for phantom limb pain.",
"title": ""
},
{
"docid": "3a18976245cfc4b50e97aadf304ef913",
"text": "Key-Value Stores (KVS) are becoming increasingly popular because they scale up and down elastically, sustain high throughputs for get/put workloads and have low latencies. KVS owe these advantages to their simplicity. This simplicity, however, comes at a cost: It is expensive to process complex, analytical queries on top of a KVS because today’s generation of KVS does not support an efficient way to scan the data. The problem is that there are conflicting goals when designing a KVS for analytical queries and for simple get/put workloads: Analytical queries require high locality and a compact representation of data whereas elastic get/put workloads require sparse indexes. This paper shows that it is possible to have it all, with reasonable compromises. We studied the KVS design space and built TellStore, a distributed KVS, that performs almost as well as state-of-the-art KVS for get/put workloads and orders of magnitude better for analytical and mixed workloads. This paper presents the results of comprehensive experiments with an extended version of the YCSB benchmark and a workload from the telecommunication industry.",
"title": ""
},
{
"docid": "2e8e9401e76bfdb2b121fbc7da29b2c1",
"text": "BACKGROUND\nMagnetic resonance (MR) imaging has established its usefulness in diagnosing hamstring muscle strain and identifying features correlating with the duration of rehabilitation in athletes; however, data are currently lacking that may predict which imaging parameters may be predictive of a repeat strain.\n\n\nPURPOSE\nThis study was conducted to identify whether any MR imaging-identifiable parameters are predictive of athletes at risk of sustaining a recurrent hamstring strain in the same playing season.\n\n\nSTUDY DESIGN\nCohort study; Level of evidence, 3.\n\n\nMETHODS\nForty-one players of the Australian Football League who sustained a hamstring injury underwent MR examination within 3 days of injury between February and August 2002. The imaging parameters measured were the length of injury, cross-sectional area, the specific muscle involved, and the location of the injury within the muscle-tendon unit. Players who suffered a repeat injury during the same season were reimaged, and baseline and repeat injury measurements were compared. Comparison was also made between this group and those who sustained a single strain.\n\n\nRESULTS\nForty-one players sustained hamstring strains that were positive on MR imaging, with 31 injured once and 10 suffering a second injury. The mean length of hamstring muscle injury for the isolated group was 83.4 mm, compared with 98.7 mm for the reinjury group (P = .35). In the reinjury group, the second strain was also of greater length than the original (mean, 107.5 mm; P = .07). Ninety percent of players sustaining a repeat injury demonstrated an injury length greater than 60 mm, compared with only 58% in the single strain group (P = .01). Only 7% of players (1 of 14) with a strain <60 mm suffered a repeat injury. Of the 27 players sustaining a hamstring strain >60 mm, 33% (9 of 27) suffered a repeat injury. Of all the parameters assessed, only a history of anterior cruciate ligament sprain was a statistically significant predictor for suffering a second strain during the same season of competition.\n\n\nCONCLUSION\nA history of anterior cruciate ligament injury was the only statistically significant risk factor for a recurrent hamstring strain in our study. Of the imaging parameters, the MR length of a strain had the strongest correlation association with a repeat hamstring strain and therefore may assist in identifying which athletes are more likely to suffer further reinjury.",
"title": ""
},
{
"docid": "f9e273248ed6e73766f1fc5ba1ecdfda",
"text": "Rapid, vertically climbing cockroaches produced climbing dynamics similar to geckos, despite differences in attachment mechanism, ;foot or toe' morphology and leg number. Given the common pattern in such diverse species, we propose the first template for the dynamics of rapid, legged climbing analogous to the spring-loaded, inverted pendulum used to characterize level running in a diversity of pedestrians. We measured single leg wall reaction forces and center of mass dynamics in death-head cockroaches Blaberus discoidalis, as they ascended a three-axis force plate oriented vertically and coated with glass beads to aid attachment. Cockroaches used an alternating tripod gait during climbs at 19.5+/-4.2 cm s(-1), approximately 5 body lengths s(-1). Single-leg force patterns differed significantly from level running. During vertical climbing, all legs generated forces to pull the animal up the plate. Front and middle legs pulled laterally toward the midline. Front legs pulled the head toward the wall, while hind legs pushed the abdomen away. These single-leg force patterns summed to generate dynamics of the whole animal in the frontal plane such that the center of mass cyclically accelerated up the wall in synchrony with cyclical side-to-side motion that resulted from alternating net lateral pulling forces. The general force patterns used by cockroaches and geckos have provided biological inspiration for the design of a climbing robot named RiSE (Robots in Scansorial Environments).",
"title": ""
},
{
"docid": "2f9e5a34137fe7871c9388078c57dc8e",
"text": "This paper presents a new model of measuring semantic similarity in the taxonomy of WordNet. The model takes the path length between two concepts and IC value of each concept as its metric, furthermore, the weight of two metrics can be adapted artificially. In order to evaluate our model, traditional and widely used datasets are used. Firstly, coefficients of correlation between human ratings of similarity and six computational models are calculated, the result shows our new model outperforms their homologues. Then, the distribution graphs of similarity value of 65 word pairs are discussed our model having no faulted zone more centralized than other five methods. So our model can make up the insufficient of other methods which only using one metric(path length or IC value) in their model.",
"title": ""
},
{
"docid": "d9605c1cde4c40d69c2faaea15eb466c",
"text": "A magnetically tunable ferrite-loaded substrate integrated waveguide (SIW) cavity resonator is presented and demonstrated. X-band cavity resonator is operated in the dominant mode and the ferrite slabs are loaded onto the side walls of the cavity where the value of magnetic field is highest. Measured results for single and double ferrite-loaded SIW cavity resonators are presented. Frequency tuning range of more than 6% and 10% for single and double ferrite slabs are obtained. Unloaded Q -factor of more than 200 is achieved.",
"title": ""
},
{
"docid": "7afe4444a805f1994a40f98e01908509",
"text": "It is well known that CMOS scaling trends are now accompanied by less desirable byproducts such as increased energy dissipation. To combat the aforementioned challenges, solutions are sought at both the device and architectural levels. With this context, this work focuses on embedding a low voltage device, a Tunneling Field Effect Transistor (TFET) within a Cellular Neural Network (CNN) -- a low power analog computing architecture. Our study shows that TFET-based CNN systems, aside from being fully functional, also provide significant power savings when compared to the conventional resistor-based CNN. Our initial studies suggest that power savings are possible by carefully engineering lower voltage, lower current TFET devices without sacrificing performance. Moreover, TFET-based CNN reduces implementation footprints by eliminating the hardware required to realize output transfer functions. Application dynamics are verified through simulations. We conclude the paper with a discussion of desired device characteristics for CNN architectures with enhanced functionality.",
"title": ""
},
{
"docid": "f90e6d3084733994935fcbee64286aec",
"text": "To find the position of an acoustic source in a room, typically, a set of relative delays among different microphone pairs needs to be determined. The generalized cross-correlation (GCC) method is the most popular to do so and is well explained in a landmark paper by Knapp and Carter. In this paper, the idea of cross-correlation coefficient between two random signals is generalized to the multichannel case by using the notion of spatial prediction. The multichannel spatial correlation matrix is then deduced and its properties are discussed. We then propose a new method based on the multichannel spatial correlation matrix for time delay estimation. It is shown that this new approach can take advantage of the redundancy when more than two microphones are available and this redundancy can help the estimator to better cope with noise and reverberation.",
"title": ""
},
{
"docid": "437457e673df18fc69d57c2c16a992fc",
"text": "Human-associated microbial communities vary across individuals: possible contributing factors include (genetic) relatedness, diet, and age. However, our surroundings, including individuals with whom we interact, also likely shape our microbial communities. To quantify this microbial exchange, we surveyed fecal, oral, and skin microbiota from 60 families (spousal units with children, dogs, both, or neither). Household members, particularly couples, shared more of their microbiota than individuals from different households, with stronger effects of co-habitation on skin than oral or fecal microbiota. Dog ownership significantly increased the shared skin microbiota in cohabiting adults, and dog-owning adults shared more 'skin' microbiota with their own dogs than with other dogs. Although the degree to which these shared microbes have a true niche on the human body, vs transient detection after direct contact, is unknown, these results suggest that direct and frequent contact with our cohabitants may significantly shape the composition of our microbial communities. DOI:http://dx.doi.org/10.7554/eLife.00458.001.",
"title": ""
},
{
"docid": "d7582552589626891258f52b0d750915",
"text": "Social Live Stream Services (SLSS) exploit a new level of social interaction. One of the main challenges in these services is how to detect and prevent deviant behaviors that violate community guidelines. In this work, we focus on adult content production and consumption in two widely used SLSS, namely Live.me and Loops Live, which have millions of users producing massive amounts of video content on a daily basis. We use a pre-trained deep learning model to identify broadcasters of adult content. Our results indicate that moderation systems in place are highly ineffective in suspending the accounts of such users. We create two large datasets by crawling the social graphs of these platforms, which we analyze to identify characterizing traits of adult content producers and consumers, and discover interesting patterns of relationships among them, evident in both networks.",
"title": ""
}
] | scidocsrr |
94e2c515da44e97d8b7db8821ebcb2e4 | Two systems for empathy: a double dissociation between emotional and cognitive empathy in inferior frontal gyrus versus ventromedial prefrontal lesions. | [
{
"docid": "ad2655aaed8a4f3379cb206c6e405f16",
"text": "Lesions of the orbital frontal lobe, particularly its medial sectors, are known to cause deficits in empathic ability, whereas the role of this region in theory of mind processing is the subject of some controversy. In a functional magnetic resonance imaging study with healthy participants, emotional perspective-taking was contrasted with cognitive perspective-taking in order to examine the role of the orbital frontal lobe in subcomponents of theory of mind processing. Subjects responded to a series of scenarios presented visually in three conditions: emotional perspective-taking, cognitive perspective-taking and a control condition that required inferential reasoning, but not perspective-taking. Group results demonstrated that the medial orbitofrontal lobe, defined as Brodmann's areas 11 and 25, was preferentially involved in emotional as compared to cognitive perspective-taking. This finding is both consistent with the lesion literature, and resolves the inconsistency of orbital frontal findings in the theory of mind literature.",
"title": ""
},
{
"docid": "6a4437fa8a5a764d99ed5471401f5ce4",
"text": "There is disagreement in the literature about the exact nature of the phenomenon of empathy. There are emotional, cognitive, and conditioning views, applying in varying degrees across species. An adequate description of the ultimate and proximate mechanism can integrate these views. Proximately, the perception of an object's state activates the subject's corresponding representations, which in turn activate somatic and autonomic responses. This mechanism supports basic behaviors (e.g., alarm, social facilitation, vicariousness of emotions, mother-infant responsiveness, and the modeling of competitors and predators) that are crucial for the reproductive success of animals living in groups. The Perception-Action Model (PAM), together with an understanding of how representations change with experience, can explain the major empirical effects in the literature (similarity, familiarity, past experience, explicit teaching, and salience). It can also predict a variety of empathy disorders. The interaction between the PAM and prefrontal functioning can also explain different levels of empathy across species and age groups. This view can advance our evolutionary understanding of empathy beyond inclusive fitness and reciprocal altruism and can explain different levels of empathy across individuals, species, stages of development, and situations.",
"title": ""
}
] | [
{
"docid": "a338df86cf504d246000c42512473f93",
"text": "Natural Language Processing (NLP) has emerged with a wide scope of research in the area. The Burmese language, also called the Myanmar Language is a resource scarce, tonal, analytical, syllable-timed and principally monosyllabic language with Subject-Object-Verb (SOV) ordering. NLP of Burmese language is also challenged by the fact that it has no white spaces and word boundaries. Keeping these facts in view, the current paper is a first formal attempt to present a bibliography of research works pertinent to NLP tasks in Burmese language. Instead of presenting mere catalogue, the current work is also specifically elaborated by annotations as well as classifications of NLP task research works in NLP related categories. The paper presents the state-of-the-art of Burmese NLP tasks. Both annotations and classifications of NLP tasks of Burmese language are useful to the scientific community as it shows where the field of research in Burmese NLP is going. In fact, to the best of author’s knowledge, this is first work of its kind worldwide for any language. For a period spanning more than 25 years, the paper discusses Burmese language Word Identification, Segmentation, Disambiguation, Collation, Semantic Parsing and Tokenization followed by Part-Of-Speech (POS) Tagging, Machine Translation Systems (MTS), Text Keying/Input, Recognition and Text Display Methods. Burmese language WordNet, Search Engine and influence of other languages on Burmese language are also discussed.",
"title": ""
},
{
"docid": "3ea6de664a7ac43a1602b03b46790f0a",
"text": "After reviewing the design of a class of lowpass recursive digital filters having integer multiplier and linear phase characteristics, the possibilities for extending the class to include high pass, bandpass, and bandstop (‘notch’) filters are described. Experience with a PDP 11 computer has shown that these filters may be programmed simply using machine code, and that online operation at sampling rates up to about 8 kHz is possible. The practical application of such filters is illustrated by using a notch desgin to remove mains-frequency interference from an e.c.g. waveform. Après avoir passé en revue la conception d'un type de filtres digitaux récurrents passe-bas à multiplicateurs incorporés et à caractéristiques de phase linéaires, cet article décrit les possibilités d'extension de ce type aux filtres, passe-haut, passe-bande et à élimination de bande. Une expérience menée avec un ordinateur PDP 11 a indiqué que ces filtres peuvent être programmés de manière simple avec un code machine, et qu'il est possible d'effectuer des opérations en ligne avec des taux d'échantillonnage jusqu'à environ 8 kHz. L'application pratique de tels filtres est illustrée par un exemple dans lequel un filtre à élimination de bande est utilisé pour éliminer les interférences due à la fréquence du courant d'alimentation dans un tracé d'e.c.g. Nach einer Untersuchung der Konstruktion einer Gruppe von Rekursivdigitalfiltern mit niedrigem Durchlässigkeitsbereich und mit ganzzahligen Multipliziereinrichtungen und Linearphaseneigenschaften werden die Möglichkeiten beschrieben, die Gruppe so zu erweitern, daß sie Hochfilter, Bandpaßfilter und Bandstopfilter (“Kerbfilter”) einschließt. Erfahrungen mit einem PDP 11-Computer haben gezeigt, daß diese Filter auf einfache Weise unter Verwendung von Maschinenkode programmiert werden können und daß On-Line-Betrieb bei Entnahmegeschwindigkeiten von bis zu 8 kHz möglich ist. Die praktische Anwendung solcher Filter wird durch Verwendung einer Kerbkonstruktion zur Ausscheidung von Netzfrequenzstörungen von einer ECG-Wellenform illustriert.",
"title": ""
},
{
"docid": "6a6238bb56eacc7d8ecc8f15f753b745",
"text": "Privacy-preservation has emerged to be a major concern in devising a data mining system. But, protecting the privacy of data mining input does not guarantee a privacy-preserved output. This paper focuses on preserving the privacy of data mining output and particularly the output of classification task. Further, instead of static datasets, we consider the classification of continuously arriving data streams: a rapidly growing research area. Due to the challenges of data stream classification such as vast volume, a mixture of labeled and unlabeled instances throughout the stream and timely classifier publication, enforcing privacy-preservation techniques becomes even more challenging. In order to achieve this goal, we propose a systematic method for preserving output-privacy in data stream classification that addresses several applications like loan approval, credit card fraud detection, disease outbreak or biological attack detection. Specifically, we propose an algorithm named Diverse and k-Anonymized HOeffding Tree (DAHOT) that is an amalgamation of popular data stream classification algorithm Hoeffding tree and a variant of k-anonymity and l-diversity principles. The empirical results on real and synthetic data streams verify the effectiveness of DAHOT as compared to its bedrock Hoeffding tree and two other techniques, one that learns sanitized decision trees from sampled data stream and other technique that uses ensemble-based classification. DAHOT guarantees to preserve the private patterns while classifying the data streams accurately.",
"title": ""
},
{
"docid": "d7aac1208aa2ef63ed9a4ef5b67d8017",
"text": "We contrast two theoretical approaches to social influence, one stressing interpersonal dependence, conceptualized as normative and informational influence (Deutsch & Gerard, 1955), and the other stressing group membership, conceptualized as self-categorization and referent informational influence (Turner, Hogg, Oakes, Reicher & Wetherell, 1987). We argue that both social comparisons to reduce uncertainty and the existence of normative pressure to comply depend on perceiving the source of influence as belonging to one's own category. This study tested these two approaches using three influence paradigms. First we demonstrate that, in Sherif's (1936) autokinetic effect paradigm, the impact of confederates on the formation of a norm decreases as their membership of a different category is made more salient to subjects. Second, in the Asch (1956) conformity paradigm, surveillance effectively exerts normative pressure if done by an in-group but not by an out-group. In-group influence decreases and out-group influence increases when subjects respond privately. Self-report data indicate that in-group confederates create more subjective uncertainty than out-group confederates and public responding seems to increase cohesiveness with in-group - but decrease it with out-group - sources of influence. In our third experiment we use the group polarization paradigm (e.g. Burnstein & Vinokur, 1973) to demonstrate that, when categorical differences between two subgroups within a discussion group are made salient, convergence of opinion between the subgroups is inhibited. Taken together the experiments show that self-categorization can be a crucial determining factor in social influence.",
"title": ""
},
{
"docid": "efae02feebc4a2efe2cf98ab4d19cd34",
"text": "User behavior on the Web changes over time. For example, the queries that people issue to search engines, and the underlying informational goals behind the queries vary over time. In this paper, we examine how to model and predict this temporal user behavior. We develop a temporal modeling framework adapted from physics and signal processing that can be used to predict time-varying user behavior using smoothing and trends. We also explore other dynamics of Web behaviors, such as the detection of periodicities and surprises. We develop a learning procedure that can be used to construct models of users' activities based on features of current and historical behaviors. The results of experiments indicate that by using our framework to predict user behavior, we can achieve significant improvements in prediction compared to baseline models that weight historical evidence the same for all queries. We also develop a novel learning algorithm that explicitly learns when to apply a given prediction model among a set of such models. Our improved temporal modeling of user behavior can be used to enhance query suggestions, crawling policies, and result ranking.",
"title": ""
},
{
"docid": "9cdc7b6b382ce24362274b75da727183",
"text": "Collaborative spectrum sensing is subject to the attack of malicious secondary user(s), which may send false reports. Therefore, it is necessary to detect potential attacker(s) and then exclude the attacker's report for spectrum sensing. Many existing attacker-detection schemes are based on the knowledge of the attacker's strategy and thus apply the Bayesian attacker detection. However, in practical cognitive radio systems the data fusion center typically does not know the attacker's strategy. To alleviate the problem of the unknown strategy of attacker(s), an abnormality-detection approach, based on the abnormality detection in data mining, is proposed. The performance of the attacker detection in the single-attacker scenario is analyzed explicitly. For the case in which the attacker does not know the reports of honest secondary users (called independent attack), it is shown that the attacker can always be detected as the number of spectrum sensing rounds tends to infinity. For the case in which the attacker knows all the reports of other secondary users, based on which the attacker sends its report (called dependent attack), an approach for the attacker to perfectly avoid being detected is found, provided that the attacker has perfect information about the miss-detection and false-alarm probabilities. This motivates cognitive radio networks to protect the reports of secondary users. The performance of attacker detection in the general case of multiple attackers is demonstrated using numerical simulations.",
"title": ""
},
{
"docid": "6e8d1b5c2183ce09aadb09e4ff215241",
"text": "The widely used ChestX-ray14 dataset addresses an important medical image classification problem and has the following caveats: 1) many lung pathologies are visually similar, 2) a variant of diseases including lung cancer, tuberculosis, and pneumonia are present in a single scan, i.e. multiple labels and 3) The incidence of healthy images is much larger than diseased samples, creating imbalanced data. These properties are common in medical domain. Existing literature uses stateof-the-art DensetNet/Resnet models being transfer learned where output neurons of the networks are trained for individual diseases to cater for multiple diseases labels in each image. However, most of them don’t consider relationship between multiple classes. In this work we have proposed a novel error function, Multi-label Softmax Loss (MSML), to specifically address the properties of multiple labels and imbalanced data. Moreover, we have designed deep network architecture based on fine-grained classification concept that incorporates MSML. We have evaluated our proposed method on various network backbones and showed consistent performance improvements of AUC-ROC scores on the ChestX-ray14 dataset. The proposed error function provides a new method to gain improved performance across wider medical datasets.",
"title": ""
},
{
"docid": "0fd48f6f0f5ef1e68c2a157c16713e86",
"text": "Location distinction is the ability to determine when a device has changed its position. We explore the opportunity to use sophisticated PHY-layer measurements in wireless networking systems for location distinction. We first compare two existing location distinction methods - one based on channel gains of multi-tonal probes, and another on channel impulse response. Next, we combine the benefits of these two methods to develop a new link measurement that we call the complex temporal signature. We use a 2.4 GHz link measurement data set, obtained from CRAWDAD [10], to evaluate the three location distinction methods. We find that the complex temporal signature method performs significantly better compared to the existing methods. We also perform new measurements to understand and model the temporal behavior of link signatures over time. We integrate our model in our location distinction mechanism and significantly reduce the probability of false alarms due to temporal variations of link signatures.",
"title": ""
},
{
"docid": "37dbfc84d3b04b990d8b3b31d2013f77",
"text": "Large projects such as kernels, drivers and libraries follow a code style, and have recurring patterns. In this project, we explore learning based code recommendation, to use the project context and give meaningful suggestions. Using word vectors to model code tokens, and neural network based learning techniques, we are able to capture interesting patterns, and predict code that that cannot be predicted by a simple grammar and syntax based approach as in conventional IDEs. We achieve a total prediction accuracy of 56.0% on Linux kernel, a C project, and 40.6% on Twisted, a Python networking library.",
"title": ""
},
{
"docid": "eb7ccd69c0bbb4e421b8db3b265f5ba6",
"text": "The discovery of Novoselov et al. (2004) of a simple method to transfer a single atomic layer of carbon from the c-face of graphite to a substrate suitable for the measurement of its electrical and optical properties has led to a renewed interest in what was considered to be before that time a prototypical, yet theoretical, two-dimensional system. Indeed, recent theoretical studies of graphene reveal that the linear electronic band dispersion near the Brillouin zone corners gives rise to electrons and holes that propagate as if they were massless fermions and anomalous quantum transport was experimentally observed. Recent calculations and experimental determination of the optical phonons of graphene reveal Kohn anomalies at high-symmetry points in the Brillouin zone. They also show that the Born– Oppenheimer principle breaks down for doped graphene. Since a carbon nanotube can be viewed as a rolled-up sheet of graphene, these recent theoretical and experimental results on graphene should be important to researchers working on carbon nanotubes. The goal of this contribution is to review the exciting news about the electronic and phonon states of graphene and to suggest how these discoveries help understand the properties of carbon nanotubes.",
"title": ""
},
{
"docid": "f7e14c5e8a54e01c3b8f64e08f30a500",
"text": "As a subsystem of an Intelligent Transportation System (ITS), an Advanced Traveller Information System (ATIS) disseminates real-time traffic information to travellers. This paper analyses traffic flows data, describes methodology of traffic flows data processing and visualization in digital ArcGIS online maps. Calculation based on real time traffic data from equipped traffic sensors in Vilnius city streets. The paper also discusses about traffic conditions and impacts for Vilnius streets network from the point of traffic flows view. Furthermore, a comprehensive traffic flow GIS modelling procedure is presented, which relates traffic flows data from sensors to street network segments and updates traffic flow data to GIS database. GIS maps examples and traffic flows analysis possibilities in this paper presented as well.",
"title": ""
},
{
"docid": "a1bb09726327d73cf73c1aa9b0a2c39d",
"text": "Advances in neural network language models have demonstrated that these models can effectively learn representations of words meaning. In this paper, we explore a variation of neural language models that can learn on concepts taken from structured ontologies and extracted from free-text, rather than directly from terms in free-text.\n This model is employed for the task of measuring semantic similarity between medical concepts, a task that is central to a number of techniques in medical informatics and information retrieval. The model is built with two medical corpora (journal abstracts and patient records) and empirically validated on two ground-truth datasets of human-judged concept pairs assessed by medical professionals. Empirically, our approach correlates closely with expert human assessors (≈0.9) and outperforms a number of state-of-the-art benchmarks for medical semantic similarity.\n The demonstrated superiority of this model for providing an effective semantic similarity measure is promising in that this may translate into effectiveness gains for techniques in medical information retrieval and medical informatics (e.g., query expansion and literature-based discovery).",
"title": ""
},
{
"docid": "c1978e4936ed5bda4e51863dea7e93ee",
"text": "In needle-based medical procedures, beveled-tip flexible needles are steered inside soft tissue with the aim of reaching pre-defined target locations. The efficiency of needle-based interventions depends on accurate control of the needle tip. This paper presents a comprehensive mechanics-based model for simulation of planar needle insertion in soft tissue. The proposed model for needle deflection is based on beam theory, works in real-time, and accepts the insertion velocity as an input that can later be used as a control command for needle steering. The model takes into account the effects of tissue deformation, needle-tissue friction, tissue cutting force, and needle bevel angle on needle deflection. Using a robot that inserts a flexible needle into a phantom tissue, various experiments are conducted to separately identify different subsets of the model parameters. The validity of the proposed model is verified by comparing the simulation results to the empirical data. The results demonstrate the accuracy of the proposed model in predicting the needle tip deflection for different insertion velocities.",
"title": ""
},
{
"docid": "0245101fac73b247fb2750413aad3915",
"text": "State evaluation and opponent modelling are important areas to consider when designing game-playing Artificial Intelligence. This paper presents a model for predicting which player will win in the real-time strategy game StarCraft. Model weights are learned from replays using logistic regression. We also present some metrics for estimating player skill which can be used a features in the predictive model, including using a battle simulation as a baseline to compare player performance against.",
"title": ""
},
{
"docid": "ba39b85859548caa2d3f1d51a7763482",
"text": "A new antenna structure of internal LTE/WWAN laptop computer antenna formed by a coupled-fed loop antenna connected with two branch radiators is presented. The two branch radiators consist of one longer strip and one shorter strip, both contributing multi-resonant modes to enhance the bandwidth of the antenna. The antenna's lower band is formed by a dual-resonant mode mainly contributed by the longer branch strip, while the upper band is formed by three resonant modes contributed respectively by one higher-order resonant mode of the longer branch strip, one resonant mode of the coupled-fed loop antenna alone, and one resonant mode of the shorter branch strip. The antenna's lower and upper bands can therefore cover the desired 698~960 and 1710~2690 MHz bands, respectively. The proposed antenna is suitable to be mounted at the top shielding metal wall of the display ground of the laptop computer and occupies a small volume of 4 × 10 × 75 mm3 above the top shielding metal wall, which makes it promising to be embedded inside the casing of the laptop computer as an internal antenna.",
"title": ""
},
{
"docid": "dca65464cc8a3bb59f2544ef9a09e388",
"text": "Some authors clearly showed that faking reduces the construct validity of personality questionnaires, whilst many others found no such effect. A possible explanation for mixed results could be searched for in a variety of methodological strategies in forming comparison groups supposed to differ in the level of faking: candidates vs. non-candidates; groups of individuals with \"high\" vs. \"low\" social desirability score; and groups given instructions to respond honestly vs. instructions to \"fake good\". All three strategies may be criticized for addressing the faking problem indirectly – assuming that comparison groups really differ in the level of response distortion, which might not be true. Therefore, in a within-subject design study we examined how faking affects the construct validity of personality inventories using a direct measure of faking. The results suggest that faking reduces the construct validity of personality questionnaires gradually – the effect was stronger in the subsample of participants who distorted their responses to a greater extent.",
"title": ""
},
{
"docid": "4cf669d93a62c480f4f6795f47744bc8",
"text": "We present an estimate of an upper bound of 1.75 bits for the entropy of characters in printed English, obtained by constructing a word trigram model and then computing the cross-entropy between this model and a balanced sample of English text. We suggest the well-known and widely available Brown Corpus of printed English as a standard against which to measure progress in language modeling and offer our bound as the first of what we hope will be a series of steadily decreasing bounds.",
"title": ""
},
{
"docid": "b70d795f7f1bdbc18be034e1d3f20f8e",
"text": "Technical universities, especially in Europe, are facing an important challenge in attracting more diverse groups of students, and in keeping the students they attract motivated and engaged in the curriculum. We describe our experience with gamification, which we loosely define as a teaching technique that uses social gaming elements to deliver higher education. Over the past three years, we have applied gamification to undergraduate and graduate courses in a leading technical university in the Netherlands and in Europe. Ours is one of the first long-running attempts to show that gamification can be used to teach technically challenging courses. The two gamification-based courses, the first-year B.Sc. course Computer Organization and an M.Sc.-level course on the emerging technology of Cloud Computing, have been cumulatively followed by over 450 students and passed by over 75% of them, at the first attempt. We find that gamification is correlated with an increase in the percentage of passing students, and in the participation in voluntary activities and challenging assignments. Gamification seems to also foster interaction in the classroom and trigger students to pay more attention to the design of the course. We also observe very positive student assessments and volunteered testimonials, and a Teacher of the Year award.",
"title": ""
},
{
"docid": "4040c04a9a3cfebe850229cc78233f8c",
"text": "Utility computing delivers compute and storage resources to applications as an 'on-demand utility', much like electricity, from a distributed collection of computing resources. There is great interest in running database applications on utility resources (e.g., Oracle's Grid initiative) due to reduced infrastructure and management costs, higher resource utilization, and the ability to handle sudden load surges. Virtual Machine (VM) technology offers powerful mechanisms to manage a utility resource infrastructure. However, provisioning VMs for applications to meet system performance goals, e.g., to meet service level agreements (SLAs), is an open problem. We are building two systems at Duke - Shirako and NIMO - that collectively address this problem.\n Shirako is a toolkit for leasing VMs to an application from a utility resource infrastructure. NIMO learns application performance models using novel techniques based on active learning, and uses these models to guide VM provisioning in Shirako. We will demonstrate: (a) how NIMO learns performance models in an online and automatic fashion using active learning; and (b) how NIMO uses these models to do automated and on-demand provisioning of VMs in Shirako for two classes of database applications - multi-tier web services and computational science workflows.",
"title": ""
},
{
"docid": "7809fdedaf075955523b51b429638501",
"text": "PM10 prediction has attracted special legislative and scientific attention due to its harmful effects on human health. Statistical techniques have the potential for high-accuracy PM10 prediction and accordingly, previous studies on statistical methods for temporal, spatial and spatio-temporal prediction of PM10 are reviewed and discussed in this paper. A review of previous studies demonstrates that Support Vector Machines, Artificial Neural Networks and hybrid techniques show promise for suitable temporal PM10 prediction. A review of the spatial predictions of PM10 shows that the LUR (Land Use Regression) approach has been successfully utilized for spatial prediction of PM10 in urban areas. Of the six introduced approaches for spatio-temporal prediction of PM10, only one approach is suitable for high-resolved prediction (Spatial resolution < 100 m; Temporal resolution ď 24 h). In this approach, based upon the LUR modeling method, short-term dynamic input variables are employed as explanatory variables alongside typical non-dynamic input variables in a non-linear modeling procedure.",
"title": ""
}
] | scidocsrr |
bfc22d978100eb5b81880d8850ca33a6 | An optical neural interface: in vivo control of rodent motor cortex with integrated fiberoptic and optogenetic technology. | [
{
"docid": "4287db8deb3c4de5d7f2f5695c3e2e70",
"text": "The brain is complex and dynamic. The spatial scales of interest to the neurobiologist range from individual synapses (approximately 1 microm) to neural circuits (centimeters); the timescales range from the flickering of channels (less than a millisecond) to long-term memory (years). Remarkably, fluorescence microscopy has the potential to revolutionize research on all of these spatial and temporal scales. Two-photon excitation (2PE) laser scanning microscopy allows high-resolution and high-sensitivity fluorescence microscopy in intact neural tissue, which is hostile to traditional forms of microscopy. Over the last 10 years, applications of 2PE, including microscopy and photostimulation, have contributed to our understanding of a broad array of neurobiological phenomena, including the dynamics of single channels in individual synapses and the functional organization of cortical maps. Here we review the principles of 2PE microscopy, highlight recent applications, discuss its limitations, and point to areas for future research and development.",
"title": ""
}
] | [
{
"docid": "bfcb1fd882a328daab503a7dd6b6d0a6",
"text": "The notions of disintegration and Bayesian inversion are fundamental in conditional probability theory. They produce channels, as conditional probabilities, from a joint state, or from an already given channel (in opposite direction). These notions exist in the literature, in concrete situations, but are presented here in abstract graphical formulations. The resulting abstract descriptions are used for proving basic results in conditional probability theory. The existence of disintegration and Bayesian inversion is discussed for discrete probability, and also for measure-theoretic probability — via standard Borel spaces and via likelihoods. Finally, the usefulness of disintegration and Bayesian inversion is illustrated in several non-trivial examples.",
"title": ""
},
{
"docid": "c8dae180aae646bf00e202bd24f15f59",
"text": "Massively Multiplayer Online Games (MMOGs) continue to be a popular and lucrative sector of the gaming market. Project Massive was created to assess MMOG players' social experiences both inside and outside of their gaming environments and the impact of these activities on their everyday lives. The focus of Project Massive has been on the persistent player groups or \"guilds\" that form in MMOGs. The survey has been completed online by 1836 players, who reported on their play patterns, commitment to their player organizations, and personality traits like sociability, extraversion and depression. Here we report our cross-sectional findings and describe our future longitudinal work as we track players and their guilds across the evolving landscape of the MMOG product space.",
"title": ""
},
{
"docid": "f613a2ed6f64c469cf1180d1e8fe9e4a",
"text": "We describe an estimation technique which, given a measurement of the depth of a target from a wide-fieldof-view (WFOV) stereo camera pair, produces a minimax risk fixed-size confidence interval estimate for the target depth. This work constitutes the first application to the computer vision domain of optimal fixed-size confidenceinterval decision theory. The approach is evaluated in terms of theoretical capture probability and empirical cap ture frequency during actual experiments with a target on an optical bench. The method is compared to several other procedures including the Kalman Filter. The minimax approach is found to dominate all the other methods in performance. In particular, for the minimax approach, a very close agreement is achieved between theoreticalcapture probability andempiricalcapture frequency. This allows performance to be accurately predicted, greatly facilitating the system design, and delineating the tasks that may be performed with a given system.",
"title": ""
},
{
"docid": "e69acc779b3bd736c0e5bd6962c8d459",
"text": "The genome-wide transcriptome profiling of cancerous and normal tissue samples can provide insights into the molecular mechanisms of cancer initiation and progression. RNA Sequencing (RNA-Seq) is a revolutionary tool that has been used extensively in cancer research. However, no existing RNA-Seq database provides all of the following features: (i) large-scale and comprehensive data archives and analyses, including coding-transcript profiling, long non-coding RNA (lncRNA) profiling and coexpression networks; (ii) phenotype-oriented data organization and searching and (iii) the visualization of expression profiles, differential expression and regulatory networks. We have constructed the first public database that meets these criteria, the Cancer RNA-Seq Nexus (CRN, http://syslab4.nchu.edu.tw/CRN). CRN has a user-friendly web interface designed to facilitate cancer research and personalized medicine. It is an open resource for intuitive data exploration, providing coding-transcript/lncRNA expression profiles to support researchers generating new hypotheses in cancer research and personalized medicine.",
"title": ""
},
{
"docid": "da1990ef0bb7ca5e184c32f33a0a8799",
"text": "Deconvolutional layers have been widely used in a variety of deep models for up-sampling, including encoder-decoder networks for semantic segmentation and deep generative models for unsupervised learning. One of the key limitations of deconvolutional operations is that they result in the so-called checkerboard problem. This is caused by the fact that no direct relationship exists among adjacent pixels on the output feature map. To address this problem, we propose the pixel deconvolutional layer (PixelDCL) to establish direct relationships among adjacent pixels on the up-sampled feature map. Our method is based on a fresh interpretation of the regular deconvolution operation. The resulting PixelDCL can be used to replace any deconvolutional layer in a plug-and-play manner without compromising the fully trainable capabilities of original models. The proposed PixelDCL may result in slight decrease in efficiency, but this can be overcome by an implementation trick. Experimental results on semantic segmentation demonstrate that PixelDCL can consider spatial features such as edges and shapes and yields more accurate segmentation outputs than deconvolutional layers. When used in image generation tasks, our PixelDCL can largely overcome the checkerboard problem suffered by regular deconvolution operations.",
"title": ""
},
{
"docid": "cd12564b6875ddc972334f45bbf41ab9",
"text": "Purpose – The purpose of this paper is to review the literature on Total Productive Maintenance (TPM) and to present an overview of TPM implementation practices adopted by the manufacturing organizations. It also seeks to highlight appropriate enablers and success factors for eliminating barriers in successful TPM implementation. Design/methodology/approach – The paper systematically categorizes the published literature and then analyzes and reviews it methodically. Findings – The paper reveals the important issues in Total Productive Maintenance ranging from maintenance techniques, framework of TPM, overall equipment effectiveness (OEE), TPM implementation practices, barriers and success factors in TPM implementation, etc. The contributions of strategic TPM programmes towards improving manufacturing competencies of the organizations have also been highlighted here. Practical implications – The literature on classification of Total Productive Maintenance has so far been very limited. The paper reviews a large number of papers in this field and presents the overview of various TPM implementation practices demonstrated by manufacturing organizations globally. It also highlights the approaches suggested by various researchers and practitioners and critically evaluates the reasons behind failure of TPM programmes in the organizations. Further, the enablers and success factors for TPM implementation have also been highlighted for ensuring smooth and effective TPM implementation in the organizations. Originality/value – The paper contains a comprehensive listing of publications on the field in question and their classification according to various attributes. It will be useful to researchers, maintenance professionals and others concerned with maintenance to understand the significance of TPM.",
"title": ""
},
{
"docid": "3d0b50111f6c9168b8a269a7d99d8fbc",
"text": "Detecting lies is crucial in many areas, such as airport security, police investigations, counter-terrorism, etc. One technique to detect lies is through the identification of facial micro-expressions, which are brief, involuntary expressions shown on the face of humans when they are trying to conceal or repress emotions. Manual measurement of micro-expressions is hard labor, time consuming, and inaccurate. This paper presents the Design and Development of a Lie Detection System using Facial Micro-Expressions. It is an automated vision system designed and implemented using LabVIEW. An Embedded Vision System (EVS) is used to capture the subject's interview. Then, a LabVIEW program converts the video into series of frames and processes the frames, each at a time, in four consecutive stages. The first two stages deal with color conversion and filtering. The third stage applies geometric-based dynamic templates on each frame to specify key features of the facial structure. The fourth stage extracts the needed measurements in order to detect facial micro-expressions to determine whether the subject is lying or not. Testing results show that this system can be used for interpreting eight facial expressions: happiness, sadness, joy, anger, fear, surprise, disgust, and contempt, and detecting facial micro-expressions. It extracts accurate output that can be employed in other fields of studies such as psychological assessment. The results indicate high precision that allows future development of applications that respond to spontaneous facial expressions in real time.",
"title": ""
},
{
"docid": "d94a4f07939c0f420787b099336f426b",
"text": "A next generation of AESA antennas will be challenged with the need for lower size, weight, power and cost (SWAP-C). This leads to enhanced demands especially with regard to the integration density of the RF-part inside a T/R module. The semiconductor material GaN has proven its capacity for high power amplifiers, robust receive components as well as switch components for separation of transmit and receive mode. This paper will describe the design and measurement results of a GaN-based single-chip T/R module frontend (HPA, LNA and SPDT) using UMS GH25 technology and covering the frequency range from 8 GHz to 12 GHz. Key performance parameters of the frontend are 13 W minimum transmit (TX) output power over the whole frequency range with peak power up to 17 W. The frontend in receive (RX) mode has a noise figure below 3.2 dB over the whole frequency range, and can survive more than 5 W input power. The large signal insertion loss of the used SPDT is below 0.9 dB at 43 dBm input power level.",
"title": ""
},
{
"docid": "92d047856fdf20b41c4f673aae2ced66",
"text": "This paper presents Merlin, a new framework for managing resources in software-defined networks. With Merlin, administrators express high-level policies using programs in a declarative language. The language includes logical predicates to identify sets of packets, regular expressions to encode forwarding paths, and arithmetic formulas to specify bandwidth constraints. The Merlin compiler maps these policies into a constraint problem that determines bandwidth allocations using parameterizable heuristics. It then generates code that can be executed on the network elements to enforce the policies. To allow network tenants to dynamically adapt policies to their needs, Merlin provides mechanisms for delegating control of sub-policies and for verifying that modifications made to sub-policies do not violate global constraints. Experiments demonstrate the expressiveness and effectiveness of Merlin on real-world topologies and applications. Overall, Merlin simplifies network administration by providing high-level abstractions for specifying network policies that provision network resources.",
"title": ""
},
{
"docid": "cd863a82161f4b28cc43eeda21e01a65",
"text": "Face aging, which renders aging faces for an input face, has attracted extensive attention in the multimedia research. Recently, several conditional Generative Adversarial Nets (GANs) based methods have achieved great success. They can generate images fitting the real face distributions conditioned on each individual age group. However, these methods fail to capture the transition patterns, e.g., the gradual shape and texture changes between adjacent age groups. In this paper, we propose a novel Contextual Generative Adversarial Nets (C-GANs) to specifically take it into consideration. The C-GANs consists of a conditional transformation network and two discriminative networks. The conditional transformation network imitates the aging procedure with several specially designed residual blocks. The age discriminative network guides the synthesized face to fit the real conditional distribution. The transition pattern discriminative network is novel, aiming to distinguish the real transition patterns with the fake ones. It serves as an extra regularization term for the conditional transformation network, ensuring the generated image pairs to fit the corresponding real transition pattern distribution. Experimental results demonstrate the proposed framework produces appealing results by comparing with the state-of-the-art and ground truth. We also observe performance gain for cross-age face verification.",
"title": ""
},
{
"docid": "7c2960e9fd059e57b5a0172e1d458250",
"text": "The main goal of this research is to discover the structure of home appliances usage patterns, hence providing more intelligence in smart metering systems by taking into account the usage of selected home appliances and the time of their usage. In particular, we present and apply a set of unsupervised machine learning techniques to reveal specific usage patterns observed at an individual household. The work delivers the solutions applicable in smart metering systems that might: (1) contribute to higher energy awareness; (2) support accurate usage forecasting; and (3) provide the input for demand response systems in homes with timely energy saving recommendations for users. The results provided in this paper show that determining household characteristics from smart meter data is feasible and allows for quickly grasping general trends in data.",
"title": ""
},
{
"docid": "2903e8be6b9a3f8dc818a57197ec1bee",
"text": "A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. A plethora of work has demonstrated that it is easy to find or synthesize inputs for which a neural network is highly confident yet wrong. Generative models are widely viewed to be robust to such mistaken confidence as modeling the density of the input features can be used to detect novel, out-of-distribution inputs. In this paper we challenge this assumption. We find that the density learned by flow-based models, VAEs, and PixelCNNs cannot distinguish images of common objects such as dogs, trucks, and horses (i.e. CIFAR-10) from those of house numbers (i.e. SVHN), assigning a higher likelihood to the latter when the model is trained on the former. Moreover, we find evidence of this phenomenon when pairing several popular image data sets: FashionMNIST vs MNIST, CelebA vs SVHN, ImageNet vs CIFAR-10 / CIFAR-100 / SVHN. To investigate this curious behavior, we focus analysis on flow-based generative models in particular since they are trained and evaluated via the exact marginal likelihood. We find such behavior persists even when we restrict the flows to constant-volume transformations. These transformations admit some theoretical analysis, and we show that the difference in likelihoods can be explained by the location and variances of the data and the model curvature. Our results caution against using the density estimates from deep generative models to identify inputs similar to the training distribution until their behavior for out-of-distribution inputs is better understood.",
"title": ""
},
{
"docid": "e32c8589a92a92ab8fd876bb760fb98e",
"text": "The importance of the social sciences for medical informatics is increasingly recognized. As ICT requires inter-action with people and thereby inevitably affects them, understanding ICT requires a focus on the interrelation between technology and its social environment. Sociotechnical approaches increase our understanding of how ICT applications are developed, introduced and become a part of social practices. Socio-technical approaches share several starting points: 1) they see health care work as a social, 'real life' phenomenon, which may seem 'messy' at first, but which is guided by a practical rationality that can only be overlooked at a high price (i.e. failed systems). 2) They see technological innovation as a social process, in which organizations are deeply affected. 3) Through in-depth, formative evaluation, they can help improve system design and implementation.",
"title": ""
},
{
"docid": "0ff27e119ec045674b9111bb5a9e5d29",
"text": "Description: This book provides an introduction to the complex field of ubiquitous computing Ubiquitous Computing (also commonly referred to as Pervasive Computing) describes the ways in which current technological models, based upon three base designs: smart (mobile, wireless, service) devices, smart environments (of embedded system devices) and smart interaction (between devices), relate to and support a computing vision for a greater range of computer devices, used in a greater range of (human, ICT and physical) environments and activities. The author details the rich potential of ubiquitous computing, the challenges involved in making it a reality, and the prerequisite technological infrastructure. Additionally, the book discusses the application and convergence of several current major and future computing trends.-Provides an introduction to the complex field of ubiquitous computing-Describes how current technology models based upon six different technology form factors which have varying degrees of mobility wireless connectivity and service volatility: tabs, pads, boards, dust, skins and clay, enable the vision of ubiquitous computing-Describes and explores how the three core designs (smart devices, environments and interaction) based upon current technology models can be applied to, and can evolve to, support a vision of ubiquitous computing and computing for the future-Covers the principles of the following current technology models, including mobile wireless networks, service-oriented computing, human computer interaction, artificial intelligence, context-awareness, autonomous systems, micro-electromechanical systems, sensors, embedded controllers and robots-Covers a range of interactions, between two or more UbiCom devices, between devices and people (HCI), between devices and the physical world.-Includes an accompanying website with PowerPoint slides, problems and solutions, exercises, bibliography and further reading Graduate students in computer science, electrical engineering and telecommunications courses will find this a fascinating and useful introduction to the subject. It will also be of interest to ICT professionals, software and network developers and others interested in future trends and models of computing and interaction over the next decades.",
"title": ""
},
{
"docid": "cff0b5c06b322c887aed9620afeac668",
"text": "In addition to providing substantial performance enhancements, future 5G networks will also change the mobile network ecosystem. Building on the network slicing concept, 5G allows to “slice” the network infrastructure into separate logical networks that may be operated independently and targeted at specific services. This opens the market to new players: the infrastructure provider, which is the owner of the infrastructure, and the tenants, which may acquire a network slice from the infrastructure provider to deliver a specific service to their customers. In this new context, we need new algorithms for the allocation of network resources that consider these new players. In this paper, we address this issue by designing an algorithm for the admission and allocation of network slices requests that (i) maximises the infrastructure provider's revenue and (ii) ensures that the service guarantees provided to tenants are satisfied. Our key contributions include: (i) an analytical model for the admissibility region of a network slicing-capable 5G Network, (ii) the analysis of the system (modelled as a Semi-Markov Decision Process) and the optimisation of the infrastructure provider's revenue, and (iii) the design of an adaptive algorithm (based on Q-learning) that achieves close to optimal performance.",
"title": ""
},
{
"docid": "b9720d1350bf89c8a94bb30276329ce2",
"text": "Generative concept representations have three major advantages over discriminative ones: they can represent uncertainty, they support integration of learning and reasoning, and they are good for unsupervised and semi-supervised learning. We discuss probabilistic and generative deep learning, which generative concept representations are based on, and the use of variational autoencoders and generative adversarial networks for learning generative concept representations, particularly for concepts whose data are sequences, structured data or graphs.",
"title": ""
},
{
"docid": "adad5599122e63cde59322b7ba46461b",
"text": "Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose to exploit proprioceptive motor signals to provide unsupervised regularization in convolutional neural networks to learn visual representations from egocentric video. Specifically, we enforce that our learned features exhibit equivariance i.e. they respond systematically to transformations associated with distinct ego-motions. With three datasets, we show that our unsupervised feature learning system significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in a disjoint domain.",
"title": ""
},
{
"docid": "1be35b9562a428a7581541559dc16bd8",
"text": "OBJECTIVE\nTo assess the effect of virtual reality training on an actual laparoscopic operation.\n\n\nDESIGN\nProspective randomised controlled and blinded trial.\n\n\nSETTING\nSeven gynaecological departments in the Zeeland region of Denmark.\n\n\nPARTICIPANTS\n24 first and second year registrars specialising in gynaecology and obstetrics.\n\n\nINTERVENTIONS\nProficiency based virtual reality simulator training in laparoscopic salpingectomy and standard clinical education (controls).\n\n\nMAIN OUTCOME MEASURE\nThe main outcome measure was technical performance assessed by two independent observers blinded to trainee and training status using a previously validated general and task specific rating scale. The secondary outcome measure was operation time in minutes.\n\n\nRESULTS\nThe simulator trained group (n=11) reached a median total score of 33 points (interquartile range 32-36 points), equivalent to the experience gained after 20-50 laparoscopic procedures, whereas the control group (n=10) reached a median total score of 23 (22-27) points, equivalent to the experience gained from fewer than five procedures (P<0.001). The median total operation time in the simulator trained group was 12 minutes (interquartile range 10-14 minutes) and in the control group was 24 (20-29) minutes (P<0.001). The observers' inter-rater agreement was 0.79.\n\n\nCONCLUSION\nSkills in laparoscopic surgery can be increased in a clinically relevant manner using proficiency based virtual reality simulator training. The performance level of novices was increased to that of intermediately experienced laparoscopists and operation time was halved. Simulator training should be considered before trainees carry out laparoscopic procedures.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT00311792.",
"title": ""
},
{
"docid": "a7accee00559a544a3715acacffdd37d",
"text": "Engagement is complex and multifaceted, but crucial to learning. Computerized learning environments can provide a superior learning experience for students by automatically detecting student engagement (and, thus also disengagement) and adapting to it. This paper describes results from several previous studies that utilized facial features to automatically detect student engagement, and proposes new methods to expand and improve results. Videos of students will be annotated by third-party observers as mind wandering (disengaged) or not mind wandering (engaged). Automatic detectors will also be trained to classify the same videos based on students' facial features, and compared to the machine predictions. These detectors will then be improved by engineering features to capture facial expressions noted by observers and more heavily weighting training instances that were exceptionally-well classified by observers. Finally, implications of previous results and proposed work are discussed.",
"title": ""
},
{
"docid": "c1338abb3ddd4acb1ba7ed7ac0c4452c",
"text": "Defect prediction models that are trained on class imbalanced datasets (i.e., the proportion of defective and clean modules is not equally represented) are highly susceptible to produce inaccurate prediction models. Prior research compares the impact of class rebalancing techniques on the performance of defect prediction models. Prior research efforts arrive at contradictory conclusions due to the use of different choice of datasets, classification techniques, and performance measures. Such contradictory conclusions make it hard to derive practical guidelines for whether class rebalancing techniques should be applied in the context of defect prediction models. In this paper, we investigate the impact of 4 popularly-used class rebalancing techniques on 10 commonly-used performance measures and the interpretation of defect prediction models. We also construct statistical models to better understand in which experimental design settings that class rebalancing techniques are beneficial for defect prediction models. Through a case study of 101 datasets that span across proprietary and open-source systems, we recommend that class rebalancing techniques are necessary when quality assurance teams wish to increase the completeness of identifying software defects (i.e., Recall). However, class rebalancing techniques should be avoided when interpreting defect prediction models. We also find that class rebalancing techniques do not impact the AUC measure. Hence, AUC should be used as a standard measure when comparing defect prediction models.",
"title": ""
}
] | scidocsrr |
9e44f01957f05b39a959becfb42b17e9 | Rainmakers: why bad weather means good productivity. | [
{
"docid": "13c6e4fc3a20528383ef7625c9dd2b79",
"text": "Seasonal affective disorder (SAD) is a syndrome characterized by recurrent depressions that occur annually at the same time each year. We describe 29 patients with SAD; most of them had a bipolar affective disorder, especially bipolar II, and their depressions were generally characterized by hypersomnia, overeating, and carbohydrate craving and seemed to respond to changes in climate and latitude. Sleep recordings in nine depressed patients confirmed the presence of hypersomnia and showed increased sleep latency and reduced slow-wave (delta) sleep. Preliminary studies in 11 patients suggest that extending the photoperiod with bright artificial light has an antidepressant effect.",
"title": ""
}
] | [
{
"docid": "1bdbfe7d11ca567adcce97a853761939",
"text": "Dynamic contrast enhanced MRI (DCE-MRI) is an emerging imaging protocol in locating, identifying and characterizing breast cancer. However, due to image artifacts in MR, pixel intensity alone cannot accurately characterize the tissue properties. We propose a robust method based on the temporal sequence of textural change and wavelet transform for pixel-by-pixel classification. We first segment the breast region using an active contour model. We then compute textural change on pixel blocks. We apply a three-scale discrete wavelet transform on the texture temporal sequence to further extract frequency features. We employ a progressive feature selection scheme and a committee of support vector machines for the classification. We trained the system on ten cases and tested it on eight independent test cases. Receiver-operating characteristics (ROC) analysis shows that the texture temporal sequence (Az: 0.966 and 0.949 in training and test) is much more effective than the intensity sequence (Az: 0.871 and 0.868 in training and test). The wavelet transform further improves the classification performance (Az: 0.989 and 0.984 in training and test).",
"title": ""
},
{
"docid": "345a59aac1e89df5402197cca90ca464",
"text": "Tony Velkov,* Philip E. Thompson, Roger L. Nation, and Jian Li* School of Medicine, Deakin University, Pigdons Road, Geelong 3217, Victoria, Australia, Medicinal Chemistry and Drug Action and Facility for Anti-infective Drug Development and Innovation, Drug Delivery, Disposition and Dynamics, Monash Institute of Pharmaceutical Sciences, Monash University, 381 Royal Parade, Parkville 3052, Victoria, Australia",
"title": ""
},
{
"docid": "ffca07962ddcdfa0d016df8020488b5d",
"text": "Differential-drive mobile robots are usually equipped with video-cameras for navigation purposes. In order to ensure proper operational capabilities of such systems, several calibration steps are required to estimate the following quantities: the video-camera intrinsic and extrinsic parameters, the relative pose between the camera and the vehicle frame and, finally, the odometric parameters of the vehicle. In this paper the simultaneous estimation of the above mentioned quantities is achieved by a systematic and effective calibration procedure that does not require any iterative step. The calibration procedure needs only on-board measurements given by the wheels encoders, the camera and a number of properly taken camera snapshots of a set of known landmarks. Numerical simulations and experimental results with a mobile robot Khepera III equipped with a low-cost camera confirm the effectiveness of the proposed technique.",
"title": ""
},
{
"docid": "019c27341b9811a7347467490cea6a72",
"text": "For intelligent robots to interact in meaningful ways with their environment, they must understand both the geometric and semantic properties of the scene surrounding them. The majority of research to date has addressed these mapping challenges separately, focusing on either geometric or semantic mapping. In this paper we address the problem of building environmental maps that include both semantically meaningful, object-level entities and point- or mesh-based geometrical representations. We simultaneously build geometric point cloud models of previously unseen instances of known object classes and create a map that contains these object models as central entities. Our system leverages sparse, feature-based RGB-D SLAM, image-based deep-learning object detection and 3D unsupervised segmentation.",
"title": ""
},
{
"docid": "68b15f0708c256d674f018b667f97bb5",
"text": "Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, control-flow integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: It is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions.",
"title": ""
},
{
"docid": "94160496e0a470dc278f71c67508ae21",
"text": "In this paper, we tackle the problem of co-localization in real-world images. Co-localization is the problem of simultaneously localizing (with bounding boxes) objects of the same class across a set of distinct images. Although similar problems such as co-segmentation and weakly supervised localization have been previously studied, we focus on being able to perform co-localization in real-world settings, which are typically characterized by large amounts of intra-class variation, inter-class diversity, and annotation noise. To address these issues, we present a joint image-box formulation for solving the co-localization problem, and show how it can be relaxed to a convex quadratic program which can be efficiently solved. We perform an extensive evaluation of our method compared to previous state-of-the-art approaches on the challenging PASCAL VOC 2007 and Object Discovery datasets. In addition, we also present a large-scale study of co-localization on ImageNet, involving ground-truth annotations for 3, 624 classes and approximately 1 million images.",
"title": ""
},
{
"docid": "f8724f8166eeb48461f9f4ac8fdd87d3",
"text": "The simultaneous use of images from different spectra can be helpful to improve the performance of many computer vision tasks. The core idea behind the usage of crossspectral approaches is to take advantage of the strengths of each spectral band providing a richer representation of a scene, which cannot be obtained with just images from one spectral band. In this work we tackle the cross-spectral image similarity problem by using Convolutional Neural Networks (CNNs). We explore three different CNN architectures to compare the similarity of cross-spectral image patches. Specifically, we train each network with images from the visible and the near-infrared spectrum, and then test the result with two public cross-spectral datasets. Experimental results show that CNN approaches outperform the current state-of-art on both cross-spectral datasets. Additionally, our experiments show that some CNN architectures are capable of generalizing between different crossspectral domains.",
"title": ""
},
{
"docid": "9b52a659fb6383e92c5968a082b01b71",
"text": "The internet of things (IoT) has a variety of application domains, including smart homes. This paper analyzes distinct IoT security and privacy features, including security requirements, threat models, and attacks from the smart home perspective. Further, this paper proposes an intelligent collaborative security management model to minimize security risk. The security challenges of the IoT for a smart home scenario are encountered, and a comprehensive IoT security management for smart homes has been proposed.",
"title": ""
},
{
"docid": "36142a4c0639662fe52dcc3fdf7b1ca4",
"text": "We present hierarchical change-detection tests (HCDTs), as effective online algorithms for detecting changes in datastreams. HCDTs are characterized by a hierarchical architecture composed of a detection layer and a validation layer. The detection layer steadily analyzes the input datastream by means of an online, sequential CDT, which operates as a low-complexity trigger that promptly detects possible changes in the process generating the data. The validation layer is activated when the detection one reveals a change, and performs an offline, more sophisticated analysis on recently acquired data to reduce false alarms. Our experiments show that, when the process generating the datastream is unknown, as it is mostly the case in the real world, HCDTs achieve a far more advantageous tradeoff between false-positive rate and detection delay than their single-layered, more traditional counterpart. Moreover, the successful interplay between the two layers permits HCDTs to automatically reconfigure after having detected and validated a change. Thus, HCDTs are able to reveal further departures from the postchange state of the data-generating process.",
"title": ""
},
{
"docid": "bf50151700f0e286ee5aa3a2bd74c249",
"text": "Computer systems that augment the process of finding the right expert for a given problem in an organization or world-wide are becoming feasible more than ever before, thanks to the prevalence of corporate Intranets and the Internet. This paper investigates such systems in two parts. We first explore the expert finding problem in depth, review and analyze existing systems in this domain, and suggest a domain model that can serve as a framework for design and development decisions. Based on our analyses of the problem and solution spaces, we then bring to light the gaps that remain to be addressed. Finally, we present our approach called DEMOIR, which is a modular architecture for expert finding systems that is based on a centralized expertise modeling server while also incorporating decentralized components for expertise information gathering and exploitation.",
"title": ""
},
{
"docid": "ae1f75aa978fd702be9b203487269517",
"text": "This paper presents a system that performs skill extraction from text documents. It outputs a list of professional skills that are relevant to a given input text. We argue that the system can be practical for hiring and management of personnel in an organization. We make use of the texts and the hyperlink graph of Wikipedia, as well as a list of professional skills obtained from the LinkedIn social network. The system is based on first computing similarities between an input document and the texts of Wikipedia pages and then using a biased, hub-avoiding version of the Spreading Activation algorithm on the Wikipedia graph in order to associate the input document with skills.",
"title": ""
},
{
"docid": "aa3be1c132e741d2c945213cfb0d96ad",
"text": "Collaborative filtering (CF) is one of the most successful recommendation approaches. It typically associates a user with a group of like-minded users based on their preferences over all the items, and recommends to the user those items enjoyed by others in the group. However we find that two users with similar tastes on one item subset may have totally different tastes on another set. In other words, there exist many user-item subgroups each consisting of a subset of items and a group of like-minded users on these items. It is more natural to make preference predictions for a user via the correlated subgroups than the entire user-item matrix. In this paper, to find meaningful subgroups, we formulate the Multiclass Co-Clustering (MCoC) problem and propose an effective solution to it. Then we propose an unified framework to extend the traditional CF algorithms by utilizing the subgroups information for improving their top-N recommendation performance. Our approach can be seen as an extension of traditional clustering CF models. Systematic experiments on three real world data sets have demonstrated the effectiveness of our proposed approach.",
"title": ""
},
{
"docid": "2d105fcec4109a6bc290c616938012f3",
"text": "One of the biggest challenges in automated driving is the ability to determine the vehicleâĂŹs location in realtime - a process known as self-localization or ego-localization. An automated driving system must be reliable under harsh conditions and environmental uncertainties (e.g. GPS denial or imprecision), sensor malfunction, road occlusions, poor lighting, and inclement weather. To cope with this myriad of potential problems, systems typically consist of a GPS receiver, in-vehicle sensors (e.g. cameras and LiDAR devices), and 3D High-Definition (3D HD) Maps. In this paper, we review state-of-the-art self-localization techniques, and present a benchmark for the task of image-based vehicle self-localization. Our dataset was collected on 10km of the Warren Freeway in the San Francisco Area under reasonable traffic and weather conditions. As input to the localization process, we provide timestamp-synchronized, consumer-grade monocular video frames (with camera intrinsic parameters), consumer-grade GPS trajectory, and production-grade 3D HD Maps. For evaluation, we provide survey-grade GPS trajectory. The goal of this dataset is to standardize and formalize the challenge of accurate vehicle self-localization and provide a benchmark to develop and evaluate algorithms.",
"title": ""
},
{
"docid": "592431c03450be59f10e56dcabed0ebf",
"text": "Recent advances in machine learning have led to innovative applications and services that use computational structures to reason about complex phenomenon. Over the past several years, the security and machine-learning communities have developed novel techniques for constructing adversarial samples--malicious inputs crafted to mislead (and therefore corrupt the integrity of) systems built on computationally learned models. The authors consider the underlying causes of adversarial samples and the future countermeasures that might mitigate them.",
"title": ""
},
{
"docid": "98f8994f1ad9315f168878ff40c29afc",
"text": "OBJECTIVE\nSuicide remains a major global public health issue for young people. The reach and accessibility of online and social media-based interventions herald a unique opportunity for suicide prevention. To date, the large body of research into suicide prevention has been undertaken atheoretically. This paper provides a rationale and theoretical framework (based on the interpersonal theory of suicide), and draws on our experiences of developing and testing online and social media-based interventions.\n\n\nMETHOD\nThe implementation of three distinct online and social media-based intervention studies, undertaken with young people at risk of suicide, are discussed. We highlight the ways that these interventions can serve to bolster social connectedness in young people, and outline key aspects of intervention implementation and moderation.\n\n\nRESULTS\nInsights regarding the implementation of these studies include careful protocol development mindful of risk and ethical issues, establishment of suitably qualified teams to oversee development and delivery of the intervention, and utilisation of key aspects of human support (i.e., moderation) to encourage longer-term intervention engagement.\n\n\nCONCLUSIONS\nOnline and social media-based interventions provide an opportunity to enhance feelings of connectedness in young people, a key component of the interpersonal theory of suicide. Our experience has shown that such interventions can be feasibly and safely conducted with young people at risk of suicide. Further studies, with controlled designs, are required to demonstrate intervention efficacy.",
"title": ""
},
{
"docid": "36af986f61252f221a8135e80fe6432d",
"text": "This chapter considers a set of questions at the interface of the study of intuitive theories, causal knowledge, and problems of inductive inference. By an intuitive theory, we mean a cognitive structure that in some important ways is analogous to a scientific theory. It is becoming broadly recognized that intuitive theories play essential roles in organizing our most basic knowledge of the world, particularly for causal structures in physical, biological, psychological or social domains (Atran, 1995; Carey, 1985a; Kelley, 1973; McCloskey, 1983; Murphy & Medin, 1985; Nichols & Stich, 2003). A principal function of intuitive theories in these domains is to support the learning of new causal knowledge: generating and constraining people’s hypotheses about possible causal relations, highlighting variables, actions and observations likely to be informative about those hypotheses, and guiding people’s interpretation of the data they observe (Ahn & Kalish, 2000; Pazzani, 1987; Pazzani, Dyer & Flowers, 1986; Waldmann, 1996). Leading accounts of cognitive development argue for the importance of intuitive theories in children’s mental lives and frame the major transitions of cognitive development as instances of theory change (Carey, 1985a; Gopnik & Meltzoff, 1997; Inagaki & Hatano 2002; Wellman & Gelman, 1992). Here we attempt to lay out some prospects for understanding the structure, function, and acquisition of intuitive theories from a rational computational perspective. From this viewpoint, theory-like representations are not just a convenient way of summarizing certain aspects of human knowledge. They provide crucial foundations for successful learning and reasoning, and we want to understand how they do so. With this goal in mind, we focus on",
"title": ""
},
{
"docid": "45f120b05b3c48cd95d5dd55031987cb",
"text": "n engl j med 359;6 www.nejm.org august 7, 2008 628 From the Department of Medicine (O.O.F., E.S.A.) and the Division of Infectious Diseases (P.A.M.), Johns Hopkins Bayview Medical Center, Johns Hopkins School of Medicine, Baltimore; the Division of Infectious Diseases (D.R.K.) and the Division of General Medicine (S.S.), University of Michigan Medical School, Ann Arbor; and the Department of Veterans Affairs Health Services Research and Development Center of Excellence, Ann Arbor, MI (S.S.). Address reprint requests to Dr. Antonarakis at the Johns Hopkins Bayview Medical Center, Department of Medicine, B-1 North, 4940 Eastern Ave., Baltimore, MD 21224, or at eantona1@ jhmi.edu.",
"title": ""
},
{
"docid": "d11a113fdb0a30e2b62466c641e49d6d",
"text": "Apache Spark has emerged as the de facto framework for big data analytics with its advanced in-memory programming model and upper-level libraries for scalable machine learning, graph analysis, streaming and structured data processing. It is a general-purpose cluster computing framework with language-integrated APIs in Scala, Java, Python and R. As a rapidly evolving open source project, with an increasing number of contributors from both academia and industry, it is difficult for researchers to comprehend the full body of development and research behind Apache Spark, especially those who are beginners in this area. In this paper, we present a technical review on big data analytics using Apache Spark. This review focuses on the key components, abstractions and features of Apache Spark. More specifically, it shows what Apache Spark has for designing and implementing big data algorithms and pipelines for machine learning, graph analysis and stream processing. In addition, we highlight some research and development directions on Apache Spark for big data analytics.",
"title": ""
},
{
"docid": "5e124199283b333e9b12877fd69dd051",
"text": "One of the major concerns of Integrated Traffic Management System (ITMS) in India is the identification of vehicles violating the stop-line at a road crossing. A large number of Indian vehicles do not stop at the designated stop-line and pose serious threat to the pedestrians crossing the roads. The current work reports the technicalities of the i $$ i $$ LPR (Indian License Plate Recognition) system implemented at five busy road-junctions in one populous metro city in India. The designed system is capable of localizing single line and two-line license plates of various sizes and shapes, recognizing characters of standard/ non-standard fonts and performing seamlessly in varying weather conditions. The performance of the system is evaluated with a large database of images for different environmental conditions. We have published a limited database of Indian vehicle images in http://code.google.com/p/cmaterdb/ for non-commercial use by fellow researchers. Despite unparallel complexity in the Indian city-traffic scenario, we have achieved around 92 % plate localization accuracy and 92.75 % plate level recognition accuracy over the localized vehicle images.",
"title": ""
},
{
"docid": "9cb28706a45251e3d2fb5af64dd9351f",
"text": "This article proposes an informational perspective on comparison consequences in social judgment. It is argued that to understand the variable consequences of comparison, one has to examine what target knowledge is activated during the comparison process. These informational underpinnings are conceptualized in a selective accessibility model that distinguishes 2 fundamental comparison processes. Similarity testing selectively makes accessible knowledge indicating target-standard similarity, whereas dissimilarity testing selectively makes accessible knowledge indicating target-standard dissimilarity. These respective subsets of target knowledge build the basis for subsequent target evaluations, so that similarity testing typically leads to assimilation whereas dissimilarity testing typically leads to contrast. The model is proposed as a unifying conceptual framework that integrates diverse findings on comparison consequences in social judgment.",
"title": ""
}
] | scidocsrr |
1b0fabf5c29000d15c6e1b2dd6eba2cc | Photometric stereo and weather estimation using internet images | [
{
"docid": "5cfc4911a59193061ab55c2ce5013272",
"text": "What can you do with a million images? In this paper, we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless, but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks, we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data driven, requiring no annotations or labeling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of image completions and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches.",
"title": ""
},
{
"docid": "f085832faf1a2921eedd3d00e8e592db",
"text": "There are billions of photographs on the Internet, comprising the largest and most diverse photo collection ever assembled. How can computer vision researchers exploit this imagery? This paper explores this question from the standpoint of 3D scene modeling and visualization. We present structure-from-motion and image-based rendering algorithms that operate on hundreds of images downloaded as a result of keyword-based image search queries like “Notre Dame” or “Trevi Fountain.” This approach, which we call Photo Tourism, has enabled reconstructions of numerous well-known world sites. This paper presents these algorithms and results as a first step towards 3D modeling of the world’s well-photographed sites, cities, and landscapes from Internet imagery, and discusses key open problems and challenges for the research community.",
"title": ""
},
{
"docid": "1b6ddffacc50ad0f7e07675cfe12c282",
"text": "Realism in computer-generated images requires accurate input models for lighting, textures and BRDFs. One of the best ways of obtaining high-quality data is through measurements of scene attributes from real photographs by inverse rendering. However, inverse rendering methods have been largely limited to settings with highly controlled lighting. One of the reasons for this is the lack of a coherent mathematical framework for inverse rendering under general illumination conditions. Our main contribution is the introduction of a signal-processing framework which describes the reflected light field as a convolution of the lighting and BRDF, and expresses it mathematically as a product of spherical harmonic coefficients of the BRDF and the lighting. Inverse rendering can then be viewed as deconvolution. We apply this theory to a variety of problems in inverse rendering, explaining a number of previous empirical results. We will show why certain problems are ill-posed or numerically ill-conditioned, and why other problems are more amenable to solution. The theory developed here also leads to new practical representations and algorithms. For instance, we present a method to factor the lighting and BRDF from a small number of views, i.e. to estimate both simultaneously when neither is known.",
"title": ""
}
] | [
{
"docid": "362b1a5119733eba058d1faab2d23ebf",
"text": "§ Mission and structure of the project. § Overview of the Stone Man version of the Guide to the SWEBOK. § Status and development process of the Guide. § Applications of the Guide in the fields of education, human resource management, professional development and licensing and certification. § Class exercise in applying the Guide to defining the competencies needed to support software life cycle process deployment. § Strategy for uptake and promotion of the Guide. § Discussion of promotion, trial usage and experimentation. Workshop Leaders:",
"title": ""
},
{
"docid": "f7ce2995fc0369fb8198742a5f1fefa3",
"text": "In this paper, we present a novel method for multimodal gesture recognition based on neural networks. Our multi-stream recurrent neural network (MRNN) is a completely data-driven model that can be trained from end to end without domain-specific hand engineering. The MRNN extends recurrent neural networks with Long Short-Term Memory cells (LSTM-RNNs) that facilitate the handling of variable-length gestures. We propose a recurrent approach for fusing multiple temporal modalities using multiple streams of LSTM-RNNs. In addition, we propose alternative fusion architectures and empirically evaluate the performance and robustness of these fusion strategies. Experimental results demonstrate that the proposed MRNN outperforms other state-of-theart methods in the Sheffield Kinect Gesture (SKIG) dataset, and has significantly high robustness to noisy inputs.",
"title": ""
},
{
"docid": "43baeb87f1798d52399ba8c78ffa7fef",
"text": "ECONOMISTS are frequently asked to measure the effects of an economic event on the value of firms. On the surface this seems like a difficult task, but a measure can be constructed easily using an event study. Using financial market data, an event study measures the impact of a specific event on the value of a firm. The usefulness of such a study comes from the fact that, given rationality in the marketplace, the effects of an event will be reflected immediately in security prices. Thus a measure of the event’s economic impact can be constructed using security prices observed over a relatively short time period. In contrast, direct productivity related measures may require many months or even years of observation. The event study has many applications. In accounting and finance research, event studies have been applied to a variety of firm specific and economy wide events. Some examples include mergers and acquisitions, earnings announcements, issues of new debt or equity, and announcements of macroeconomic variables such as the trade deficit.1 However, applications in other fields are also abundant. For example, event studies are used in the field of law and economics to measure the impact on the value of a firm of a change in the regulatory environment (see G. William Schwert 1981) and in legal liability cases event studies are used to assess damages (see Mark Mitchell and Jeffry Netter 1994). In the majority of applications, the focus is the effect of an event on the price of a particular class of securities of the firm, most often common equity. In this paper the methodology is discussed in terms of applications that use common equity. However, event studies can be applied using debt securities with little modification. Event studies have a long history. Perhaps the first published study is James Dolley (1933). In this work, he examines the price effects of stock splits, studying nominal price changes at the time of the split. Using a sample of 95 splits from 1921 to 1931, he finds that the price in-",
"title": ""
},
{
"docid": "97decda9a345d39e814e19818eebe8b8",
"text": "In this review article, we present some challenges and opportunities in Ambient Assisted Living (AAL) for disabled and elderly people addressing various state of the art and recent approaches particularly in artificial intelligence, biomedical engineering, and body sensor networking.",
"title": ""
},
{
"docid": "7bea13124037f4e21b918f08c81b9408",
"text": "U.S. health care system is plagued by rising cost and limited access. While the cost of care is increasing faster than the rate of inflation, people living in rural areas have very limited access to quality health care due to a shortage of physicians and facilities in these areas. Information and communication technologies in general and telemedicine in particular offer great promise to extend quality care to underserved rural communities at an affordable cost. However, adoption of telemedicine among the various stakeholders of the health care system has not been very encouraging. Based on an analysis of the extant research literature, this study identifies critical factors that impede the adoption of telemedicine, and offers suggestions to mitigate these challenges.",
"title": ""
},
{
"docid": "a2f46b51b65c56acf6768f8e0d3feb79",
"text": "In this paper we introduce Linear Relational Embedding as a means of learning a distributed representation of concepts from data consisting of binary relations between concepts. The key idea is to represent concepts as vectors, binary relations as matrices, and the operation of applying a relation to a concept as a matrix-vector multiplication that produces an approximation to the related concept. A representation for concepts and relations is learned by maximizing an appropriate discriminative goodness function using gradient ascent. On a task involving family relationships, learning is fast and leads to good generalization. Learning Distributed Representations of Concepts using Linear Relational Embedding Alberto Paccanaro Geoffrey Hinton Gatsby Unit",
"title": ""
},
{
"docid": "50dc3186ad603ef09be8cca350ff4d77",
"text": "Design iteration time in SoC design flow is reduced through performance exploration at a higher level of abstraction. This paper proposes an accurate and fast performance analysis method in early stage of design process using a behavioral model written in C/C++ language. We made a cycle-accurate but fast and flexible compiled instruction set simulator (ISS) and IP models that represent hardware functionality and performance. System performance analyzer configured by the target communication architecture analyzes the performance utilizing event-traces obtained by running the ISS and IP models. This solution is automated and implemented in the tool, HIPA. We obtain diverse performance profiling results and achieve 95% accuracy using an abstracted C model. We also achieve about 20 times speed-up over corresponding co-simulation tools.",
"title": ""
},
{
"docid": "50b6f8067784fe4b9b3adf6db17ab4d1",
"text": "Available online 23 November 2012",
"title": ""
},
{
"docid": "e3e024fa2ee468fb2a64bfc8ddf69467",
"text": "We used two methods to estimate short-wave (S) cone spectral sensitivity. Firstly, we measured S-cone thresholds centrally and peripherally in five trichromats, and in three blue-cone monochromats, who lack functioning middle-wave (M) and long-wave (L) cones. Secondly, we analyzed standard color-matching data. Both methods yielded equivalent results, on the basis of which we propose new S-cone spectral sensitivity functions. At short and middle-wavelengths, our measurements are consistent with the color matching data of Stiles and Burch (1955, Optica Acta, 2, 168-181; 1959, Optica Acta, 6, 1-26), and other psychophysically measured functions, such as pi 3 (Stiles, 1953, Coloquio sobre problemas opticos de la vision, 1, 65-103). At longer wavelengths, S-cone sensitivity has previously been over-estimated.",
"title": ""
},
{
"docid": "f159ee79d20f00194402553758bcd031",
"text": "Recently, narrowband Internet of Things (NB-IoT), one of the most promising low power wide area (LPWA) technologies, has attracted much attention from both academia and industry. It has great potential to meet the huge demand for machine-type communications in the era of IoT. To facilitate research on and application of NB-IoT, in this paper, we design a system that includes NB devices, an IoT cloud platform, an application server, and a user app. The core component of the system is to build a development board that integrates an NB-IoT communication module and a subscriber identification module, a micro-controller unit and power management modules. We also provide a firmware design for NB device wake-up, data sensing, computing and communication, and the IoT cloud configuration for data storage and analysis. We further introduce a framework on how to apply the proposed system to specific applications. The proposed system provides an easy approach to academic research as well as commercial applications.",
"title": ""
},
{
"docid": "a036dd162a23c5d24125d3270e22aaf7",
"text": "1 Problem Description This work is focused on the relationship between the news articles (breaking news) and stock prices. The student will design and develop methods to analyze how and when the news articles influence the stock market. News articles about Norwegian oil related companies and stock prices from \" BW Offshore Limited \" (BWO), \" DNO International \" (DNO), \" Frontline \" (FRO), \" Petroleum Geo-Services \" (PGS), \" Seadrill \" (SDRL), \" Sevan Marine \" (SEVAN), \" Siem Offshore \" (SIOFF), \" Statoil \" (STL) and \" TGS-NOPEC Geophysical Company \" (TGS) will be crawled, preprocessed and the important features in the text will be extracted to effectively represent the news in a form that allows the application of computational techniques. This data will then be used to train text sense classifiers. A prototype system that employs such classifiers will be developed to support the trader in taking sell/buy decisions. Methods will be developed for automaticall sense-labeling of news that are informed by the correlation between the changes in the stock prices and the breaking news. Performance of the prototype decision support system will be compared with a chosen baseline method for trade-related decision making. Abstract This thesis investigates the prediction of possible stock price changes immediately after news article publications. This is done by automatic analysis of these news articles. Some background information about financial trading theory and text mining is given in addition to an overview of earlier related research in the field of automatic news article analyzes with the purpose of predicting future stock prices. In this thesis a system is designed and implemented to predict stock price trends for the time immediately after the publication of news articles. This system consists mainly of four components. The first component gathers news articles and stock prices automatically from internet. The second component prepares the news articles by sending them to some document preprocessing steps and finding relevant features before they are sent to a document representation process. The third component categorizes the news articles into predefined categories, and finally the fourth component applies appropriate trading strategies depending on the category of the news article. This system requires a labeled data set to train the categorization component. This data set is labeled automatically on the basis of the price trends directly after the news article publication. An additional label refining step using clustering is added in an …",
"title": ""
},
{
"docid": "4387549562fe2c0833b002d73d9a8330",
"text": "Complex numbers have long been favoured for digital signal processing, yet complex representations rarely appear in deep learning architectures. RNNs, widely used to process time series and sequence information, could greatly benefit from complex representations. We present a novel complex gate recurrent cell. When used together with norm-preserving state transition matrices, our complex gated RNN exhibits excellent stability and convergence properties. We demonstrate competitive performance of our complex gated RNN on the synthetic memory and adding task, as well as on the real-world task of human motion prediction.",
"title": ""
},
{
"docid": "9cbd8a5ac00fc940baa63cf0fb4d2220",
"text": "— The paper presents a technique for anomaly detection in user behavior in a smart-home environment. Presented technique can be used for a service that learns daily patterns of the user and proactively detects unusual situations. We have identified several drawbacks of previously presented models such as: just one type of anomaly-inactivity, intricate activity classification into hierarchy, detection only on a daily basis. Our novelty approach desists these weaknesses, provides additional information if the activity is unusually short/long, at unusual location. It is based on a semi-supervised clustering model that utilizes the neural network Self-Organizing Maps. The input to the system represents data primarily from presence sensors, however also other sensors with binary output may be used. The experimental study is realized on both synthetic data and areal database collected in our own smart-home installation for the period of two months.",
"title": ""
},
{
"docid": "c751115c128fd0776baf212ae19624ff",
"text": "This paper presents a natural language interface to relational database. It introduces some classical NLDBI products and their applications and proposes the architecture of a new NLDBI system including its probabilistic context free grammar, the inside and outside probabilities which can be used to construct the parse tree, an algorithm to calculate the probabilities, and the usage of dependency structures and verb subcategorization in analyzing the parse tree. Some experiment results are given to conclude the paper.",
"title": ""
},
{
"docid": "7d11d25dc6cd2822d7f914b11b7fe640",
"text": "The authors analyze three critical components in training word embeddings: model, corpus, and training parameters. They systematize existing neural-network-based word embedding methods and experimentally compare them using the same corpus. They then evaluate each word embedding in three ways: analyzing its semantic properties, using it as a feature for supervised tasks, and using it to initialize neural networks. They also provide several simple guidelines for training good word embeddings.",
"title": ""
},
{
"docid": "a23949a678e49a7e1495d98aae3adef2",
"text": "The continued increase in the usage of Small Scale Digital Devices (SSDDs) to browse the web has made mobile devices a rich potential for digital evidence. Issues may arise when suspects attempt to hide their browsing habits using applications like Orweb - which intends to anonymize network traffic as well as ensure that no browsing history is saved on the device. In this work, the researchers conducted experiments to examine if digital evidence could be reconstructed when the Orweb browser is used as a tool to hide web browsing activates on an Android smartphone. Examinations were performed on both a non-rooted and a rooted Samsung Galaxy S2 smartphone running Android 2.3.3. The results show that without rooting the device, no private web browsing traces through Orweb were found. However, after rooting the device, the researchers were able to locate Orweb browser history, and important corroborative digital evidence was found.",
"title": ""
},
{
"docid": "4b6755737ad43dec49e470220a24236a",
"text": "We address the issue of automatically extracting rhythm descriptors from audio signals, to be eventually used in content-based musical applications such as in the context of MPEG7. Our aim is to approach the comprehension of auditory scenes in raw polyphonic audio signals without preliminary source separation. As a first step towards the automatic extraction of rhythmic structures out of signals taken from the popular music repertoire, we propose an approach for automatically extracting time indexes of occurrences of different percussive timbres in an audio signal. Within this framework, we found that a particular issue lies in the classification of percussive sounds. In this paper, we report on the method currently used to deal with this problem.",
"title": ""
},
{
"docid": "b1a538752056e91fd5800911f36e6eb0",
"text": "BACKGROUND\nThe current, so-called \"Millennial\" generation of learners is frequently characterized as having deep understanding of, and appreciation for, technology and social connectedness. This generation of learners has also been molded by a unique set of cultural influences that are essential for medical educators to consider in all aspects of their teaching, including curriculum design, student assessment, and interactions between faculty and learners.\n\n\nAIM\n The following tips outline an approach to facilitating learning of our current generation of medical trainees.\n\n\nMETHOD\n The method is based on the available literature and the authors' experiences with Millennial Learners in medical training.\n\n\nRESULTS\n The 12 tips provide detailed approaches and specific strategies for understanding and engaging Millennial Learners and enhancing their learning.\n\n\nCONCLUSION\n With an increased understanding of the characteristics of the current generation of medical trainees, faculty will be better able to facilitate learning and optimize interactions with Millennial Learners.",
"title": ""
}
] | scidocsrr |
43eb39b8a39919d4867a75fa54b29c66 | Predicting Suicidal Behavior From Longitudinal Electronic Health Records. | [
{
"docid": "1c9644fa4e259da618d5371512f1e73d",
"text": "Suicidal behavior is a leading cause of injury and death worldwide. Information about the epidemiology of such behavior is important for policy-making and prevention. The authors reviewed government data on suicide and suicidal behavior and conducted a systematic review of studies on the epidemiology of suicide published from 1997 to 2007. The authors' aims were to examine the prevalence of, trends in, and risk and protective factors for suicidal behavior in the United States and cross-nationally. The data revealed significant cross-national variability in the prevalence of suicidal behavior but consistency in age of onset, transition probabilities, and key risk factors. Suicide is more prevalent among men, whereas nonfatal suicidal behaviors are more prevalent among women and persons who are young, are unmarried, or have a psychiatric disorder. Despite an increase in the treatment of suicidal persons over the past decade, incidence rates of suicidal behavior have remained largely unchanged. Most epidemiologic research on suicidal behavior has focused on patterns and correlates of prevalence. The next generation of studies must examine synergistic effects among modifiable risk and protective factors. New studies must incorporate recent advances in survey methods and clinical assessment. Results should be used in ongoing efforts to decrease the significant loss of life caused by suicidal behavior.",
"title": ""
}
] | [
{
"docid": "eb847700cef64d89b88ff57fef9fae4b",
"text": "Software Defined Networking (SDN) is a new programmable network construction technology that enables centrally management and control, which is considered to be the future evolution trend of networks. A modularized carrier-grade SDN controller according to the characteristics of carrier-grade networks is designed and proposed, resolving the problem of controlling large-scale networks of carrier. The modularized architecture offers the system flexibility, scalability and stability. Functional logic of modules and core modules, such as link discovery module and topology module, are designed to meet the carrier's need. Static memory allocation, multi-threads technique and stick-package processing are used to improve the performance of controller, which is C programming language based. Processing logic of the communication mechanism of the controller is introduced, proving that the controller conforms to the OpenFlow specification and has a good interaction with OpenFlow-based switches. A controller cluster management system is used to interact with controllers through the east-west interface in order to manage large-scale networks. Furthermore, the effectiveness and high performance of the work in this paper has been verified by the testing using Cbench testing program. Moreover, the SDN controller we proposed has been running in China Telecom's Cloud Computing Key Laboratory, which showed the good results is achieved.",
"title": ""
},
{
"docid": "7be1f8be2c74c438b1ed1761e157d3a3",
"text": "The feeding behavior and digestive physiology of the sea cucumber, Apostichopus japonicus are not well understood. A better understanding may provide useful information for the development of the aquaculture of this species. In this article the tentacle locomotion, feeding rhythms, ingestion rate (IR), feces production rate (FPR) and digestive enzyme activities were studied in three size groups (small, medium and large) of sea cucumber under a 12h light/12h dark cycle. Frame-by-frame video analysis revealed that all size groups had similar feeding strategies using a grasping motion to pick up sediment particles. The tentacle insertion rates of the large size group were significantly faster than those of the small and medium-sized groups (P<0.05). Feeding activities investigated by charge coupled device cameras with infrared systems indicated that all size groups of sea cucumber were nocturnal and their feeding peaks occurred at 02:00-04:00. The medium and large-sized groups also had a second feeding peak during the day. Both IR and FPR in all groups were significantly higher at night than those during the daytime (P<0.05). Additionally, the peak activities of digestive enzymes were 2-4h earlier than the peak of feeding. Taken together, these results demonstrated that the light/dark cycle was a powerful environment factor that influenced biological rhythms of A. japonicus, which had the ability to optimize the digestive processes for a forthcoming ingestion.",
"title": ""
},
{
"docid": "447c008d30a6f86830d49bd74bd7a551",
"text": "OBJECTIVES\nTo investigate the effects of 24 weeks of whole-body-vibration (WBV) training on knee-extension strength and speed of movement and on counter-movement jump performance in older women.\n\n\nDESIGN\nA randomized, controlled trial.\n\n\nSETTING\nExercise Physiology and Biomechanics Laboratory, Leuven, Belgium.\n\n\nPARTICIPANTS\nEighty-nine postmenopausal women, off hormone replacement therapy, aged 58 to 74, were randomly assigned to a WBV group (n=30), a resistance-training group (RES, n=30), or a control group (n=29).\n\n\nINTERVENTION\nThe WBV group and the RES group trained three times a week for 24 weeks. The WBV group performed unloaded static and dynamic knee-extensor exercises on a vibration platform, which provokes reflexive muscle activity. The RES group trained knee-extensors by performing dynamic leg-press and leg-extension exercises increasing from low (20 repetitions maximum (RM)) to high (8RM) resistance. The control group did not participate in any training.\n\n\nMEASUREMENTS\nPre-, mid- (12 weeks), and post- (24 weeks) isometric strength and dynamic strength of knee extensors were measured using a motor-driven dynamometer. Speed of movement of knee extension was assessed using an external resistance equivalent to 1%, 20%, 40%, and 60% of isometric maximum. Counter-movement jump performance was determined using a contact mat.\n\n\nRESULTS\nIsometric and dynamic knee extensor strength increased significantly (P<.001) in the WBV group (mean+/-standard error 15.0+/-2.1% and 16.1+/-3.1%, respectively) and the RES group (18.4+/-2.8% and 13.9+/-2.7%, respectively) after 24 weeks of training, with the training effects not significantly different between the groups (P=.558). Speed of movement of knee extension significantly increased at low resistance (1% or 20% of isometric maximum) in the WBV group only (7.4+/-1.8% and 6.3+/-2.0%, respectively) after 24 weeks of training, with no significant differences in training effect between the WBV and the RES groups (P=.391; P=.142). Counter-movement jump height enhanced significantly (P<.001) in the WBV group (19.4+/-2.8%) and the RES group (12.9+/-2.9%) after 24 weeks of training. Most of the gain in knee-extension strength and speed of movement and in counter-movement jump performance had been realized after 12 weeks of training.\n\n\nCONCLUSION\nWBV is a suitable training method and is as efficient as conventional RES training to improve knee-extension strength and speed of movement and counter-movement jump performance in older women. As previously shown in young women, it is suggested that the strength gain in older women is mainly due to the vibration stimulus and not only to the unloaded exercises performed on the WBV platform.",
"title": ""
},
{
"docid": "0574f193736e10b13a22da2d9c0ee39a",
"text": "Preliminary communication In food production industry, forecasting the timing of demands is crucial in planning production scheduling to satisfy customer needs on time. In the literature, several statistical models have been used in demand forecasting in Food and Beverage (F&B) industry and the choice of the most suitable forecasting model remains a central concern. In this context, this article aims to compare the performances between Trend Analysis, Decomposition and Holt-Winters (HW) models for the prediction of a time series formed by a group of jam and sherbet product demands. Data comprised the series of monthly sales from January 2013 to December 2014 obtained from a private company. As performance measures, metric analysis of the Mean Absolute Percentage Error (MAPE) is used. In this study, the HW and Decomposition models obtained better results regarding the performance metrics.",
"title": ""
},
{
"docid": "33db7ac45c020d2a9e56227721b0be70",
"text": "This thesis proposes an extended version of the Combinatory Categorial Grammar (CCG) formalism, with the following features: 1. grammars incorporate inheritance hierarchies of lexical types, defined over a simple, feature-based constraint language 2. CCG lexicons are, or at least can be, functions from forms to these lexical types This formalism, which I refer to as ‘inheritance-driven’ CCG (I-CCG), is conceptualised as a partially model-theoretic system, involving a distinction between category descriptions and their underlying category models, with these two notions being related by logical satisfaction. I argue that the I-CCG formalism retains all the advantages of both the core CCG framework and proposed generalisations involving such things as multiset categories, unary modalities or typed feature structures. In addition, I-CCG: 1. provides non-redundant lexicons for human languages 2. captures a range of well-known implicational word order universals in terms of an acquisition-based preference for shorter grammars This thesis proceeds as follows: Chapter 2 introduces the ‘baseline’ CCG formalism, which incorporates just the essential elements of category notation, without any of the proposed extensions. Chapter 3 reviews parts of the CCG literature dealing with linguistic competence in its most general sense, showing how the formalism predicts a number of language universals in terms of either its restricted generative capacity or the prioritisation of simpler lexicons. Chapter 4 analyses the first motivation for generalising the baseline category notation, demonstrating how certain fairly simple implicational word order universals are not formally predicted by baseline CCG, although they intuitively do involve considerations of grammatical economy. Chapter 5 examines the second motivation underlying many of the customised CCG category notations — to reduce lexical redundancy, thus allowing for the construction of lexicons which assign (each sense of) open class words and morphemes to no more than one lexical category, itself denoted by a non-composite lexical type.",
"title": ""
},
{
"docid": "2313822a08269b3dd125190c4874b808",
"text": "General-purpose knowledge bases are increasingly growing in terms of depth (content) and width (coverage). Moreover, algorithms for entity linking and entity retrieval have improved tremendously in the past years. These developments give rise to a new line of research that exploits and combines these developments for the purposes of text-centric information retrieval applications. This tutorial focuses on a) how to retrieve a set of entities for an ad-hoc query, or more broadly, assessing relevance of KB elements for the information need, b) how to annotate text with such elements, and c) how to use this information to assess the relevance of text. We discuss different kinds of information available in a knowledge graph and how to leverage each most effectively.\n We start the tutorial with a brief overview of different types of knowledge bases, their structure and information contained in popular general-purpose and domain-specific knowledge bases. In particular, we focus on the representation of entity-centric information in the knowledge base through names, terms, relations, and type taxonomies. Next, we will provide a recap on ad-hoc object retrieval from knowledge graphs as well as entity linking and retrieval. This is essential technology, which the remainder of the tutorial builds on. Next we will cover essential components within successful entity linking systems, including the collection of entity name information and techniques for disambiguation with contextual entity mentions. We will present the details of four previously proposed systems that successfully leverage knowledge bases to improve ad-hoc document retrieval. These systems combine the notion of entity retrieval and semantic search on one hand, with text retrieval models and entity linking on the other. Finally, we also touch on entity aspects and links in the knowledge graph as it can help to understand the entities' context.\n This tutorial is the first to compile, summarize, and disseminate progress in this emerging area and we provide both an overview of state-of-the-art methods and outline open research problems to encourage new contributions.",
"title": ""
},
{
"docid": "537d6fdfb26e552fb3254addfbb6ac49",
"text": "We propose a unified framework for building unsupervised representations of entities and their compositions, by viewing each entity as a histogram (or distribution) over its contexts. This enables us to take advantage of optimal transport and construct representations that effectively harness the geometry of the underlying space containing the contexts. Our method captures uncertainty via modelling the entities as distributions and simultaneously provides interpretability with the optimal transport map, hence giving a novel perspective for building rich and powerful feature representations. As a guiding example, we formulate unsupervised representations for text, and demonstrate it on tasks such as sentence similarity and word entailment detection. Empirical results show strong advantages gained through the proposed framework. This approach can potentially be used for any unsupervised or supervised problem (on text or other modalities) with a co-occurrence structure, such as any sequence data. The key tools at the core of this framework are Wasserstein distances and Wasserstein barycenters, hence raising the question from our title.",
"title": ""
},
{
"docid": "4f0d34e830387947f807213599d47652",
"text": "An essential feature of large scale free graphs, such as the Web, protein-to-protein interaction, brain connectivity, and social media graphs, is that they tend to form recursive communities. The latter are densely connected vertex clusters exhibiting quick local information dissemination and processing. Under the fuzzy graph model vertices are fixed while each edge exists with a given probability according to a membership function. This paper presents Fuzzy Walktrap and Fuzzy Newman-Girvan, fuzzy versions of two established community discovery algorithms. The proposed algorithms have been applied to a synthetic graph generated by the Kronecker model with different termination criteria and the results are discussed. Keywords-Fuzzy graphs; Membership function; Community detection; Termination criteria; Walktrap algorithm; NewmanGirvan algorithm; Edge density; Kronecker model; Large graph analytics; Higher order data",
"title": ""
},
{
"docid": "ca2e577e819ac49861c65bfe8d26f5a1",
"text": "A design of a delay based self-oscillating class-D power amplifier for piezoelectric actuators is presented and modelled. First order and second order configurations are discussed in detail and analytical results reveal the stability criteria of a second order system, which should be respected in the design. It also shows if the second order system converges, it will tend to give a correct pulse modulation regarding to the input modulation index. Experimental results show the effectiveness of this design procedure. For a piezoelectric load of 400 nF, powered by a 150 V 10 kHz sinusoidal signal, a total harmonic distortion (THD) of 4.3% is obtained.",
"title": ""
},
{
"docid": "dd51cc2138760f1dcdce6e150cabda19",
"text": "Breast cancer is the most common cancer in women worldwide. The most common screening technology is mammography. To reduce the cost and workload of radiologists, we propose a computer aided detection approach for classifying and localizing calcifications and masses in mammogram images. To improve on conventional approaches, we apply deep convolutional neural networks (CNN) for automatic feature learning and classifier building. In computer-aided mammography, deep CNN classifiers cannot be trained directly on full mammogram images because of the loss of image details from resizing at input layers. Instead, our classifiers are trained on labelled image patches and then adapted to work on full mammogram images for localizing the abnormalities. State-of-the-art deep convolutional neural networks are compared on their performance of classifying the abnormalities. Experimental results indicate that VGGNet receives the best overall accuracy at 92.53% in classifications. For localizing abnormalities, ResNet is selected for computing class activation maps because it is ready to be deployed without structural change or further training. Our approach demonstrates that deep convolutional neural network classifiers have remarkable localization capabilities despite no supervision on the location of abnormalities is provided.",
"title": ""
},
{
"docid": "acce5017b1138c67e24e661c1eabc185",
"text": "The main goal of the paper is to continuously enlarge the set of software building blocks that can be reused in the search and rescue domain.",
"title": ""
},
{
"docid": "a8a24c602c5f295495b7dc68c606590d",
"text": "This paper deals with the design of an AC-220-volt-mains-fed power supply for ozone generation. A power stage consisting of a buck converter to regulate the output power plus a current-fed parallel-resonant push-pull inverter to supply an ozone generator (OG) is proposed and analysed. A closed-loop operation is presented as a method to compensate variations in the AC source voltage. Inverter's step-up transformer issues and their effect on the performance of the overall circuit are also studied. The use of a UC3872 integrated circuit is proposed to control both the push-pull inverter and the buck converter, as well as to provide the possibility to protect the power supply in case a short circuit, an open-lamp operation or any other circumstance might occur. Implementation of a 100 W prototype and experimental results are shown and discussed.",
"title": ""
},
{
"docid": "93ed81d5244715aaaf402817aa674310",
"text": "Automatically recognized terminology is widely used for various domain-specific texts processing tasks, such as machine translation, information retrieval or ontology construction. However, there is still no agreement on which methods are best suited for particular settings and, moreover, there is no reliable comparison of already developed methods. We believe that one of the main reasons is the lack of state-of-the-art methods implementations, which are usually non-trivial to recreate. In order to address these issues, we present ATR4S, an open-source software written in Scala that comprises more than 15 methods for automatic terminology recognition (ATR) and implements the whole pipeline from text document preprocessing, to term candidates collection, term candidates scoring, and finally, term candidates ranking. It is highly scalable, modular and configurable tool with support of automatic caching. We also compare 13 state-of-the-art methods on 7 open datasets by average precision and processing time. Experimental comparison reveals that no single method demonstrates best average precision for all datasets and that other available tools for ATR do not contain the best methods.",
"title": ""
},
{
"docid": "40cf1e5ecb0e79f466c65f8eaff77cb2",
"text": "Spiral patterns on the surface of a sphere have been seen in laboratory experiments and in numerical simulations of reaction–diffusion equations and convection. We classify the possible symmetries of spirals on spheres, which are quite different from the planar case since spirals typically have tips at opposite points on the sphere. We concentrate on the case where the system has an additional sign-change symmetry, in which case the resulting spiral patterns do not rotate. Spiral patterns arise through a mode interaction between spherical harmonics degree l and l+1. Using the methods of equivariant bifurcation theory, possible symmetry types are determined for each l. For small values of l, the centre manifold equations are constructed and spiral solutions are found explicitly. Bifurcation diagrams are obtained showing how spiral states can appear at secondary bifurcations from primary solutions, or tertiary bifurcations. The results are consistent with numerical simulations of a model pattern-forming system.",
"title": ""
},
{
"docid": "a354b6c03cadf539ccd01a247447ebc1",
"text": "In the present study, we tested in vitro different parts of 35 plants used by tribals of the Similipal Biosphere Reserve (SBR, Mayurbhanj district, India) for the management of infections. From each plant, three extracts were prepared with different solvents (water, ethanol, and acetone) and tested for antimicrobial (E. coli, S. aureus, C. albicans); anthelmintic (C. elegans); and antiviral (enterovirus 71) bioactivity. In total, 35 plant species belonging to 21 families were recorded from tribes of the SBR and periphery. Of the 35 plants, eight plants (23%) showed broad-spectrum in vitro antimicrobial activity (inhibiting all three test strains), while 12 (34%) exhibited narrow spectrum activity against individual pathogens (seven as anti-staphylococcal and five as anti-candidal). Plants such as Alangium salviifolium, Antidesma bunius, Bauhinia racemosa, Careya arborea, Caseria graveolens, Cleistanthus patulus, Colebrookea oppositifolia, Crotalaria pallida, Croton roxburghii, Holarrhena pubescens, Hypericum gaitii, Macaranga peltata, Protium serratum, Rubus ellipticus, and Suregada multiflora showed strong antibacterial effects, whilst Alstonia scholaris, Butea monosperma, C. arborea, C. pallida, Diospyros malbarica, Gmelina arborea, H. pubescens, M. peltata, P. serratum, Pterospermum acerifolium, R. ellipticus, and S. multiflora demonstrated strong antifungal activity. Plants such as A. salviifolium, A. bunius, Aporosa octandra, Barringtonia acutangula, C. graveolens, C. pallida, C. patulus, G. arborea, H. pubescens, H. gaitii, Lannea coromandelica, M. peltata, Melastoma malabathricum, Millettia extensa, Nyctanthes arbor-tristis, P. serratum, P. acerifolium, R. ellipticus, S. multiflora, Symplocos cochinchinensis, Ventilago maderaspatana, and Wrightia arborea inhibit survival of C. elegans and could be a potential source for anthelmintic activity. Additionally, plants such as A. bunius, C. graveolens, C. patulus, C. oppositifolia, H. gaitii, M. extensa, P. serratum, R. ellipticus, and V. maderaspatana showed anti-enteroviral activity. Most of the plants, whose traditional use as anti-infective agents by the tribals was well supported, show in vitro inhibitory activity against an enterovirus, bacteria (E. coil, S. aureus), a fungus (C. albicans), or a nematode (C. elegans).",
"title": ""
},
{
"docid": "30c6829427aaa8d23989afcd666372f7",
"text": "We developed an optimizing compiler for intrusion detection rules popularized by an open-source Snort Network Intrusion Detection System (www.snort.org). While Snort and Snort-like rules are usually thought of as a list of independent patterns to be tested in a sequential order, we demonstrate that common compilation techniques are directly applicable to Snort rule sets and are able to produce high-performance matching engines. SNORTRAN combines several compilation techniques, including cost-optimized decision trees, pattern matching precompilation, and string set clustering. Although all these techniques have been used before in other domain-specific languages, we believe their synthesis in SNORTRAN is original and unique. Introduction Snort [RO99] is a popular open-source Network Intrusion Detection System (NIDS). Snort is controlled by a set of pattern/action rules residing in a configuration file of a specific format. Due to Snort’s popularity, Snort-like rules are accepted by several other NIDS [FSTM, HANK]. In this paper we describe an optimizing compiler for Snort rule sets called SNORTRAN that incorporates ideas of pattern matching compilation based on cost-optimized decision trees [DKP92, KS88] with setwise string search algorithms popularized by recent research in highperformance NIDS detection engines [FV01, CC01, GJMP]. The two main design goals were performance and compatibility with the original Snort rule interpreter. The primary application area for NIDS is monitoring IP traffic inside and outside of firewalls, looking for unusual activities that can be attributed to external attacks or internal misuse. Most NIDS are designed to handle T1/partial T3 traffic, but as the number of the known vulnerabilities grows and more and more weight is given to internal misuse monitoring on high-throughput networks (100Mbps/1Gbps), it gets harder to keep up with the traffic without dropping too many packets to make detection ineffective. Throwing hardware at the problem is not always possible because of growing maintenance and support costs, let alone the fact that the problem of making multi-unit system work in realistic environment is as hard as the original performance problem. Bottlenecks of the detection process were identified by many researchers and practitioners [FV01, ND02, GJMP], and several approaches were proposed [FV01, CC01]. Our benchmarking supported the performance analysis made by M. Fisk and G. Varghese [FV01], adding some interesting findings on worst-case behavior of setwise string search algorithms in practice. Traditionally, NIDS are designed around a packet grabber (system-specific or libcap) getting traffic packets off the wire, combined with preprocessors, packet decoders, and a detection engine looking for a static set of signatures loaded from a rule file at system startup. Snort [SNORT] and",
"title": ""
},
{
"docid": "5ce00014f84277aca0a4b7dfefc01cbb",
"text": "The design of a planar dual-band wide-scan phased array is presented. The array uses novel dual-band comb-slot-loaded patch elements supporting two separate bands with a frequency ratio of 1.4:1. The antenna maintains consistent radiation patterns and incorporates a feeding configuration providing good bandwidths in both bands. The design has been experimentally validated with an X-band planar 9 × 9 array. The array supports wide-angle scanning up to a maximum of 60 ° and 50 ° at the low and high frequency bands respectively.",
"title": ""
},
{
"docid": "bd62496839434c34bcf876a581d38c37",
"text": "We present results from an experiment similar to one performed by Packard [24], in which a genetic algorithm is used to evolve cellular automata (CA) to perform a particular computational task. Packard examined the frequency of evolved CA rules as a function of Langton’s λ parameter [17], and interpreted the results of his experiment as giving evidence for the following two hypotheses: (1) CA rules able to perform complex computations are most likely to be found near “critical” λ values, which have been claimed to correlate with a phase transition between ordered and chaotic behavioral regimes for CA; (2) When CA rules are evolved to perform a complex computation, evolution will tend to select rules with λ values close to the critical values. Our experiment produced very different results, and we suggest that the interpretation of the original results is not correct. We also review and discuss issues related to λ, dynamical-behavior classes, and computation in CA. The main constructive results of our study are identifying the emergence and competition of computational strategies and analyzing the central role of symmetries in an evolutionary system. In particular, we demonstrate how symmetry breaking can impede the evolution toward higher computational capability. Santa Fe Institute, 1660 Old Pecos Trail, Suite A, Santa Fe, New Mexico, U.S.A. 87501. Email: mm@santafe.edu, pth@santafe.edu Physics Department, University of California, Berkeley, CA, U.S.A. 94720. Email: chaos@gojira.berkeley.edu",
"title": ""
},
{
"docid": "c302699cb7dec9f813117bfe62d3b5fb",
"text": "Pipe networks constitute the means of transporting fluids widely used nowadays. Increasing the operational reliability of these systems is crucial to minimize the risk of leaks, which can cause serious pollution problems to the environment and have disastrous consequences if the leak occurs near residential areas. Considering the importance in developing efficient systems for detecting leaks in pipelines, this work aims to detect the characteristic frequencies (predominant) in case of leakage and no leakage. The methodology consisted of capturing the experimental data through a microphone installed inside the pipeline and coupled to a data acquisition card and a computer. The Fast Fourier Transform (FFT) was used as the mathematical approach to the signal analysis from the microphone, generating a frequency response (spectrum) which reveals the characteristic frequencies for each operating situation. The tests were carried out using distinct sizes of leaks, situations without leaks and cases with blows in the pipe caused by metal instruments. From the leakage tests, characteristic peaks were found in the FFT frequency spectrum using the signal generated by the microphone. Such peaks were not observed in situations with no leaks. Therewith, it was realized that it was possible to distinguish, through spectral analysis, an event of leakage from an event without leakage.",
"title": ""
},
{
"docid": "d9fe0834ccf80bddadc5927a8199cd2c",
"text": "Deep Residual Networks (ResNets) have recently achieved state-of-the-art results on many challenging computer vision tasks. In this work we analyze the role of Batch Normalization (BatchNorm) layers on ResNets in the hope of improving the current architecture and better incorporating other normalization techniques, such as Normalization Propagation (NormProp), into ResNets. Firstly, we verify that BatchNorm helps distribute representation learning to residual blocks at all layers, as opposed to a plain ResNet without BatchNorm where learning happens mostly in the latter part of the network. We also observe that BatchNorm well regularizes Concatenated ReLU (CReLU) activation scheme on ResNets, whose magnitude of activation grows by preserving both positive and negative responses when going deeper into the network. Secondly, we investigate the use of NormProp as a replacement for BatchNorm in ResNets. Though NormProp theoretically attains the same effect as BatchNorm on generic convolutional neural networks, the identity mapping of ResNets invalidates its theoretical promise and NormProp exhibits a significant performance drop when naively applied. To bridge the gap between BatchNorm and NormProp in ResNets, we propose a simple modification to NormProp and employ the CReLU activation scheme. We experiment on visual object recognition benchmark datasets such as CIFAR10/100 and ImageNet and demonstrate that 1) the modified NormProp performs better than the original NormProp but is still not comparable to BatchNorm and 2) CReLU improves the performance of ResNets with or without normalizations.",
"title": ""
}
] | scidocsrr |
ef9f48caaba38c29329650121b2ef6c8 | Predictive role of prenasal thickness and nasal bone for Down syndrome in the second trimester. | [
{
"docid": "e7315716a56ffa7ef2461c7c99879efb",
"text": "OBJECTIVE\nTo investigate the potential value of ultrasound examination of the fetal profile for present/hypoplastic fetal nasal bone at 15-22 weeks' gestation as a marker for trisomy 21.\n\n\nMETHODS\nThis was an observational ultrasound study in 1046 singleton pregnancies undergoing amniocentesis for fetal karyotyping at 15-22 (median, 17) weeks' gestation. Immediately before amniocentesis the fetal profile was examined to determine if the nasal bone was present or hypoplastic (absent or shorter than 2.5 mm). The incidence of nasal hypoplasia in the trisomy 21 and the chromosomally normal fetuses was determined and the likelihood ratio for trisomy 21 for nasal hypoplasia was calculated.\n\n\nRESULTS\nAll fetuses were successfully examined for the presence of the nasal bone. The nasal bone was hypoplastic in 21/34 (61.8%) fetuses with trisomy 21, in 12/982 (1.2%) chromosomally normal fetuses and in 1/30 (3.3%) fetuses with other chromosomal defects. In 3/21 (14.3%) trisomy 21 fetuses with nasal hypoplasia there were no other abnormal ultrasound findings. In the chromosomally normal group hypoplastic nasal bone was found in 0.5% of Caucasians and in 8.8% of Afro-Caribbeans. The likelihood ratio for trisomy 21 for hypoplastic nasal bone was 50.5 (95% CI 27.1-92.7) and for present nasal bone it was 0.38 (95% CI 0.24-0.56).\n\n\nCONCLUSION\nNasal bone hypoplasia at the 15-22-week scan is associated with a high risk for trisomy 21 and it is a highly sensitive and specific marker for this chromosomal abnormality.",
"title": ""
}
] | [
{
"docid": "2adf5e06cfc7e6d8cf580bdada485a23",
"text": "This paper describes the comprehensive Terrorism Knowledge Base TM (TKB TM) which will ultimately contain all relevant knowledge about terrorist groups, their members, leaders, affiliations , etc., and full descriptions of specific terrorist events. Led by world-class experts in terrorism , knowledge enterers have, with simple tools, been building the TKB at the rate of up to 100 assertions per person-hour. The knowledge is stored in a manner suitable for computer understanding and reasoning. The TKB also utilizes its reasoning modules to integrate data and correlate observations, generate scenarios, answer questions and compose explanations.",
"title": ""
},
{
"docid": "87133250a9e04fd42f5da5ecacd39d70",
"text": "Performance is a critical challenge in mobile image processing. Given a reference imaging pipeline, or even human-adjusted pairs of images, we seek to reproduce the enhancements and enable real-time evaluation. For this, we introduce a new neural network architecture inspired by bilateral grid processing and local affine color transforms. Using pairs of input/output images, we train a convolutional neural network to predict the coefficients of a locally-affine model in bilateral space. Our architecture learns to make local, global, and content-dependent decisions to approximate the desired image transformation. At runtime, the neural network consumes a low-resolution version of the input image, produces a set of affine transformations in bilateral space, upsamples those transformations in an edge-preserving fashion using a new slicing node, and then applies those upsampled transformations to the full-resolution image. Our algorithm processes high-resolution images on a smartphone in milliseconds, provides a real-time viewfinder at 1080p resolution, and matches the quality of state-of-the-art approximation techniques on a large class of image operators. Unlike previous work, our model is trained off-line from data and therefore does not require access to the original operator at runtime. This allows our model to learn complex, scene-dependent transformations for which no reference implementation is available, such as the photographic edits of a human retoucher.",
"title": ""
},
{
"docid": "cd0c1507c1187e686c7641388413d3b5",
"text": "Inference of three-dimensional motion from the fusion of inertial and visual sensory data has to contend with the preponderance of outliers in the latter. Robust filtering deals with the joint inference and classification task of selecting which data fits the model, and estimating its state. We derive the optimal discriminant and propose several approximations, some used in the literature, others new. We compare them analytically, by pointing to the assumptions underlying their approximations, and empirically. We show that the best performing method improves the performance of state-of-the-art visual-inertial sensor fusion systems, while retaining the same computational complexity.",
"title": ""
},
{
"docid": "7e683f15580e77b1e207731bb73b8107",
"text": "The skeleton is essential for general shape representation. The commonly required properties of a skeletonization algorithm are that the extracted skeleton should be accurate; robust to noise, position and rotation; able to reconstruct the original object; and able to produce a connected skeleton in order to preserve its topological and hierarchical properties. However, the use of a discrete image presents a lot of problems that may in9uence the extraction of the skeleton. Moreover, most of the methods are memory-intensive and computationally intensive, and require a complex data structure. In this paper, we propose a fast, e;cient and accurate skeletonization method for the extraction of a well-connected Euclidean skeleton based on a signed sequential Euclidean distance map. A connectivity criterion is proposed, which can be used to determine whether a given pixel is a skeleton point independently. The criterion is based on a set of point pairs along the object boundary, which are the nearest contour points to the pixel under consideration and its 8 neighbors. Our proposed method generates a connected Euclidean skeleton with a single pixel width without requiring a linking algorithm or iteration process. Experiments show that the runtime of our algorithm is faster than the distance transformation and is linearly proportional to the number of pixels of an image. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f2b6afabd67354280d091d11e8265b96",
"text": "This paper aims to present three new methods for color detection and segmentation of road signs. The images are taken by a digital camera mounted in a car. The RGB images are converted into IHLS color space, and new methods are applied to extract the colors of the road signs under consideration. The methods are tested on hundreds of outdoor images in different light conditions, and they show high robustness. This project is part of the research taking place in Dalarna University/Sweden in the field of the ITS",
"title": ""
},
{
"docid": "8f289714182c490b726b8edbbb672cfd",
"text": "Design and implementation of a 15kV sub-nanosecond pulse generator using Trigatron type spark gap as a switch. Straightforward and compact trigger generator using pulse shaping network which produces a trigger pulse of sub-nanosecond rise time. A pulse power system requires delivering a high voltage, high coulomb in short rise time. This is achieved by using pulse shaping network comprises of parallel combinations of capacitors and inductor. Spark gap switches are used to switch the energy from capacitive source to inductive load. The pulse hence generated can be used for synchronization of two or more spark gap. Because of the fast rise time and the high output voltage, the reliability of the synchronization is increased. The analytical calculations, simulation, have been carried out to select the circuit parameters. Simulation results using MATLAB/SIMULINK have been implemented in the experimental setup and sub-nanoseconds output waveforms have been obtained.",
"title": ""
},
{
"docid": "874b14b3c3e15b43de3310327affebaf",
"text": "We present the Accelerated Quadratic Proxy (AQP) - a simple first-order algorithm for the optimization of geometric energies defined over triangular and tetrahedral meshes.\n The main stumbling block of current optimization techniques used to minimize geometric energies over meshes is slow convergence due to ill-conditioning of the energies at their minima. We observe that this ill-conditioning is in large part due to a Laplacian-like term existing in these energies. Consequently, we suggest to locally use a quadratic polynomial proxy, whose Hessian is taken to be the Laplacian, in order to achieve a preconditioning effect. This already improves stability and convergence, but more importantly allows incorporating acceleration in an almost universal way, that is independent of mesh size and of the specific energy considered.\n Experiments with AQP show it is rather insensitive to mesh resolution and requires a nearly constant number of iterations to converge; this is in strong contrast to other popular optimization techniques used today such as Accelerated Gradient Descent and Quasi-Newton methods, e.g., L-BFGS. We have tested AQP for mesh deformation in 2D and 3D as well as for surface parameterization, and found it to provide a considerable speedup over common baseline techniques.",
"title": ""
},
{
"docid": "c7ea816f2bb838b8c5aac3cdbbd82360",
"text": "Semantic annotated parallel corpora, though rare, play an increasingly important role in natural language processing. These corpora provide valuable data for computational tasks like sense-based machine translation and word sense disambiguation, but also to contrastive linguistics and translation studies. In this paper we present the ongoing development of a web-based corpus semantic annotation environment that uses the Open Multilingual Wordnet (Bond and Foster, 2013) as a sense inventory. The system includes interfaces to help coordinating the annotation project and a corpus browsing interface designed specifically to meet the needs of a semantically annotated corpus. The tool was designed to build the NTU-Multilingual Corpus (Tan and Bond, 2012). For the past six years, our tools have been tested and developed in parallel with the semantic annotation of a portion of this corpus in Chinese, English, Japanese and Indonesian. The annotation system is released under an open source license (MIT).",
"title": ""
},
{
"docid": "933312292c64c916e69357c5aec42189",
"text": "Augmented reality annotations and virtual scene navigation add new dimensions to remote collaboration. In this paper, we present a touchscreen interface for creating freehand drawings as world-stabilized annotations and for virtually navigating a scene reconstructed live in 3D, all in the context of live remote collaboration. Two main focuses of this work are (1) automatically inferring depth for 2D drawings in 3D space, for which we evaluate four possible alternatives, and (2) gesture-based virtual navigation designed specifically to incorporate constraints arising from partially modeled remote scenes. We evaluate these elements via qualitative user studies, which in addition provide insights regarding the design of individual visual feedback elements and the need to visualize the direction of drawings.",
"title": ""
},
{
"docid": "4a043a02f3fad07797245b0a2c4ea4c5",
"text": "The worldwide population of people over the age of 65 has been predicted to more than double from 1990 to 2025. Therefore, ubiquitous health-care systems have become an important topic of research in recent years. In this paper, an integrated system for portable electrocardiography (ECG) monitoring, with an on-board processor for time–frequency analysis of heart rate variability (HRV), is presented. The main function of proposed system comprises three parts, namely, an analog-to-digital converter (ADC) controller, an HRV processor, and a lossless compression engine. At the beginning, ECG data acquired from front-end circuits through the ADC controller is passed through the HRV processor for analysis. Next, the HRV processor performs real-time analysis of time–frequency HRV using the Lomb periodogram and a sliding window configuration. The Lomb periodogram is suited for spectral analysis of unevenly sampled data and has been applied to time–frequency analysis of HRV in the proposed system. Finally, the ECG data are compressed by 2.5 times using the lossless compression engine before output using universal asynchronous receiver/transmitter (UART). Bluetooth is employed to transmit analyzed HRV data and raw ECG data to a remote station for display or further analysis. The integrated ECG health-care system design proposed has been implemented using UMC 90 nm CMOS technology. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6eb229b17a4634183818ff4a15f981b6",
"text": "Fine-grained image classification is a challenging task due to the large intra-class variance and small inter-class variance, aiming at recognizing hundreds of sub-categories belonging to the same basic-level category. Most existing fine-grained image classification methods generally learn part detection models to obtain the semantic parts for better classification accuracy. Despite achieving promising results, these methods mainly have two limitations: (1) not all the parts which obtained through the part detection models are beneficial and indispensable for classification, and (2) fine-grained image classification requires more detailed visual descriptions which could not be provided by the part locations or attribute annotations. For addressing the above two limitations, this paper proposes the two-stream model combing vision and language (CVL) for learning latent semantic representations. The vision stream learns deep representations from the original visual information via deep convolutional neural network. The language stream utilizes the natural language descriptions which could point out the discriminative parts or characteristics for each image, and provides a flexible and compact way of encoding the salient visual aspects for distinguishing sub-categories. Since the two streams are complementary, combing the two streams can further achieves better classification accuracy. Comparing with 12 state-of-the-art methods on the widely used CUB-200-2011 dataset for fine-grained image classification, the experimental results demonstrate our CVL approach achieves the best performance.",
"title": ""
},
{
"docid": "06675c4b42683181cecce7558964c6b6",
"text": "We present in this work an economic analysis of ransomware, with relevant data from Cryptolocker, CryptoWall, TeslaCrypt and other major strands. We include a detailed study of the impact that different price discrimination strategies can have on the success of a ransomware family, examining uniform pricing, optimal price discrimination and bargaining strategies and analysing their advantages and limitations. In addition, we present results of a preliminary survey that can helps in estimating an optimal ransom value. We discuss at each stage whether the different schemes we analyse have been encountered already in existing malware, and the likelihood of them being implemented and becoming successful. We hope this work will help to gain some useful insights for predicting how ransomware may evolve in the future and be better prepared to counter its current and future threat.",
"title": ""
},
{
"docid": "0d9057d8a40eb8faa7e67128a7d24565",
"text": "We develop efficient solution methods for a robust empirical risk minimization problem designed to give calibrated confidence intervals on performance and provide optimal tradeoffs between bias and variance. Our methods apply to distributionally robust optimization problems proposed by Ben-Tal et al., which put more weight on observations inducing high loss via a worst-case approach over a non-parametric uncertainty set on the underlying data distribution. Our algorithm solves the resulting minimax problems with nearly the same computational cost of stochastic gradient descent through the use of several carefully designed data structures. For a sample of size n, the per-iteration cost of our method scales as O(log n), which allows us to give optimality certificates that distributionally robust optimization provides at little extra cost compared to empirical risk minimization and stochastic gradient methods.",
"title": ""
},
{
"docid": "c0b30475f78acefae1c15f9f5d6dc57b",
"text": "Traditionally, autonomous cars make predictions about other drivers’ future trajectories, and plan to stay out of their way. This tends to result in defensive and opaque behaviors. Our key insight is that an autonomous car’s actions will actually affect what other cars will do in response, whether the car is aware of it or not. Our thesis is that we can leverage these responses to plan more efficient and communicative behaviors. We model the interaction between an autonomous car and a human driver as a dynamical system, in which the robot’s actions have immediate consequences on the state of the car, but also on human actions. We model these consequences by approximating the human as an optimal planner, with a reward function that we acquire through Inverse Reinforcement Learning. When the robot plans with this reward function in this dynamical system, it comes up with actions that purposefully change human state: it merges in front of a human to get them to slow down or to reach its own goal faster; it blocks two lanes to get them to switch to a third lane; or it backs up slightly at an intersection to get them to proceed first. Such behaviors arise from the optimization, without relying on hand-coded signaling strategies and without ever explicitly modeling communication. Our user study results suggest that the robot is indeed capable of eliciting desired changes in human state by planning using this dynamical system.",
"title": ""
},
{
"docid": "898ff77dbfaf00efa3b08779a781aa0b",
"text": "The monumental cost of health care, especially for chronic disease treatment, is quickly becoming unmanageable. This crisis has motivated the drive towards preventative medicine, where the primary concern is recognizing disease risk and taking action at the earliest signs. However, universal testing is neither time nor cost efficient. We propose CARE, a Collaborative Assessment and Recommendation Engine, which relies only on a patient's medical history using ICD-9-CM codes in order to predict future diseases risks. CARE uses collaborative filtering to predict each patient's greatest disease risks based on their own medical history and that of similar patients. We also describe an Iterative version, ICARE, which incorporates ensemble concepts for improved performance. These novel systems require no specialized information and provide predictions for medical conditions of all kinds in a single run. We present experimental results on a Medicare dataset, demonstrating that CARE and ICARE perform well at capturing future disease risks.",
"title": ""
},
{
"docid": "bf4b6cd15c0b3ddb5892f1baea9dec68",
"text": "The purpose of this study was to examine the distribution, abundance and characteristics of plastic particles in plankton samples collected routinely in Northeast Pacific ecosystems, and to contribute to the development of ideas for future research into the occurrence and impact of small plastic debris in marine pelagic ecosystems. Plastic debris particles were assessed from zooplankton samples collected as part of the National Oceanic and Atmospheric Administration's (NOAA) ongoing ecosystem surveys during two research cruises in the Southeast Bering Sea in the spring and fall of 2006 and four research cruises off the U.S. west coast (primarily off southern California) in spring, summer and fall of 2006, and in January of 2007. Nets with 0.505 mm mesh were used to collect surface samples during all cruises, and sub-surface samples during the four cruises off the west coast. The 595 plankton samples processed indicate that plastic particles are widely distributed in surface waters. The proportion of surface samples from each cruise that contained particles of plastic ranged from 8.75 to 84.0%, whereas particles were recorded in sub-surface samples from only one cruise (in 28.2% of the January 2007 samples). Spatial and temporal variability was apparent in the abundance and distribution of the plastic particles and mean standardized quantities varied among cruises with ranges of 0.004-0.19 particles/m³, and 0.014-0.209 mg dry mass/m³. Off southern California, quantities for the winter cruise were significantly higher, and for the spring cruise significantly lower than for the summer and fall surveys (surface data). Differences between surface particle concentrations and mass for the Bering Sea and California coast surveys were significant for pair-wise comparisons of the spring but not the fall cruises. The particles were assigned to three plastic product types: product fragments, fishing net and line fibers, and industrial pellets; and five size categories: <1 mm, 1-2.5 mm, >2.5-5 mm, >5-10 mm, and >10 mm. Product fragments accounted for the majority of the particles, and most were less than 2.5 mm in size. The ubiquity of such particles in the survey areas and predominance of sizes <2.5 mm implies persistence in these pelagic ecosystems as a result of continuous breakdown from larger plastic debris fragments, and widespread distribution by ocean currents. Detailed investigations of the trophic ecology of individual zooplankton species, and their encounter rates with various size ranges of plastic particles in the marine pelagic environment, are required in order to understand the potential for ingestion of such debris particles by these organisms. Ongoing plankton sampling programs by marine research institutes in large marine ecosystems are good potential sources of data for continued assessment of the abundance, distribution and potential impact of small plastic debris in productive coastal pelagic zones.",
"title": ""
},
{
"docid": "0fe02fcc6f68ba1563d3f5d96a8da330",
"text": "We present a novel technique for jointly predicting semantic arguments for lexical predicates. The task is to find the best matching between semantic roles and sentential spans, subject to structural constraints that come from expert linguistic knowledge (e.g., in the FrameNet lexicon). We formulate this task as an integer linear program (ILP); instead of using an off-the-shelf tool to solve the ILP, we employ a dual decomposition algorithm, which we adapt for exact decoding via a branch-and-bound technique. Compared to a baseline that makes local predictions, we achieve better argument identification scores and avoid all structural violations. Runtime is nine times faster than a proprietary ILP solver.",
"title": ""
},
{
"docid": "e1b6cc1dbd518760c414cd2ddbe88dd5",
"text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Mind the Traps! Design Guidelines for Rigorous BCI Experiments Camille Jeunet, Stefan Debener, Fabien Lotte, Jeremie Mattout, Reinhold Scherer, Catharina Zich",
"title": ""
},
{
"docid": "8cbe0ff905a58e575f2d84e4e663a857",
"text": "Mixed reality (MR) technology development is now gaining momentum due to advances in computer vision, sensor fusion, and realistic display technologies. With most of the research and development focused on delivering the promise of MR, there is only barely a few working on the privacy and security implications of this technology. is survey paper aims to put in to light these risks, and to look into the latest security and privacy work on MR. Specically, we list and review the dierent protection approaches that have been proposed to ensure user and data security and privacy in MR. We extend the scope to include work on related technologies such as augmented reality (AR), virtual reality (VR), and human-computer interaction (HCI) as crucial components, if not the origins, of MR, as well as numerous related work from the larger area of mobile devices, wearables, and Internet-of-ings (IoT). We highlight the lack of investigation, implementation, and evaluation of data protection approaches in MR. Further challenges and directions on MR security and privacy are also discussed.",
"title": ""
}
] | scidocsrr |
b333dcddc559ebcf28b6f58e4124b6fa | Theoretical Linear Convergence of Unfolded ISTA and Its Practical Weights and Thresholds | [
{
"docid": "634b30b81da7139082927109b4c22d5e",
"text": "Compressive image recovery is a challenging problem that requires fast and accurate algorithms. Recently, neural networks have been applied to this problem with promising results. By exploiting massively parallel GPU processing architectures and oodles of training data, they can run orders of magnitude faster than existing techniques. However, these methods are largely unprincipled black boxes that are difficult to train and often-times specific to a single measurement matrix. It was recently demonstrated that iterative sparse-signal-recovery algorithms can be “unrolled” to form interpretable deep networks. Taking inspiration from this work, we develop a novel neural network architecture that mimics the behavior of the denoising-based approximate message passing (D-AMP) algorithm. We call this new network Learned D-AMP (LDAMP). The LDAMP network is easy to train, can be applied to a variety of different measurement matrices, and comes with a state-evolution heuristic that accurately predicts its performance. Most importantly, it outperforms the state-of-the-art BM3D-AMP and NLR-CS algorithms in terms of both accuracy and run time. At high resolutions, and when used with sensing matrices that have fast implementations, LDAMP runs over 50× faster than BM3D-AMP and hundreds of times faster than NLR-CS.",
"title": ""
},
{
"docid": "59786d8ea951639b8b9a4e60c9d43a06",
"text": "Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper) • It gives near-optimal error guarantees. • It is robust to observation noise. • It succeeds with a minimum number of observations. • It can be used with any sampling operator for which the operator and its adjoint can be computed. • The memory requirement is linear in the problem size. Preprint submitted to Elsevier 28 January 2009 • Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint. • It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal. • Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.",
"title": ""
}
] | [
{
"docid": "489aa160c450539b50c63c6c3c6993ab",
"text": "Adequacy of citations is very important for a scientific paper. However, it is not an easy job to find appropriate citations for a given context, especially for citations in different languages. In this paper, we define a novel task of cross-language context-aware citation recommendation, which aims at recommending English citations for a given context of the place where a citation is made in a Chinese paper. This task is very challenging because the contexts and citations are written in different languages and there exists a language gap when matching them. To tackle this problem, we propose the bilingual context-citation embedding algorithm (i.e. BLSRec-I), which can learn a low-dimensional joint embedding space for both contexts and citations. Moreover, two advanced algorithms named BLSRec-II and BLSRec-III are proposed by enhancing BLSRec-I with translation results and abstract information, respectively. We evaluate the proposed methods based on a real dataset that contains Chinese contexts and English citations. The results demonstrate that our proposed algorithms can outperform a few baselines and the BLSRec-II and BLSRec-III methods can outperform the BLSRec-I method.",
"title": ""
},
{
"docid": "5e7a87078f92b7ce145e24a2e7340f1b",
"text": "Unsupervised artificial neural networks are now considered as a likely alternative to classical computing models in many application domains. For example, recent neural models defined by neuro-scientists exhibit interesting properties for an execution in embedded and autonomous systems: distributed computing, unsupervised learning, self-adaptation, self-organisation, tolerance. But these properties only emerge from large scale and fully connected neural maps that result in intensive computation coupled with high synaptic communications. We are interested in deploying these powerful models in the embedded context of an autonomous bio-inspired robot learning its environment in realtime. So we study in this paper in what extent these complex models can be simplified and deployed in hardware accelerators compatible with an embedded integration. Thus we propose a Neural Processing Unit designed as a programmable accelerator implementing recent equations close to self-organizing maps and neural fields. The proposed architecture is validated on FPGA devices and compared to state of the art solutions. The trade-off proposed by this dedicated but programmable neural processing unit allows to achieve significant improvements and makes our architecture adapted to many embedded systems.",
"title": ""
},
{
"docid": "014759efa636aec38aa35287b61e44a4",
"text": "Outlier detection is an important topic in machine learning and has been used in a wide range of applications. In this paper, we approach outlier detection as a binary-classification issue by sampling potential outliers from a uniform reference distribution. However, due to the sparsity of data in high-dimensional space, a limited number of potential outliers may fail to provide sufficient information to assist the classifier in describing a boundary that can separate outliers from normal data effectively. To address this, we propose a novel Single-Objective Generative Adversarial Active Learning (SO-GAAL) method for outlier detection, which can directly generate informative potential outliers based on the mini-max game between a generator and a discriminator. Moreover, to prevent the generator from falling into the mode collapsing problem, the stop node of training should be determined when SO-GAAL is able to provide sufficient information. But without any prior information, it is extremely difficult for SO-GAAL. Therefore, we expand the network structure of SO-GAAL from a single generator to multiple generators with different objectives (MO-GAAL), which can generate a reasonable reference distribution for the whole dataset. We empirically compare the proposed approach with several state-of-the-art outlier detection methods on both synthetic and real-world datasets. The results show that MO-GAAL outperforms its competitors in the majority of cases, especially for datasets with various cluster types or high irrelevant variable ratio. The experiment codes are available at: https://github.com/leibinghe/GAAL-based-outlier-detection",
"title": ""
},
{
"docid": "8476c0832f62e061cf2e63f61e59abf0",
"text": "OBJECTIVE\nThis study examined the effectiveness of using a weighted vest for increasing attention to a fine motor task and decreasing self-stimulatory behaviors in preschool children with pervasive developmental disorders (PDD).\n\n\nMETHOD\nUsing an ABA single-subject design, the duration of attention to task and self-stimulatory behaviors and the number of distractions were measured in five preschool children with PDD over a period of 6 weeks.\n\n\nRESULTS\nDuring the intervention phase, all participants displayed a decrease in the number of distractions and an increase in the duration of focused attention while wearing the weighted vest. All but 1 participant demonstrated a decrease in the duration of self-stimulatory behaviors while wearing a weighted vest; however, the type of self-stimulatory behaviors changed and became less self-abusive for this child while she wore the vest. During the intervention withdrawal phase, 3 participants experienced an increase in the duration of self-stimulatory behaviors, and all participants experienced an increase in the number of distractions and a decrease in the duration of focused attention. The increase or decrease, however, never returned to baseline levels for these behaviors.\n\n\nCONCLUSION\nThe findings suggest that for these 5 children with PDD, the use of a weighted vest resulted in an increase in attention to task and decrease in self-stimulatory behaviors. The most consistent improvement observed was the decreased number of distractions. Additional research is necessary to build consensus about the effectiveness of wearing a weighted vest to increase attention to task and decrease self-stimulatory behaviors for children with PDD.",
"title": ""
},
{
"docid": "b9ec6867c23e5e5ecf53a4159872747c",
"text": "Competition in the wireless telecommunications industry is rampant. To maintain profitability, wireless carriers must control churn, the loss of subscribers who switch from one carrier to another. We explore statistical techniques for churn prediction and, based on these predictions, an optimal policy for identifying customers to whom incentives should be offered to increase retention. Our experiments are based on a data base of nearly 47,000 U.S. domestic subscribers, and includes information about their usage, billing, credit, application, and complaint history. We show that under a wide variety of assumptions concerning the cost of intervention and the retention rate resulting from intervention, churn prediction and remediation can yield significant savings to a carrier. We also show the importance of a data representation crafted by domain experts. Competition in the wireless telecommunications industry is rampant. As many as seven competing carriers operate in each market. The industry is extremely dynamic, with new services, technologies, and carriers constantly altering the landscape. Carriers announce new rates and incentives weekly, hoping to entice new subscribers and to lure subscribers away from the competition. The extent of rivalry is reflected in the deluge of advertisements for wireless service in the daily newspaper and other mass media. The United States had 69 million wireless subscribers in 1998, roughly 25% of the population. Some markets are further developed; for example, the subscription rate in Finland is 53%. Industry forecasts are for a U.S. penetration rate of 48% by 2003. Although there is significant room for growth in most markets, the industry growth rate is declining and competition is rising. Consequently, it has become crucial for wireless carriers to control churn—the loss of customers who switch from one carrier to another. At present, domestic monthly churn rates are 2-3% of the customer base. At an average cost of $400 to acquire a subscriber, churn cost the industry nearly $6.3 billion in 1998; the total annual loss rose to nearly $9.6 billion when lost monthly revenue from subscriber cancellations is considered (Luna, 1998). It costs roughly five times as much to sign on a new subscriber as to retain an existing one. Consequently, for a carrier with 1.5 million subscribers, reducing the monthly churn rate from 2% to 1% would yield an increase in annual earnings of at least $54 million, and an increase in shareholder value of approximately $150 million. (Estimates are even higher when lost monthly revenue is considered; see Fowlkes, Madan, Andrew, & Jensen, 1999; Luna, 1998.) The goal of our research is to evaluate the benefits of predicting churn using techniques from statistical machine learning. We designed models that predict the probability Mozer, M. C., Wolniewicz, R., Grimes, D. B., Johnson, E., & Kaushansky, H. (2000). Churn reduction in the wireless industry. In S. A. Solla, T. K. Leen, & K.-R. Mueller (Eds.), Advances in Neural Information Processing Systems 12 (pp. 935941). Cambridge, MA: MIT Press. of a subscriber churning within a short time window, and we evaluated how well these predictions could be used for decision making by estimating potential cost savings to the wireless carrier under a variety of assumptions concerning subscriber behavior.",
"title": ""
},
{
"docid": "850854aeae187ffdd74c56135d9a4d5b",
"text": "Dynamic interactive maps with transparent but powerful human interface capabilities are beginning to emerge for a variety of geographical information systems, including ones situated on portables for travelers, students, business and service people, and others working in field settings. In the present research, interfaces supporting spoken, pen-based, and multimodal input were analyze for their potential effectiveness in interacting with this new generation of map systems. Input modality (speech, writing, multimodal) and map display format (highly versus minimally structured) were varied in a within-subject factorial design as people completed realistic tasks with a simulated map system. The results identified a constellation of performance difficulties associated with speech-only map interactions, including elevated performance errors, spontaneous disfluencies, and lengthier task completion t ime-problems that declined substantially when people could interact multimodally with the map. These performance advantages also mirrored a strong user preference to interact multimodally. The error-proneness and unacceptability of speech-only input to maps was attributed in large part to people's difficulty generating spoken descriptions of spatial location. Analyses also indicated that map display format can be used to minimize performance errors and disfluencies, and map interfaces that guide users' speech toward brevity can nearly eliminate disfiuencies. Implications of this research are discussed for the design of high-performance multimodal interfaces for future map",
"title": ""
},
{
"docid": "87552ea79b92986de3ce5306ef0266bc",
"text": "This paper presents a novel secondary frequency and voltage control method for islanded microgrids based on distributed cooperative control. The proposed method utilizes a sparse communication network where each DG unit only requires local and its neighbors’ information to perform control actions. The frequency controller restores the system frequency to the nominal value while maintaining the equal generation cost increment value among DG units. The voltage controller simultaneously achieves the critical bus voltage restoration and accurate reactive power sharing. Subsequently, the case when the DG unit ac-side voltage reaches its limit value is discussed and a controller output limitation method is correspondingly provided to selectively realize the desired control objective. This paper also provides a small-signal dynamic model of the microgrid with the proposed controller to evaluate the system dynamic performance. Finally, simulation results on a microgrid test system are presented to validate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "e75b7c2fcdfc19a650d7da4e6ae643a2",
"text": "With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services.",
"title": ""
},
{
"docid": "b41d56e726628673d12b9efcb715a69c",
"text": "Ten new phenylpropanoid glucosides, tadehaginosides A-J (1-10), and the known compound tadehaginoside (11) were obtained from Tadehagi triquetrum. These phenylpropanoid glucosides were structurally characterized through extensive physical and chemical analyses. Compounds 1 and 2 represent the first set of dimeric derivatives of tadehaginoside with an unusual bicyclo[2.2.2]octene skeleton, whereas compounds 3 and 4 contain a unique cyclobutane basic core in their carbon scaffolds. The effects of these compounds on glucose uptake in C2C12 myotubes were evaluated. Compounds 3-11, particularly 4, significantly increased the basal and insulin-elicited glucose uptake. The results from molecular docking, luciferase analyses, and ELISA indicated that the increased glucose uptake may be due to increases in peroxisome proliferator-activated receptor γ (PPARγ) activity and glucose transporter-4 (GLUT-4) expression. These results indicate that the isolated phenylpropanoid glucosides, particularly compound 4, have the potential to be developed into antidiabetic compounds.",
"title": ""
},
{
"docid": "b97208934c9475bc9d9bb3a095826a15",
"text": "Article history: Received 12 February 2014 Received in revised form 13 August 2014 Accepted 29 August 2014 Available online 8 September 2014",
"title": ""
},
{
"docid": "2c226c7be6acf725190c72a64bfcdf91",
"text": "The past decade has witnessed the rapid evolution in blockchain technologies, which has attracted tremendous interests from both the research communities and industries. The blockchain network was originated from the Internet financial sector as a decentralized, immutable ledger system for transactional data ordering. Nowadays, it is envisioned as a powerful backbone/framework for decentralized data processing and datadriven self-organization in flat, open-access networks. In particular, the plausible characteristics of decentralization, immutability and self-organization are primarily owing to the unique decentralized consensus mechanisms introduced by blockchain networks. This survey is motivated by the lack of a comprehensive literature review on the development of decentralized consensus mechanisms in blockchain networks. In this survey, we provide a systematic vision of the organization of blockchain networks. By emphasizing the unique characteristics of incentivized consensus in blockchain networks, our in-depth review of the state-ofthe-art consensus protocols is focused on both the perspective of distributed consensus system design and the perspective of incentive mechanism design. From a game-theoretic point of view, we also provide a thorough review on the strategy adoption for self-organization by the individual nodes in the blockchain backbone networks. Consequently, we provide a comprehensive survey on the emerging applications of the blockchain networks in a wide range of areas. We highlight our special interest in how the consensus mechanisms impact these applications. Finally, we discuss several open issues in the protocol design for blockchain consensus and the related potential research directions.",
"title": ""
},
{
"docid": "d87f336cc82cbd29df1f04095d98a7fb",
"text": "The academic publishing world is changing significantly, with ever-growing numbers of publications each year and shifting publishing patterns. However, the metrics used to measure academic success, such as the number of publications, citation number, and impact factor, have not changed for decades. Moreover, recent studies indicate that these metrics have become targets and follow Goodhart’s Law, according to which “when a measure becomes a target, it ceases to be a good measure.” In this study, we analyzed over 120 million papers to examine how the academic publishing world has evolved over the last century. Our study shows that the validity of citation-based measures is being compromised and their usefulness is lessening. In particular, the number of publications has ceased to be a good metric as a result of longer author lists, shorter papers, and surging publication numbers. Citation-based metrics, such citation number and h-index, are likewise affected by the flood of papers, self-citations, and lengthy reference lists. Measures such as a journal’s impact factor have also ceased to be good metrics due to the soaring numbers of papers that are published in top journals, particularly from the same pool of authors. Moreover, by analyzing properties of over 2600 research fields, we observed that citation-based metrics are not beneficial for comparing researchers in different fields, or even in the same department. Academic publishing has changed considerably; now we need to reconsider how we measure success. Multimedia Links I Interactive Data Visualization I Code Tutorials I Fields-of-Study Features Table",
"title": ""
},
{
"docid": "1fba9ed825604e8afde8459a3d3dc0c0",
"text": "Person re-identification (re-ID) models trained on one domain often fail to generalize well to another. In our attempt, we present a \"learning via translation\" framework. In the baseline, we translate the labeled images from source to target domain in an unsupervised manner. We then train re-ID models with the translated images by supervised methods. Yet, being an essential part of this framework, unsupervised image-image translation suffers from the information loss of source-domain labels during translation. Our motivation is two-fold. First, for each image, the discriminative cues contained in its ID label should be maintained after translation. Second, given the fact that two domains have entirely different persons, a translated image should be dissimilar to any of the target IDs. To this end, we propose to preserve two types of unsupervised similarities, 1) self-similarity of an image before and after translation, and 2) domain-dissimilarity of a translated source image and a target image. Both constraints are implemented in the similarity preserving generative adversarial network (SPGAN) which consists of an Siamese network and a CycleGAN. Through domain adaptation experiment, we show that images generated by SPGAN are more suitable for domain adaptation and yield consistent and competitive re-ID accuracy on two large-scale datasets.",
"title": ""
},
{
"docid": "b75336a7470fe2b002e742dbb6bfa8d5",
"text": "In Intelligent Tutoring System (ITS), tracing the student's knowledge state during learning has been studied for several decades in order to provide more supportive learning instructions. In this paper, we propose a novel model for knowledge tracing that i) captures students' learning ability and dynamically assigns students into distinct groups with similar ability at regular time intervals, and ii) combines this information with a Recurrent Neural Network architecture known as Deep Knowledge Tracing. Experimental results confirm that the proposed model is significantly better at predicting student performance than well known state-of-the-art techniques for student modelling.",
"title": ""
},
{
"docid": "238620ca0d9dbb9a4b11756630db5510",
"text": "this planet and many oceanic and maritime applications seem relatively slow in exploiting the state-of-the-art info-communication technologies. The natural and man-made disasters that have taken place over the last few years have aroused significant interest in monitoring oceanic environments for scientific, environmental, commercial, safety, homeland security and military needs. The shipbuilding and offshore engineering industries are also increasingly interested in technologies like sensor networks as an economically viable alternative to currently adopted and costly methods used in seismic monitoring, structural health monitoring, installation and mooring, etc. Underwater sensor networks (UWSNs) are the enabling technology for wide range of applications like monitoring the strong influences and impact of climate regulation, nutrient production, oil retrieval and transportation The underwater environment differs from the terrestrial radio environment both in terms of its energy costs and channel propagation phenomena. The underwater channel is characterized by long propagation times and frequency-dependent attenuation that is highly affected by the distance between nodes as well as by the link orientation. Some of other issues in which UWSNs differ from terrestrial are limited bandwidth, constrained battery power, more failure of sensors because of fouling and corrosion, etc. This paper presents several fundamental key aspects and architectures of UWSNs, emerging research issues of underwater sensor networks and exposes the researchers into networking of underwater communication devices for exciting ocean monitoring and exploration applications. I. INTRODUCTION The Earth is a water planet. Around 70% of the surface of earth is covered by water. This is largely unexplored area and recently it has fascinated humans to explore it. Natural or man-made disasters that have taken place over the last few years have aroused significant interest in monitoring oceanic environments for scientific, environmental, commercial, safety, homeland security and military needs. The shipbuilding and offshore engineering industries are also increasingly interested in technologies like wireless sensor",
"title": ""
},
{
"docid": "85657981b55e3a87e74238cd373b3db6",
"text": "INTRODUCTION\nLung cancer mortality rates remain at unacceptably high levels. Although mitochondrial dysfunction is a characteristic of most tumor types, mitochondrial dynamics are often overlooked. Altered rates of mitochondrial fission and fusion are observed in lung cancer and can influence metabolic function, proliferation and cell survival.\n\n\nAREAS COVERED\nIn this review, the authors outline the mechanisms of mitochondrial fission and fusion. They also identify key regulatory proteins and highlight the roles of fission and fusion in metabolism and other cellular functions (e.g., proliferation, apoptosis) with an emphasis on lung cancer and the interaction with known cancer biomarkers. They also examine the current therapeutic strategies reported as altering mitochondrial dynamics and review emerging mitochondria-targeted therapies.\n\n\nEXPERT OPINION\nMitochondrial dynamics are an attractive target for therapeutic intervention in lung cancer. Mitochondrial dysfunction, despite its molecular heterogeneity, is a common abnormality of lung cancer. Targeting mitochondrial dynamics can alter mitochondrial metabolism, and many current therapies already non-specifically affect mitochondrial dynamics. A better understanding of mitochondrial dynamics and their interaction with currently identified cancer 'drivers' such as Kirsten-Rat Sarcoma Viral Oncogene homolog will lead to the development of novel therapeutics.",
"title": ""
},
{
"docid": "bed9bdf4d4965610b85378f2fdbfab2a",
"text": "Application of data mining techniques to the World Wide Web, referred to as Web mining, has been the focus of several recent research projects and papers. However, there is n o established vocabulary, leading to confusion when comparing research efforts. The t e r m W e b mining has been used in two distinct ways. T h e first, called Web content mining in this paper, is the process of information discovery f rom sources across the World Wide Web. The second, called Web m a g e mining, is the process of mining f o r user browsing and access patterns. I n this paper we define W e b mining and present an overview of the various research issues, techniques, and development e f forts . W e briefly describe W E B M I N E R , a system for Web usage mining, and conclude this paper by listing research issues.",
"title": ""
},
{
"docid": "809384abcd6e402c1b30c3d2dfa75aa1",
"text": "Traditionally, psychiatry has offered clinical insights through keen behavioral observation and a deep study of emotion. With the subsequent biological revolution in psychiatry displacing psychoanalysis, some psychiatrists were concerned that the field shifted from “brainless” to “mindless.”1 Over the past 4 decades, behavioral expertise, once the strength of psychiatry, has diminished in importanceaspsychiatricresearchfocusedonpharmacology,genomics, and neuroscience, and much of psychiatric practicehasbecomeaseriesofbriefclinical interactionsfocused on medication management. In research settings, assigning a diagnosis from the Diagnostic and Statistical Manual of Mental Disorders has become a surrogate for behavioral observation. In practice, few clinicians measure emotion, cognition, or behavior with any standard, validated tools. Some recent changes in both research and practice are promising. The National Institute of Mental Health has led an effort to create a new diagnostic approach for researchers that is intended to combine biological, behavioral, and social factors to create “precision medicine for psychiatry.”2 Although this Research Domain Criteria project has been controversial, the ensuing debate has been",
"title": ""
},
{
"docid": "64d3ecaa2f9e850cb26aac0265260aff",
"text": "The case of the Frankfurt Airport attack in 2011 in which a 21-year-old man shot several U.S. soldiers, murdering 2 U.S. airmen and severely wounding 2 others, is assessed with the Terrorist Radicalization Assessment Protocol (TRAP-18). The study is based on an extensive qualitative analysis of investigation and court files focusing on the complex interconnection among offender personality, specific opportunity structures, and social contexts. The role of distal psychological factors and proximal warning behaviors in the run up to the deed are discussed. Although in this case the proximal behaviors of fixation on a cause and identification as a “soldier” for the cause developed over years, we observed only a very brief and accelerated pathway toward the violent act. This represents an important change in the demands placed upon threat assessors.",
"title": ""
}
] | scidocsrr |
f97d72f8e43ed080e21db780ff110aa4 | Tropical rat mites (Ornithonyssus bacoti) - serious ectoparasites. | [
{
"docid": "5d7d7a49b254e08c95e40a3bed0aa10e",
"text": "Five mentally handicapped individuals living in a home for disabled persons in Southern Germany were seen in our outpatient department with pruritic, red papules predominantly located in groups on the upper extremities, neck, upper trunk and face. Over several weeks 40 inhabitants and 5 caretakers were affected by the same rash. Inspection of their home and the sheds nearby disclosed infestation with rat populations and mites. Finally the diagnosis of tropical rat mite dermatitis was made by the identification of the arthropod Ornithonyssus bacoti or so-called tropical rat mite. The patients were treated with topical corticosteroids and antihistamines. After elimination of the rats and disinfection of the rooms by a professional exterminator no new cases of rat mite dermatitis occurred. The tropical rat mite is an external parasite occurring on rats, mice, gerbils, hamsters and various other small mammals. When the principal animal host is not available, human beings can become the victim of mite infestation.",
"title": ""
}
] | [
{
"docid": "447e62529ed6b1b428e6edd78aabb637",
"text": "Dexterity robotic hands can (Cummings, 1996) greatly enhance the functionality of humanoid robots, but the making of such hands with not only human-like appearance but also the capability of performing the natural movement of social robots is a challenging problem. The first challenge is to create the hand’s articulated structure and the second challenge is to actuate it to move like a human hand. A robotic hand for humanoid robot should look and behave human like. At the same time, it also needs to be light and cheap for widely used purposes. We start with studying the biomechanical features of a human hand and propose a simplified mechanical model of robotic hands, which can achieve the important local motions of the hand. Then, we use 3D modeling techniques to create a single interlocked hand model that integrates pin and ball joints to our hand model. Compared to other robotic hands, our design saves the time required for assembling and adjusting, which makes our robotic hand ready-to-use right after the 3D printing is completed. Finally, the actuation of the hand is realized by cables and motors. Based on this approach, we have designed a cost-effective, 3D printable, compact, and lightweight robotic hand. Our robotic hand weighs 150 g, has 15 joints, which are similar to a real human hand, and 6 Degree of Freedom (DOFs). It is actuated by only six small size actuators. The wrist connecting part is also integrated into the hand model and could be customized for different robots such as Nadine robot (Magnenat Thalmann et al., 2017). The compact servo bed can be hidden inside the Nadine robot’s sleeve and the whole robotic hand platform will not cause extra load to her arm as the total weight (150 g robotic hand and 162 g artificial skin) is almost the same as her previous unarticulated robotic hand which is 348 g. The paper also shows our test results with and without silicon artificial hand skin, and on Nadine robot.",
"title": ""
},
{
"docid": "7d0dfce24bd539cb790c0c25348d075d",
"text": "When learning from positive and unlabelled data, it is a strong assumption that the positive observations are randomly sampled from the distribution of X conditional on Y = 1, where X stands for the feature and Y the label. Most existing algorithms are optimally designed under the assumption. However, for many realworld applications, the observed positive examples are dependent on the conditional probability P (Y = 1|X) and should be sampled biasedly. In this paper, we assume that a positive example with a higher P (Y = 1|X) is more likely to be labelled and propose a probabilistic-gap based PU learning algorithms. Specically, by treating the unlabelled data as noisy negative examples, we could automatically label a group positive and negative examples whose labels are identical to the ones assigned by a Bayesian optimal classier with a consistency guarantee. e relabelled examples have a biased domain, which is remedied by the kernel mean matching technique. e proposed algorithm is model-free and thus do not have any parameters to tune. Experimental results demonstrate that our method works well on both generated and real-world datasets. ∗UBTECH Sydney Articial Intelligence Centre and the School of Information Technologies, Faculty of Engineering and Information Technologies, e University of Sydney, Darlington, NSW 2008, Australia, fehe7727@uni.sydney.edu.au; tongliang.liu@sydney.edu.au; dacheng.tao@sydney.edu.au. †Faculty of Information Technology, Monash University, Clayton, VIC 3800, Australia, geo.webb@monash.edu. 1 ar X iv :1 80 8. 02 18 0v 1 [ cs .L G ] 7 A ug 2 01 8",
"title": ""
},
{
"docid": "af0178d0bb154c3995732e63b94842ca",
"text": "Cyborg intelligence is an emerging kind of intelligence paradigm. It aims to deeply integrate machine intelligence with biological intelligence by connecting machines and living beings via neural interfaces, enhancing strength by combining the biological cognition capability with the machine computational capability. Cyborg intelligence is considered to be a new way to augment living beings with machine intelligence. In this paper, we build rat cyborgs to demonstrate how they can expedite the maze escape task with integration of machine intelligence. We compare the performance of maze solving by computer, by individual rats, and by computer-aided rats (i.e. rat cyborgs). They were asked to find their way from a constant entrance to a constant exit in fourteen diverse mazes. Performance of maze solving was measured by steps, coverage rates, and time spent. The experimental results with six rats and their intelligence-augmented rat cyborgs show that rat cyborgs have the best performance in escaping from mazes. These results provide a proof-of-principle demonstration for cyborg intelligence. In addition, our novel cyborg intelligent system (rat cyborg) has great potential in various applications, such as search and rescue in complex terrains.",
"title": ""
},
{
"docid": "b4ac5df370c0df5fdb3150afffd9158b",
"text": "The aggregation of many independent estimates can outperform the most accurate individual judgement 1–3 . This centenarian finding 1,2 , popularly known as the 'wisdom of crowds' 3 , has been applied to problems ranging from the diagnosis of cancer 4 to financial forecasting 5 . It is widely believed that social influence undermines collective wisdom by reducing the diversity of opinions within the crowd. Here, we show that if a large crowd is structured in small independent groups, deliberation and social influence within groups improve the crowd’s collective accuracy. We asked a live crowd (N = 5,180) to respond to general-knowledge questions (for example, \"What is the height of the Eiffel Tower?\"). Participants first answered individually, then deliberated and made consensus decisions in groups of five, and finally provided revised individual estimates. We found that averaging consensus decisions was substantially more accurate than aggregating the initial independent opinions. Remarkably, combining as few as four consensus choices outperformed the wisdom of thousands of individuals. The collective wisdom of crowds often provides better answers to problems than individual judgements. Here, a large experiment that split a crowd into many small deliberative groups produced better estimates than the average of all answers in the crowd.",
"title": ""
},
{
"docid": "7fe0c40d6f62d24b4fb565d3341c1422",
"text": "Instead of a standard support vector machine (SVM) that classifies points by assigning them to one of two disjoint half-spaces, points are classified by assigning them to the closest of two parallel planes (in input or feature space) that are pushed apart as far as possible. This formulation, which can also be interpreted as regularized least squares and considered in the much more general context of regularized networks [8, 9], leads to an extremely fast and simple algorithm for generating a linear or nonlinear classifier that merely requires the solution of a single system of linear equations. In contrast, standard SVMs solve a quadratic or a linear program that require considerably longer computational time. Computational results on publicly available datasets indicate that the proposed proximal SVM classifier has comparable test set correctness to that of standard SVM classifiers, but with considerably faster computational time that can be an order of magnitude faster. The linear proximal SVM can easily handle large datasets as indicated by the classification of a 2 million point 10-attribute set in 20.8 seconds. All computational results are based on 6 lines of MATLAB code.",
"title": ""
},
{
"docid": "f01a1679095a163894660cb0748334d3",
"text": "We present a novel approach for event extraction and abstraction from movie descriptions. Our event frame consists of ‘who”, “did what” “to whom”, “where”, and “when”. We formulate our problem using a recurrent neural network, enhanced with structural features extracted from syntactic parser, and trained using curriculum learning by progressively increasing the difficulty of the sentences. Our model serves as an intermediate step towards question answering systems, visual storytelling, and story completion tasks. We evaluate our approach on MovieQA dataset.",
"title": ""
},
{
"docid": "130efef512294d14094a900693efebfd",
"text": "Metaphor comprehension involves an interaction between the meaning of the topic and the vehicle terms of the metaphor. Meaning is represented by vectors in a high-dimensional semantic space. Predication modifies the topic vector by merging it with selected features of the vehicle vector. The resulting metaphor vector can be evaluated by comparing it with known landmarks in the semantic space. Thus, metaphorical prediction is treated in the present model in exactly the same way as literal predication. Some experimental results concerning metaphor comprehension are simulated within this framework, such as the nonreversibility of metaphors, priming of metaphors with literal statements, and priming of literal statements with metaphors.",
"title": ""
},
{
"docid": "c8e23bc60783125d5bf489cddd3e8290",
"text": "An efficient probabilistic algorithm for the concurrent mapping and localization problem that arises in mobile robotics is presented. The algorithm addresses the problem in which a team of robots builds a map on-line while simultaneously accommodating errors in the robots’ odometry. At the core of the algorithm is a technique that combines fast maximum likelihood map growing with a Monte Carlo localizer that uses particle representations. The combination of both yields an on-line algorithm that can cope with large odometric errors typically found when mapping environments with cycles. The algorithm can be implemented in a distributed manner on multiple robot platforms, enabling a team of robots to cooperatively generate a single map of their environment. Finally, an extension is described for acquiring three-dimensional maps, which capture the structure and visual appearance of indoor environments in three dimensions. KEY WORDS—mobile robotics, map acquisition, localization, robotic exploration, multi-robot systems, threedimensional modeling",
"title": ""
},
{
"docid": "b69f7c0db77c3012ae5e550b23a313fb",
"text": "Speckle noise is an inherent property of medical ultrasound imaging, and it generally tends to reduce the image resolution and contrast, thereby reducing the diagnostic value of this imaging modality. As a result, speckle noise reduction is an important prerequisite, whenever ultrasound imaging is used for tissue characterization. Among the many methods that have been proposed to perform this task, there exists a class of approaches that use a multiplicative model of speckled image formation and take advantage of the logarithmical transformation in order to convert multiplicative speckle noise into additive noise. The common assumption made in a dominant number of such studies is that the samples of the additive noise are mutually uncorrelated and obey a Gaussian distribution. The present study shows conceptually and experimentally that this assumption is oversimplified and unnatural. Moreover, it may lead to inadequate performance of the speckle reduction methods. The study introduces a simple preprocessing procedure, which modifies the acquired radio-frequency images (without affecting the anatomical information they contain), so that the noise in the log-transformation domain becomes very close in its behavior to a white Gaussian noise. As a result, the preprocessing allows filtering methods based on assuming the noise to be white and Gaussian, to perform in nearly optimal conditions. The study evaluates performances of three different, nonlinear filters - wavelet denoising, total variation filtering, and anisotropic diffusion - and demonstrates that, in all these cases, the proposed preprocessing significantly improves the quality of resultant images. Our numerical tests include a series of computer-simulated and in vivo experiments.",
"title": ""
},
{
"docid": "84f2072f32d2a29d372eef0f4622ddce",
"text": "This paper presents a new methodology for synthesis of broadband equivalent circuits for multi-port high speed interconnect systems from numerically obtained and/or measured frequency-domain and time-domain response data. The equivalent circuit synthesis is based on the rational function fitting of admittance matrix, which combines the frequency-domain vector fitting process, VECTFIT with its time-domain analog, TDVF to yield a robust and versatile fitting algorithm. The generated rational fit is directly converted into a SPICE-compatible circuit after passivity enforcement. The accuracy of the resulting algorithm is demonstrated through its application to the fitting of the admittance matrix of a power/ground plane structure",
"title": ""
},
{
"docid": "e36e0c8659b8bae3acf0f178fce362c3",
"text": "Clinical data describing the phenotypes and treatment of patients represents an underused data source that has much greater research potential than is currently realized. Mining of electronic health records (EHRs) has the potential for establishing new patient-stratification principles and for revealing unknown disease correlations. Integrating EHR data with genetic data will also give a finer understanding of genotype–phenotype relationships. However, a broad range of ethical, legal and technical reasons currently hinder the systematic deposition of these data in EHRs and their mining. Here, we consider the potential for furthering medical research and clinical care using EHR data and the challenges that must be overcome before this is a reality.",
"title": ""
},
{
"docid": "56c5ec77f7b39692d8b0d5da0e14f82a",
"text": "Using tweets extracted from Twitter during the Australian 2010-2011 floods, social network analysis techniques were used to generate and analyse the online networks that emerged at that time. The aim was to develop an understanding of the online communities for the Queensland, New South Wales and Victorian floods in order to identify active players and their effectiveness in disseminating critical information. A secondary goal was to identify important online resources disseminated by these communities. Important and effective players during the Queensland floods were found to be: local authorities (mainly the Queensland Police Services), political personalities (Queensland Premier, Prime Minister, Opposition Leader, Member of Parliament), social media volunteers, traditional media reporters, and people from not-for-profit, humanitarian, and community associations. A range of important resources were identified during the Queensland flood; however, they appeared to be of a more general information nature rather than vital information and updates on the disaster. Unlike Queensland, there was no evidence of Twitter activity from the part of local authorities and the government in the New South Wales and Victorian floods. Furthermore, the level of Twitter activity during the NSW floods was almost nil. Most of the active players during the NSW and Victorian floods were volunteers who were active during the Queensland floods. Given the positive results obtained by the active involvement of the local authorities and government officials in Queensland, and the increasing adoption of Twitter in other parts of the world for emergency situations, it seems reasonable to push for greater adoption of Twitter from local and federal authorities Australia-wide during periods of mass emergencies.",
"title": ""
},
{
"docid": "9d37baf5ce33826a59cc7bd0fd7955c0",
"text": "A digital image analysis method previously used to evaluate leaf color changes due to nutritional changes was modified to measure the severity of several foliar fungal diseases. Images captured with a flatbed scanner or digital camera were analyzed with a freely available software package, Scion Image, to measure changes in leaf color caused by fungal sporulation or tissue damage. High correlations were observed between the percent diseased leaf area estimated by Scion Image analysis and the percent diseased leaf area from leaf drawings. These drawings of various foliar diseases came from a disease key previously developed to aid in visual estimation of disease severity. For leaves of Nicotiana benthamiana inoculated with different spore concentrations of the anthracnose fungus Colletotrichum destructivum, a high correlation was found between the percent diseased tissue measured by Scion Image analysis and the number of leaf spots. The method was adapted to quantify percent diseased leaf area ranging from 0 to 90% for anthracnose of lily-of-the-valley, apple scab, powdery mildew of phlox and rust of golden rod. In some cases, the brightness and contrast of the images were adjusted and other modifications were made, but these were standardized for each disease. Detached leaves were used with the flatbed scanner, but a method using attached leaves with a digital camera was also developed to make serial measurements of individual leaves to quantify symptom progression. This was successfully applied to monitor anthracnose on N. benthamiana leaves. Digital image analysis using Scion Image software is a useful tool for quantifying a wide variety of fungal interactions with plant leaves.",
"title": ""
},
{
"docid": "d46434bbbf73460bf422ebe4bd65b590",
"text": "We present an efficient block-diagonal approximation to the Gauss-Newton matrix for feedforward neural networks. Our resulting algorithm is competitive against state-of-the-art first-order optimisation methods, with sometimes significant improvement in optimisation performance. Unlike first-order methods, for which hyperparameter tuning of the optimisation parameters is often a laborious process, our approach can provide good performance even when used with default settings. A side result of our work is that for piecewise linear transfer functions, the network objective function can have no differentiable local maxima, which may partially explain why such transfer functions facilitate effective optimisation.",
"title": ""
},
{
"docid": "7830c4737197e84a247349f2e586424e",
"text": "This paper describes VPL, a Virtual Programming Lab module for Moodle, developed at the University of Las Palmas of Gran Canaria (ULPGC) and released for free uses under GNU/GPL license. For the students, it is a simple development environment with auto evaluation capabilities. For the instructors, it is a students' work management system, with features to facilitate the preparation of assignments, manage the submissions, check for plagiarism, and do assessments with the aid of powerful and flexible assessment tools based on program testing, all of that being independent of the programming language used for the assignments and taken into account critical security issues.",
"title": ""
},
{
"docid": "1241bc6b7d3522fe9e285ae843976524",
"text": "In many new high performance designs, the leakage component of power consumption is comparable to the switching component. Reports indicate that 40% or even higher percentage of the total power consumption is due to the leakage of transistors. This percentage will increase with technology scaling unless effective techniques are introduced to bring leakage under control. This article focuses on circuit optimization and design automation techniques to accomplish this goal. The first part of the article provides an overview of basic physics and process scaling trends that have resulted in a significant increase in the leakage currents in CMOS circuits. This part also distinguishes between the standby and active components of the leakage current. The second part of the article describes a number of circuit optimization techniques for controlling the standby leakage current, including power gating and body bias control. The third part of the article presents techniques for active leakage control, including use of multiple-threshold cells, long channel devices, input vector design, transistor stacking to switching noise, and sizing with simultaneous threshold and supply voltage assignment.",
"title": ""
},
{
"docid": "51cd0219f96b4ae6984df37ed439bbaa",
"text": "This paper introduces an unsupervised framework to extract semantically rich features for video representation. Inspired by how the human visual system groups objects based on motion cues, we propose a deep convolutional neural network that disentangles motion, foreground and background information. The proposed architecture consists of a 3D convolutional feature encoder for blocks of 16 frames, which is trained for reconstruction tasks over the first and last frames of the sequence. A preliminary supervised experiment was conducted to verify the feasibility of proposed method by training the model with a fraction of videos from the UCF-101 dataset taking as ground truth the bounding boxes around the activity regions. Qualitative results indicate that the network can successfully segment foreground and background in videos as well as update the foreground appearance based on disentangled motion features. The benefits of these learned features are shown in a discriminative classification task, where initializing the network with the proposed pretraining method outperforms both random initialization and autoencoder pretraining. Our model and source code are publicly available at https: //allenovo.github.io/cvprw17_webpage/ .",
"title": ""
},
{
"docid": "ad9a94a4deafceedccdd5f4164cde293",
"text": "In this paper, we investigate the application of machine learning techniques and word embeddings to the task of Recognizing Textual Entailment (RTE) in Social Media. We look at a manually labeled dataset (Lendvai et al., 2016) consisting of user generated short texts posted on Twitter (tweets) and related to four recent media events (the Charlie Hebdo shooting, the Ottawa shooting, the Sydney Siege, and the German Wings crash) and test to what extent neural techniques and embeddings are able to distinguish between tweets that entail or contradict each other or that claim unrelated things. We obtain comparable results to the state of the art in a train-test setting, but we show that, due to the noisy aspect of the data, results plummet in an evaluation strategy crafted to better simulate a real-life train-test scenario.",
"title": ""
},
{
"docid": "896fe681f79ef025a6058a51dd4f19c0",
"text": "Semantic parsing is the construction of a complete, formal, symbolic meaning representation of a sentence. While it is crucial to natural language understanding, the problem of semantic parsing has received relatively little attention from the machine learning community. Recent work on natural language understanding has mainly focused on shallow semantic analysis, such as word-sense disambiguation and semantic role labeling. Semantic parsing, on the other hand, involves deep semantic analysis in which word senses, semantic roles and other components are combined to produce useful meaning representations for a particular application domain (e.g. database query). Prior research in machine learning for semantic parsing is mainly based on inductive logic programming or deterministic parsing, which lack some of the robustness that characterizes statistical learning. Existing statistical approaches to semantic parsing, however, are mostly concerned with relatively simple application domains in which a meaning representation is no more than a single semantic frame. In this proposal, we present a novel statistical approach to semantic parsing, WASP, which can handle meaning representations with a nested structure. The WASP algorithm learns a semantic parser given a set of sentences annotated with their correct meaning representations. The parsing model is based on the synchronous context-free grammar, where each rule maps a natural-language substring to its meaning representation. The main innovation of the algorithm is its use of state-of-the-art statistical machine translation techniques. A statistical word alignment model is used for lexical acquisition, and the parsing model itself can be seen as an instance of a syntax-based translation model. In initial evaluation on several real-world data sets, we show that WASP performs favorably in terms of both accuracy and coverage compared to existing learning methods requiring similar amount of supervision, and shows better robustness to variations in task complexity and word order. In future work, we intend to pursue several directions in developing accurate semantic parsers for a variety of application domains. This will involve exploiting prior knowledge about the natural-language syntax and the application domain. We also plan to construct a syntax-aware word-based alignment model for lexical acquisition. Finally, we will generalize the learning algorithm to handle contextdependent sentences and accept noisy training data.",
"title": ""
},
{
"docid": "6a455fd9c86feb287a3c5a103bb681de",
"text": "This paper presents two approaches to semantic search by incorporating Linked Data annotations of documents into a Generalized Vector Space Model. One model exploits taxonomic relationships among entities in documents and queries, while the other model computes term weights based on semantic relationships within a document. We publish an evaluation dataset with annotated documents and queries as well as user-rated relevance assessments. The evaluation on this dataset shows significant improvements of both models over traditional keyword based search.",
"title": ""
}
] | scidocsrr |
df7ea4f56972e28521968146f39b8ee3 | Machine Learning-based Software Testing: Towards a Classification Framework | [
{
"docid": "112ecbb8547619577962298fbe65eae1",
"text": "In the context of open source development or software evolution, developers often face test suites which have been developed with no apparent rationale and which may need to be augmented or refined to ensure sufficient dependability, or even reduced to meet tight deadlines. We refer to this process as the re-engineering of test suites. It is important to provide both methodological and tool support to help people understand the limitations of test suites and their possible redundancies, so as to be able to refine them in a cost effective manner. To address this problem in the case of black-box, Category-Partition testing, we propose a methodology and a tool based on machine learning that has shown promising results on a case study involving students as testers. 2009 Elsevier B.V. All rights reserved.",
"title": ""
}
] | [
{
"docid": "b886b54f77168eab82e449b7e5cd3aac",
"text": "BACKGROUND\nLow desire is the most common sexual problem in women at midlife. Prevalence data are limited by lack of validated instruments or exclusion of un-partnered or sexually inactive women.\n\n\nAIM\nTo document the prevalence of and factors associated with low desire, sexually related personal distress, and hypoactive sexual desire dysfunction (HSDD) using validated instruments.\n\n\nMETHODS\nCross-sectional, nationally representative, community-based sample of 2,020 Australian women 40 to 65 years old.\n\n\nOUTCOMES\nLow desire was defined as a score no higher than 5.0 on the desire domain of the Female Sexual Function Index (FSFI); sexually related personal distress was defined as a score of at least 11.0 on the Female Sexual Distress Scale-Revised; and HSDD was defined as a combination of these scores. The Menopause Specific Quality of Life Questionnaire was used to document menopausal vasomotor symptoms. The Beck Depression Inventory-II was used to identify moderate to severe depressive symptoms (score ≥ 20).\n\n\nRESULTS\nThe prevalence of low desire was 69.3% (95% CI = 67.3-71.3), that of sexually related personal distress was 40.5% (95% CI = 38.4-42.6), and that of HSDD was 32.2% (95% CI = 30.1-34.2). Of women who were not partnered or sexually active, 32.4% (95% CI = 24.4-40.2) reported sexually related personal distress. Factors associated with HSDD in an adjusted logistic regression model included being partnered (odds ratio [OR] = 3.30, 95% CI = 2.46-4.41), consuming alcohol (OR = 1.48, 95% CI = 1.16-1.89), vaginal dryness (OR = 2.08, 95% CI = 1.66-2.61), pain during or after intercourse (OR = 1.63, 95% CI = 1.27-2.09), moderate to severe depressive symptoms (OR = 2.69, 95% CI 1.99-3.64), and use of psychotropic medication (OR = 1.42, 95% CI = 1.10-1.83). Vasomotor symptoms were not associated with low desire, sexually related personal distress, or HSDD.\n\n\nCLINICAL IMPLICATIONS\nGiven the high prevalence, clinicians should screen midlife women for HSDD.\n\n\nSTRENGTHS AND LIMITATIONS\nStrengths include the large size and representative nature of the sample and the use of validated tools. Limitations include the requirement to complete a written questionnaire in English. Questions within the FSFI limit the applicability of FSFI total scores, but not desire domain scores, in recently sexually inactive women, women without a partner, and women who do not engage in penetrative intercourse.\n\n\nCONCLUSIONS\nLow desire, sexually related personal distress, and HSDD are common in women at midlife, including women who are un-partnered or sexually inactive. Some factors associated with HSDD, such as psychotropic medication use and vaginal dryness, are modifiable or can be treated with safe and effective therapies. Worsley R, Bell RJ, Gartoulla P, Davis SR. Prevalence and Predictors of Low Sexual Desire, Sexually Related Personal Distress, and Hypoactive Sexual Desire Dysfunction in a Community-Based Sample of Midlife Women. J Sex Med 2017;14:675-686.",
"title": ""
},
{
"docid": "88cf953ba92b54f89cdecebd4153bee3",
"text": "In this paper, we propose a novel object detection framework named \"Deep Regionlets\" by establishing a bridge between deep neural networks and conventional detection schema for accurate generic object detection. Motivated by the abilities of regionlets for modeling object deformation and multiple aspect ratios, we incorporate regionlets into an end-to-end trainable deep learning framework. The deep regionlets framework consists of a region selection network and a deep regionlet learning module. Specifically, given a detection bounding box proposal, the region selection network provides guidance on where to select regions to learn the features from. The regionlet learning module focuses on local feature selection and transformation to alleviate local variations. To this end, we first realize non-rectangular region selection within the detection framework to accommodate variations in object appearance. Moreover, we design a “gating network\" within the regionlet leaning module to enable soft regionlet selection and pooling. The Deep Regionlets framework is trained end-to-end without additional efforts. We perform ablation studies and conduct extensive experiments on the PASCAL VOC and Microsoft COCO datasets. The proposed framework outperforms state-of-theart algorithms, such as RetinaNet and Mask R-CNN, even without additional segmentation labels.",
"title": ""
},
{
"docid": "8d4bf1b8b45bae6c506db5339e6d9025",
"text": "Sparse Matrix-Matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrixmatrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depend on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.",
"title": ""
},
{
"docid": "cb26bb277afc6d521c4c5960b35ed77d",
"text": "We propose a novel algorithm for the segmentation and prerecognition of offline handwritten Arabic text. Our character segmentation method over-segments each word, and then removes extra breakpoints using knowledge of letter shapes. On a test set of 200 images, 92.3% of the segmentation points were detected correctly, with 5.1% instances of over-segmentation. The prerecognition component annotates each detected letter with shape information, to be used for recognition in future work.",
"title": ""
},
{
"docid": "6131fdbfe28aaa303b1ee4c29a65f766",
"text": "Destination prediction is an essential task for many emerging location based applications such as recommending sightseeing places and targeted advertising based on destination. A common approach to destination prediction is to derive the probability of a location being the destination based on historical trajectories. However, existing techniques using this approach suffer from the “data sparsity problem”, i.e., the available historical trajectories is far from being able to cover all possible trajectories. This problem considerably limits the number of query trajectories that can obtain predicted destinations. We propose a novel method named Sub-Trajectory Synthesis (SubSyn) algorithm to address the data sparsity problem. SubSyn algorithm first decomposes historical trajectories into sub-trajectories comprising two neighbouring locations, and then connects the sub-trajectories into “synthesised” trajectories. The number of query trajectories that can have predicted destinations is exponentially increased by this means. Experiments based on real datasets show that SubSyn algorithm can predict destinations for up to ten times more query trajectories than a baseline algorithm while the SubSyn prediction algorithm runs over two orders of magnitude faster than the baseline algorithm. In this paper, we also consider the privacy protection issue in case an adversary uses SubSyn algorithm to derive sensitive location information of users. We propose an efficient algorithm to select a minimum number of locations a user has to hide on her trajectory in order to avoid privacy leak. Experiments also validate the high efficiency of the privacy protection algorithm.",
"title": ""
},
{
"docid": "aefade278a0af130e0c7923b704e2ee1",
"text": "Prediction of the risk in patients with upper gastrointestinal bleeding has been the subject of different studies for several decades. This study showed the significance of Forrest classification, used in initial endoscopic investigation for evaluation of bleeding lesion, for the prediction of rebleeding. Rockall and Blatchford risk score systems evaluate certain clinical, biochemical and endoscopic variables significant for the prediction of rebleeding as well as the final outcome of disease. The percentage of rebleeding in the group of studied patients in accordance with Forrest classification showed that the largest number of patients belonged to the FIIb group. The predictive evaluation of initial and definitive Rockall score was significantly associated with percentage of rebleeding, while Blatchfor score had boundary significance. Acta Medica Medianae 2007;46(4):38-43.",
"title": ""
},
{
"docid": "865cfae2da5ad3d1d10d21b1defdc448",
"text": "During the last decade, novel immunotherapeutic strategies, in particular antibodies directed against immune checkpoint inhibitors, have revolutionized the treatment of different malignancies leading to an improved survival of patients. Identification of immune-related biomarkers for diagnosis, prognosis, monitoring of immune responses and selection of patients for specific cancer immunotherapies is urgently required and therefore areas of intensive research. Easily accessible samples in particular liquid biopsies (body fluids), such as blood, saliva or urine, are preferred for serial tumor biopsies.Although monitoring of immune and tumor responses prior, during and post immunotherapy has led to significant advances of patients' outcome, valid and stable prognostic biomarkers are still missing. This might be due to the limited capacity of the technologies employed, reproducibility of results as well as assay stability and validation of results. Therefore solid approaches to assess immune regulation and modulation as well as to follow up the nature of the tumor in liquid biopsies are urgently required to discover valuable and relevant biomarkers including sample preparation, timing of the collection and the type of liquid samples. This article summarizes our knowledge of the well-known liquid material in a new context as liquid biopsy and focuses on collection and assay requirements for the analysis and the technical developments that allow the implementation of different high-throughput assays to detect alterations at the genetic and immunologic level, which could be used for monitoring treatment efficiency, acquired therapy resistance mechanisms and the prognostic value of the liquid biopsies.",
"title": ""
},
{
"docid": "0525d981721fc8a85bb4daef78b6cbe9",
"text": "Cloud computing environments provide on-demand resource provisioning, allowing applications to elastically scale. However, application benchmarks currently being used to test cloud management systems are not designed for this purpose. This results in resource underprovisioning and quality-of-service (QoS) violations when systems tested using these benchmarks are deployed in production environments. We present C-MART, a benchmark designed to emulate a modern web application running in a cloud computing environment. It is designed using the cloud computing paradigm of elastic scalability at every application tier and utilizes modern web-based technologies such as HTML5, AJAX, jQuery, and SQLite. C-MART consists of a web application, client emulator, deployment server, and scaling API. The deployment server automatically deploys and configures the test environment in orders of magnitude less time than current benchmarks. The scaling API allows users to define and provision their own customized datacenter. The client emulator generates the web workload for the application by emulating complex and varied client behaviors, including decisions based on page content and prior history. We show that C-MART can detect problems in management systems that previous benchmarks fail to identify, such as an increase from 4.4 to 50 percent error in predicting server CPU utilization and resource underprovisioning in 22 percent of QoS measurements.",
"title": ""
},
{
"docid": "c988dc0e9be171a5fcb555aedcdf67e3",
"text": "Online social networks, such as Facebook, are increasingly utilized by many people. These networks allow users to publish details about themselves and to connect to their friends. Some of the information revealed inside these networks is meant to be private. Yet it is possible to use learning algorithms on released data to predict private information. In this paper, we explore how to launch inference attacks using released social networking data to predict private information. We then devise three possible sanitization techniques that could be used in various situations. Then, we explore the effectiveness of these techniques and attempt to use methods of collective inference to discover sensitive attributes of the data set. We show that we can decrease the effectiveness of both local and relational classification algorithms by using the sanitization methods we described.",
"title": ""
},
{
"docid": "f262aba2003f986012bbec1a9c2fcb83",
"text": "Hemiplegic migraine is a rare form of migraine with aura that involves motor aura (weakness). This type of migraine can occur as a sporadic or a familial disorder. Familial forms of hemiplegic migraine are dominantly inherited. Data from genetic studies have implicated mutations in genes that encode proteins involved in ion transportation. However, at least a quarter of the large families affected and most sporadic cases do not have a mutation in the three genes known to be implicated in this disorder, suggesting that other genes are still to be identified. Results from functional studies indicate that neuronal hyperexcitability has a pivotal role in the pathogenesis of hemiplegic migraine. The clinical manifestations of hemiplegic migraine range from attacks with short-duration hemiparesis to severe forms with recurrent coma and prolonged hemiparesis, permanent cerebellar ataxia, epilepsy, transient blindness, or mental retardation. Diagnosis relies on a careful patient history and exclusion of potential causes of symptomatic attacks. The principles of management are similar to those for common varieties of migraine, except that vasoconstrictors, including triptans, are historically contraindicated but are often used off-label to stop the headache, and prophylactic treatment can include lamotrigine and acetazolamide.",
"title": ""
},
{
"docid": "1b60ded506c85edd798fe0759cce57fa",
"text": "The studies of plant trait/disease refer to the studies of visually observable patterns of a particular plant. Nowadays crops face many traits/diseases. Damage of the insect is one of the major trait/disease. Insecticides are not always proved efficient because insecticides may be toxic to some kind of birds. It also damages natural animal food chains. A common practice for plant scientists is to estimate the damage of plant (leaf, stem) because of disease by an eye on a scale based on percentage of affected area. It results in subjectivity and low throughput. This paper provides a advances in various methods used to study plant diseases/traits using image processing. The methods studied are for increasing throughput & reducing subjectiveness arising from human experts in detecting the plant diseases.",
"title": ""
},
{
"docid": "67d41a84050f3bf9bc004e7c1787a2bc",
"text": "Facial aging is a complex process individualized by interaction with exogenous and endogenous factors. The upper lip is one of the facial components by which facial attractiveness is defined. Upper lip aging is significantly influenced by maxillary bone and teeth. Aging of the cutaneous part can be aggravated by solar radiation and smoking. We provide a review about minimally invasive techniques for correction of aging signs of the upper lip with a tailored approach to patient’s characteristics. The treatment is based upon use of fillers, laser, and minor surgery. Die Alterung des Gesichts ist ein komplexer Prozess, welcher durch die Wechselwirkung exogener und endogener Faktoren individuell geprägt wird. Die Oberlippe zählt zu den fazialen Komponenten, welche die Attraktivität des Gesichts definieren. Die Alterung der Oberlippe wird durch den Oberkieferknochen und die Zähne beeinflusst. Alterungsprozesse des kutanen Anteils können durch Sonnenbestrahlung und Rauchen aggraviert werden. Die Autoren stellen eine Übersicht zur den minimalinvasiven Verfahren der Korrektur altersbedingter Veränderungen der Oberlippe mit Individualisierung je nach Patientenmerkmalen vor. Die Technik basiert auf der Nutzung von Fillern, Lasern und kleineren chirurgischen Eingriffen.",
"title": ""
},
{
"docid": "572be2eb18bd929c2b4e482f7d3e0754",
"text": "• Supervised learning --where the algorithm generates a function that maps inputs to desired outputs. One standard formulation of the supervised learning task is the classification problem: the learner is required to learn (to approximate the behavior of) a function which maps a vector into one of several classes by looking at several input-output examples of the function. • Unsupervised learning --which models a set of inputs: labeled examples are not available. • Semi-supervised learning --which combines both labeled and unlabeled examples to generate an appropriate function or classifier. • Reinforcement learning --where the algorithm learns a policy of how to act given an observation of the world. Every action has some impact in the environment, and the environment provides feedback that guides the learning algorithm. • Transduction --similar to supervised learning, but does not explicitly construct a function: instead, tries to predict new outputs based on training inputs, training outputs, and new inputs. • Learning to learn --where the algorithm learns its own inductive bias based on previous experience.",
"title": ""
},
{
"docid": "47251c2ce233226b015a2482847dc48d",
"text": "Recent advances in computer graphics have made it possible to visualize mathematical models of biological structures and processes with unprecedented realism. The resulting images, animations, and interactive systems are useful as research and educational tools in developmental biology and ecology. Prospective applications also include computer-assisted landscape architecture, design of new varieties of plants, and crop yield prediction. In this paper we revisit foundations of the applications of L-systems to the modeling of plants, and we illustrate them using recently developed sample models.",
"title": ""
},
{
"docid": "9e5c123b6f744037436e0d5c917e8640",
"text": "Relational databases have limited support for data collaboration, where teams collaboratively curate and analyze large datasets. Inspired by software version control systems like git, we propose (a) a dataset version control system, giving users the ability to create, branch, merge, difference and search large, divergent collections of datasets, and (b) a platform, DATAHUB, that gives users the ability to perform collaborative data analysis building on this version control system. We outline the challenges in providing dataset version control at scale.",
"title": ""
},
{
"docid": "ea5697d417fe154be77d941c19d8a86e",
"text": "The foundations of functional programming languages are examined from both historical and technical perspectives. Their evolution is traced through several critical periods: early work on lambda calculus and combinatory calculus, Lisp, Iswim, FP, ML, and modern functional languages such as Miranda1 and Haskell. The fundamental premises on which the functional programming methodology stands are critically analyzed with respect to philosophical, theoretical, and pragmatic concerns. Particular attention is paid to the main features that characterize modern functional languages: higher-order functions, lazy evaluation, equations and pattern matching, strong static typing and type inference, and data abstraction. In addition, current research areas—such as parallelism, nondeterminism, input/output, and state-oriented computations—are examined with the goal of predicting the future development and application of functional languages.",
"title": ""
},
{
"docid": "68a5192778ae203ea1e31ba4e29b4330",
"text": "Mobile crowdsensing is becoming a vital technique for environment monitoring, infrastructure management, and social computing. However, deploying mobile crowdsensing applications in large-scale environments is not a trivial task. It creates a tremendous burden on application developers as well as mobile users. In this paper we try to reveal the barriers hampering the scale-up of mobile crowdsensing applications, and to offer our initial thoughts on the potential solutions to lowering the barriers.",
"title": ""
},
{
"docid": "19d8b6ff70581307e0a00c03b059964f",
"text": "We propose a novel approach for analysing time series using complex network theory. We identify the recurrence matrix (calculated from time series) with the adjacency matrix of a complex network and apply measures for the characterisation of complex networks to this recurrence matrix. By using the logistic map, we illustrate the potential of these complex network measures for the detection of dynamical transitions. Finally, we apply the proposed approach to a marine palaeo-climate record and identify the subtle changes to the climate regime.",
"title": ""
},
{
"docid": "038064c2998a5da8664be1ba493a0326",
"text": "The bandit problem is revisited and considered under the PAC model. Our main contribution in this part is to show that given n arms, it suffices to pull the arms O( n 2 log 1 δ ) times to find an -optimal arm with probability of at least 1 − δ. This is in contrast to the naive bound of O( n 2 log n δ ). We derive another algorithm whose complexity depends on the specific setting of the rewards, rather than the worst case setting. We also provide a matching lower bound. We show how given an algorithm for the PAC model Multi-Armed Bandit problem, one can derive a batch learning algorithm for Markov Decision Processes. This is done essentially by simulating Value Iteration, and in each iteration invoking the multi-armed bandit algorithm. Using our PAC algorithm for the multi-armed bandit problem we improve the dependence on the number of actions.",
"title": ""
}
] | scidocsrr |
7405964a85c0b239ba7e1c7f80564e15 | A Kernel Fuzzy c-Means Clustering-Based Fuzzy Support Vector Machine Algorithm for Classification Problems With Outliers or Noises | [
{
"docid": "700d3e2cb64624df33ef411215d073ab",
"text": "A novel type of learning machine called support vector machine (SVM) has been receiving increasing interest in areas ranging from its original application in pattern recognition to other applications such as regression estimation due to its remarkable generalization performance. This paper deals with the application of SVM in financial time series forecasting. The feasibility of applying SVM in financial forecasting is first examined by comparing it with the multilayer back-propagation (BP) neural network and the regularized radial basis function (RBF) neural network. The variability in performance of SVM with respect to the free parameters is investigated experimentally. Adaptive parameters are then proposed by incorporating the nonstationarity of financial time series into SVM. Five real futures contracts collated from the Chicago Mercantile Market are used as the data sets. The simulation shows that among the three methods, SVM outperforms the BP neural network in financial forecasting, and there are comparable generalization performance between SVM and the regularized RBF neural network. Furthermore, the free parameters of SVM have a great effect on the generalization performance. SVM with adaptive parameters can both achieve higher generalization performance and use fewer support vectors than the standard SVM in financial forecasting.",
"title": ""
}
] | [
{
"docid": "7fd33ebd4fec434dba53b15d741fdee4",
"text": "We present a data-efficient representation learning approach to learn video representation with small amount of labeled data. We propose a multitask learning model ActionFlowNet to train a single stream network directly from raw pixels to jointly estimate optical flow while recognizing actions with convolutional neural networks, capturing both appearance and motion in a single model. Our model effectively learns video representation from motion information on unlabeled videos. Our model significantly improves action recognition accuracy by a large margin (23.6%) compared to state-of-the-art CNN-based unsupervised representation learning methods trained without external large scale data and additional optical flow input. Without pretraining on large external labeled datasets, our model, by well exploiting the motion information, achieves competitive recognition accuracy to the models trained with large labeled datasets such as ImageNet and Sport-1M.",
"title": ""
},
{
"docid": "1cc586730cf0c1fd57cf6ff7548abe24",
"text": "Researchers have proposed various methods to extract 3D keypoints from the surface of 3D mesh models over the last decades, but most of them are based on geometric methods, which lack enough flexibility to meet the requirements for various applications. In this paper, we propose a new method on the basis of deep learning by formulating the 3D keypoint detection as a regression problem using deep neural network (DNN) with sparse autoencoder (SAE) as our regression model. Both local information and global information of a 3D mesh model in multi-scale space are fully utilized to detect whether a vertex is a keypoint or not. SAE can effectively extract the internal structure of these two kinds of information and formulate highlevel features for them, which is beneficial to the regression model. Three SAEs are used to formulate the hidden layers of the DNN and then a logistic regression layer is trained to process the high-level features produced in the third SAE. Numerical experiments show that the proposed DNN based 3D keypoint detection algorithm outperforms current five state-of-the-art methods for various 3D mesh models.",
"title": ""
},
{
"docid": "c8be0e643c72c7abea1ad758ac2b49a8",
"text": "Visual attention plays an important role to understand images and demonstrates its effectiveness in generating natural language descriptions of images. On the other hand, recent studies show that language associated with an image can steer visual attention in the scene during our cognitive process. Inspired by this, we introduce a text-guided attention model for image captioning, which learns to drive visual attention using associated captions. For this model, we propose an exemplarbased learning approach that retrieves from training data associated captions with each image, and use them to learn attention on visual features. Our attention model enables to describe a detailed state of scenes by distinguishing small or confusable objects effectively. We validate our model on MSCOCO Captioning benchmark and achieve the state-of-theart performance in standard metrics.",
"title": ""
},
{
"docid": "96bddddd86976f4dff0b984ef062704b",
"text": "How do the structures of the medial temporal lobe contribute to memory? To address this question, we examine the neurophysiological correlates of both recognition and associative memory in the medial temporal lobe of humans, monkeys, and rats. These cross-species comparisons show that the patterns of mnemonic activity observed throughout the medial temporal lobe are largely conserved across species. Moreover, these findings show that neurons in each of the medial temporal lobe areas can perform both similar as well as distinctive mnemonic functions. In some cases, similar patterns of mnemonic activity are observed across all structures of the medial temporal lobe. In the majority of cases, however, the hippocampal formation and surrounding cortex signal mnemonic information in distinct, but complementary ways.",
"title": ""
},
{
"docid": "efd6856e774b258858c43d7746639317",
"text": "In this paper, we propose a vision-based robust vehicle distance estimation algorithm that supports motorists to rapidly perceive relative distance of oncoming and passing vehicles thereby minimizing the risk of hazardous circumstances. And, as it is expected, the silhouettes of background stationary objects may appear in the motion scene, which pop-up due to motion of the camera, which is mounted on dashboard of the host vehicle. To avoid the effect of false positive detection of stationary objects and to determine the ego motion a new Morphological Strip Matching Algorithm and Recursive Stencil Mapping Algorithm(MSM-RSMA)is proposed. A new series of stencils are created where non-stationary objects are taken off after detecting stationary objects by applying a shape matching technique to each image strip pair. Then the vertical shift is estimated recursively with new stencils with identified stationary background objects. Finally, relative comparison of known templates are used to estimate the distance, which is further certified by value obtained for vertical shift. We apply analysis of relative dimensions of bounding box of the detected vehicle with relevant templates to calculate the relative distance. We prove that our method is capable of providing a comparatively fast distance estimation while keeping its robustness in different environments changes.",
"title": ""
},
{
"docid": "01472364545392cad69b9c7e1f65f4bb",
"text": "The designing of power transmission network is a difficult task due to the complexity of power system. Due to complexity in the power system there is always a loss of the stability due to the fault. Whenever a fault is intercepted in system, the whole system goes to severe transients. These transients cause oscillation in phase angle which leads poor power quality. The nature of oscillation is increasing instead being sustained, which leads system failure in form of generator damage. To reduce and eliminate the unstable oscillations one needs to use a stabilizer which can generate a perfect compensatory signal in order to minimize the harmonics generated due to instability. This paper presents a Power System stabilizer to reduce oscillations due to small signal disturbance. Additionally, a hybrid approach is proposed using FOPID stabilizer with the PSS connected SMIB. Genetic algorithm (GA), Particle swarm optimization (PSO) and Grey Wolf Optimization (GWO) are used for the parameter tuning of the stabilizer. Reason behind the use of GA, PSO and GWO instead of conventional methods is that it search the parameter heuristically, which leads better results. The efficiency of proposed approach is observed by rotor angle and power angle deviations in the SMIB system.",
"title": ""
},
{
"docid": "e7519a25915e5bb5359d0365513cad40",
"text": "Statistical and machine learning algorithms are increasingly used to inform decisions that have large impacts on individuals’ lives. Examples include hiring [8], predictive policing [13], pre-trial risk assessment of recidivism[6, 2], and risk of violence while incarcerated [5]. In many of these cases, the outcome variable to which the predictive models are trained is observed with bias with respect to some legally protected classes. For example, police records do not constitute a representative sample of all crimes [12]. In particular, black drug users are arrested at a rate that is several times that of white drug users despite the fact that black and white populations are estimated by public health officials to use drugs at roughly the same rate [11]. Algorithms trained on such data will produce predictions that are biased against groups that are disproportionately represented in the training data. Several approaches have been proposed to correct unfair predictive models. The simplest approach is to exclude the protected variable(s) from the analysis, under the belief that doing so will result in “race-neutral” predictions [14]. Of course, simply excluding a protected variable is insufficient to avoid discriminatory predictions, as any included variables that are correlated with the protected variables still contain information about the protected characteristic. In the case of linear models, this phenomenon is well-known, and is referred to as omitted variable bias [4]. Another approach that has been proposed in the computer science literature is to remove information about the protected variables from the set of covariates to be used in predictive models [7, 3]. A third alternative is to modify the outcome variable. For example, [9] use a naive Bayes classifier to rank each observation and perturb the outcome such that predictions produced by the algorithm are independent of the protected variable. A discussion of several more algorithms for binary protected and outcome variables can be found in [10]. The approach we propose is most similar to [7], though we approach the problem from a statistical modeling perspective. We define a procedure consisting of a chain of conditional models. Within this framework, both protecting and adjusting variables of arbitrary type becomes natural. Whereas previous work has been limited to protecting only binary or categorical variables and adjusting a limited number of covariates, our proposed framework allows for an arbitrary number of variables",
"title": ""
},
{
"docid": "3ca7b7b8e07eb5943d6ce2acf9a6fa82",
"text": "Excessive heat generation and occurrence of partial discharge have been observed in end-turn stress grading (SG) system in form-wound machines under PWM voltage. In this paper, multi-winding stress grading (SG) system is proposed as a method to change resistance of SG per length. Although the maximum field at the edge of stator and CAT are in a trade-off relationship, analytical results suggest that we can suppress field and excessive heat generation at both stator and CAT edges by multi-winding of SG and setting the length of CAT appropriately. This is also experimentally confirmed by measuring potential distribution of model bar-coil and observing partial discharge and temperature rise.",
"title": ""
},
{
"docid": "2a7bd6fbce4fef6e319664090755858d",
"text": "AIM\nThis paper is a report of a study conducted to determine which occupational stressors are present in nurses' working environment; to describe and compare occupational stress between two educational groups of nurses; to estimate which stressors and to what extent predict nurses' work ability; and to determine if educational level predicts nurses' work ability.\n\n\nBACKGROUND\nNurses' occupational stress adversely affects their health and nursing quality. Higher educational level has been shown to have positive effects on the preservation of good work ability.\n\n\nMETHOD\nA cross-sectional study was conducted in 2006-2007. Questionnaires were distributed to a convenience sample of 1392 (59%) nurses employed at four university hospitals in Croatia (n = 2364). The response rate was 78% (n = 1086). Data were collected using the Occupational Stress Assessment Questionnaire and Work Ability Index Questionnaire.\n\n\nFINDINGS\nWe identified six major groups of occupational stressors: 'Organization of work and financial issues', 'public criticism', 'hazards at workplace', 'interpersonal conflicts at workplace', 'shift work' and 'professional and intellectual demands'. Nurses with secondary school qualifications perceived Hazards at workplace and Shift work as statistically significantly more stressful than nurses a with college degree. Predictors statistically significantly related with low work ability were: Organization of work and financial issues (odds ratio = 1.69, 95% confidence interval 122-236), lower educational level (odds ratio = 1.69, 95% confidence interval 122-236) and older age (odds ratio = 1.07, 95% confidence interval 1.05-1.09).\n\n\nCONCLUSION\nHospital managers should develop strategies to address and improve the quality of working conditions for nurses in Croatian hospitals. Providing educational and career prospects can contribute to decreasing nurses' occupational stress levels, thus maintaining their work ability.",
"title": ""
},
{
"docid": "159222cde67c2d08e0bde7996b422cd6",
"text": "Superficial thrombophlebitis of the dorsal vein of the penis, known as penile Mondor’s disease, is an uncommon genital disease. We report on a healthy 44-year-old man who presented with painful penile swelling, ecchymosis, and penile deviation after masturbation, which initially imitated a penile fracture. Thrombosis of the superficial dorsal vein of the penis without rupture of corpus cavernosum was found during surgical exploration. The patient recovered without erectile dysfunction.",
"title": ""
},
{
"docid": "1f05175a0dce51dcd7a1527dce2f1286",
"text": "The rapid growth in the volume of many real-world graphs (e.g., social networks, web graphs, and spatial networks) has led to the development of various vertex-centric distributed graph computing systems in recent years. However, real-world graphs from different domains have very different characteristics, which often create bottlenecks in vertex-centric parallel graph computation. We identify three such important characteristics from a wide spectrum of real-world graphs, namely (1)skewed degree distribution, (2)large diameter, and (3)(relatively) high density. Among them, only (1) has been studied by existing systems, but many real-world powerlaw graphs also exhibit the characteristics of (2) and (3). In this paper, we propose a block-centric framework, called Blogel, which naturally handles all the three adverse graph characteristics. Blogel programmers may think like a block and develop efficient algorithms for various graph problems. We propose parallel algorithms to partition an arbitrary graph into blocks efficiently, and blockcentric programs are then run over these blocks. Our experiments on large real-world graphs verified that Blogel is able to achieve orders of magnitude performance improvements over the state-ofthe-art distributed graph computing systems.",
"title": ""
},
{
"docid": "d761b2718cfcabe37b72768962492844",
"text": "In the most recent years, wireless communication networks have been facing a rapidly increasing demand for mobile traffic along with the evolvement of applications that require data rates of several 10s of Gbit/s. In order to enable the transmission of such high data rates, two approaches are possible in principle. The first one is aiming at systems operating with moderate bandwidths at 60 GHz, for example, where 7 GHz spectrum is dedicated to mobile services worldwide. However, in order to reach the targeted date rates, systems with high spectral efficiencies beyond 10 bit/s/Hz have to be developed, which will be very challenging. A second approach adopts moderate spectral efficiencies and requires ultra high bandwidths beyond 20 GHz. Such an amount of unregulated spectrum can be identified only in the THz frequency range, i.e. beyond 300 GHz. Systems operated at those frequencies are referred to as THz communication systems. The technology enabling small integrated transceivers with highly directive, steerable antennas becomes the key challenges at THz frequencies in face of the very high path losses. This paper gives an overview over THz communications, summarizing current research projects, spectrum regulations and ongoing standardization activities.",
"title": ""
},
{
"docid": "24fab96f67040ed6ac13ab0696b9421c",
"text": "In the past decade, resting-state functional MRI (R-fMRI) measures of brain activity have attracted considerable attention. Based on changes in the blood oxygen level-dependent signal, R-fMRI offers a novel way to assess the brain's spontaneous or intrinsic (i.e., task-free) activity with both high spatial and temporal resolutions. The properties of both the intra- and inter-regional connectivity of resting-state brain activity have been well documented, promoting our understanding of the brain as a complex network. Specifically, the topological organization of brain networks has been recently studied with graph theory. In this review, we will summarize the recent advances in graph-based brain network analyses of R-fMRI signals, both in typical and atypical populations. Application of these approaches to R-fMRI data has demonstrated non-trivial topological properties of functional networks in the human brain. Among these is the knowledge that the brain's intrinsic activity is organized as a small-world, highly efficient network, with significant modularity and highly connected hub regions. These network properties have also been found to change throughout normal development, aging, and in various pathological conditions. The literature reviewed here suggests that graph-based network analyses are capable of uncovering system-level changes associated with different processes in the resting brain, which could provide novel insights into the understanding of the underlying physiological mechanisms of brain function. We also highlight several potential research topics in the future.",
"title": ""
},
{
"docid": "dfb78a96f9af81aa3f4be1a28e4ce0a2",
"text": "This paper presents two ultra-high-speed SerDes dedicated for PAM4 and NRZ data. The PAM4 TX incorporates an output driver with 3-tap FFE and adjustable weighting to deliver clean outputs at 4 levels, and the PAM4 RX employs a purely linear full-rate CDR and CTLE/1-tap DFE combination to recover and demultiplex the data. NRZ TX includes a tree-structure MUX with built-in PLL and phase aligner. NRZ RX adopts linear PD with special vernier technique to handle the 56 Gb/s input data. All chips have been verified in silicon with reasonable performance, providing prospective design examples for next-generation 400 GbE.",
"title": ""
},
{
"docid": "2fa3e2a710cc124da80941545fbdffa4",
"text": "INTRODUCTION\nThe use of computer-generated 3-dimensional (3-D) anatomical models to teach anatomy has proliferated. However, there is little evidence that these models are educationally effective. The purpose of this study was to test the educational effectiveness of a computer-generated 3-D model of the middle and inner ear.\n\n\nMETHODS\nWe reconstructed a fully interactive model of the middle and inner ear from a magnetic resonance imaging scan of a human cadaver ear. To test the model's educational usefulness, we conducted a randomised controlled study in which 28 medical students completed a Web-based tutorial on ear anatomy that included the interactive model, while a control group of 29 students took the tutorial without exposure to the model. At the end of the tutorials, both groups were asked a series of 15 quiz questions to evaluate their knowledge of 3-D relationships within the ear.\n\n\nRESULTS\nThe intervention group's mean score on the quiz was 83%, while that of the control group was 65%. This difference in means was highly significant (P < 0.001).\n\n\nDISCUSSION\nOur findings stand in contrast to the handful of previous randomised controlled trials that evaluated the effects of computer-generated 3-D anatomical models on learning. The equivocal and negative results of these previous studies may be due to the limitations of these studies (such as small sample size) as well as the limitations of the models that were studied (such as a lack of full interactivity). Given our positive results, we believe that further research is warranted concerning the educational effectiveness of computer-generated anatomical models.",
"title": ""
},
{
"docid": "6f77e74cd8667b270fae0ccc673b49a5",
"text": "GeneMANIA (http://www.genemania.org) is a flexible, user-friendly web interface for generating hypotheses about gene function, analyzing gene lists and prioritizing genes for functional assays. Given a query list, GeneMANIA extends the list with functionally similar genes that it identifies using available genomics and proteomics data. GeneMANIA also reports weights that indicate the predictive value of each selected data set for the query. Six organisms are currently supported (Arabidopsis thaliana, Caenorhabditis elegans, Drosophila melanogaster, Mus musculus, Homo sapiens and Saccharomyces cerevisiae) and hundreds of data sets have been collected from GEO, BioGRID, Pathway Commons and I2D, as well as organism-specific functional genomics data sets. Users can select arbitrary subsets of the data sets associated with an organism to perform their analyses and can upload their own data sets to analyze. The GeneMANIA algorithm performs as well or better than other gene function prediction methods on yeast and mouse benchmarks. The high accuracy of the GeneMANIA prediction algorithm, an intuitive user interface and large database make GeneMANIA a useful tool for any biologist.",
"title": ""
},
{
"docid": "569f8890a294b69d688977fc235aef17",
"text": "Traditionally, voice communication over the local loop has been provided by wired systems. In particular, twisted pair has been the standard means of connection for homes and offices for several years. However in the recent past there has been an increased interest in the use of radio access technologies in local loops. Such systems which are now popular for their ease and low cost of installation and maintenance are called Wireless in Local Loop (WLL) systems. Subscribers' demands for greater capacity has grown over the years especially with the advent of the Internet. Wired local loops have responded to these increasing demands through the use of digital technologies such as ISDN and xDSL. Demands for enhanced data rates are being faced by WLL system operators too, thus entailing efforts towards more efficient bandwidth use. Multi-hop communication has already been studied extensively in Ad hoc network environments and has begun making forays into cellular systems as well. Multi-hop communication has been proven as one of the best ways to enhance throughput in a wireless network. Through this effort we study the issues involved in multi-hop communication in a wireless local loop system and propose a novel WLL architecture called Throughput enhanced Wireless in Local Loop (TWiLL). Through a realistic simulation model we show the tremendous performance improvement achieved by TWiLL over WLL. Traditional pricing schemes employed in single hop wireless networks cannot be applied in TWiLL -- a multi-hop environment. We also propose three novel cost reimbursement based pricing schemes which could be applied in such a multi-hop environment.",
"title": ""
},
{
"docid": "81f9a52b6834095cd7be70b39af0e7f0",
"text": "In this paper we present BatchDB, an in-memory database engine designed for hybrid OLTP and OLAP workloads. BatchDB achieves good performance, provides a high level of data freshness, and minimizes load interaction between the transactional and analytical engines, thus enabling real time analysis over fresh data under tight SLAs for both OLTP and OLAP workloads.\n BatchDB relies on primary-secondary replication with dedicated replicas, each optimized for a particular workload type (OLTP, OLAP), and a light-weight propagation of transactional updates. The evaluation shows that for standard TPC-C and TPC-H benchmarks, BatchDB can achieve competitive performance to specialized engines for the corresponding transactional and analytical workloads, while providing a level of performance isolation and predictable runtime for hybrid workload mixes (OLTP+OLAP) otherwise unmet by existing solutions.",
"title": ""
},
{
"docid": "1bfab561c8391dad6f0493fa7614feba",
"text": "Submission instructions: You should submit your answers via GradeScope and your code via Snap submission site. Submitting answers: Prepare answers to your homework into a single PDF file and submit it via http://gradescope.com. Make sure that answer to each question is on a separate page. This means you should submit a 14-page PDF (1 page for the cover sheet, 4 pages for the answers to question 1, 3 pages for answers to question 2, and 6 pages for question 3). On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. Put all the code for a single question into a single file and upload it. Questions We strongly encourage you to use Snap.py for Python. However, you can use any other graph analysis tool or package you want (SNAP for C++, NetworkX for Python, JUNG for Java, etc.). A question that occupied sociologists and economists as early as the 1900's is how do innovations (e.g. ideas, products, technologies, behaviors) diffuse (spread) within a society. One of the prominent researchers in the field is Professor Mark Granovetter who among other contributions introduced along with Thomas Schelling threshold models in sociology. In Granovetter's model, there is a population of individuals (mob) and for simplicity two behaviours (riot or not riot). • Threshold model: each individual i has a threshold t i that determines her behavior in the following way. If there are at least t i individuals that are rioting, then she will join the riot, otherwise she stays inactive. Here, it is implicitly assumed that each individual has full knowledge of the behavior of all other individuals in the group. Nodes with small threshold are called innovators (early adopters) and nodes with large threshold are called laggards (late adopters). Granovetter's threshold model has been successful in explain classical empirical adoption curves by relating them to thresholds in",
"title": ""
},
{
"docid": "6fc6167d1ef6b96d239fea03b9653865",
"text": "Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost. In order to reduce this cost, several quantization schemes have gained attention recently with some focusing on weight quantization, and others focusing on quantizing activations. This paper proposes novel techniques that target weight and activation quantizations separately resulting in an overall quantized neural network (QNN). The activation quantization technique, PArameterized Clipping acTivation (PACT), uses an activation clipping parameter α that is optimized during training to find the right quantization scale. The weight quantization scheme, statistics-aware weight binning (SAWB), finds the optimal scaling factor that minimizes the quantization error based on the statistical characteristics of the distribution of weights without the need for an exhaustive search. The combination of PACT and SAWB results in a 2-bit QNN that achieves state-of-the-art classification accuracy (comparable to full precision networks) across a range of popular models and datasets.",
"title": ""
}
] | scidocsrr |
763338ac575cee16828202cf29effc84 | Dominant Color Embedded Markov Chain Model for Object Image Retrieval | [
{
"docid": "0084d9c69d79a971e7139ab9720dd846",
"text": "ÐRetrieving images from large and varied collections using image content as a key is a challenging and important problem. We present a new image representation that provides a transformation from the raw pixel data to a small set of image regions that are coherent in color and texture. This aBlobworldo representation is created by clustering pixels in a joint color-texture-position feature space. The segmentation algorithm is fully automatic and has been run on a collection of 10,000 natural images. We describe a system that uses the Blobworld representation to retrieve images from this collection. An important aspect of the system is that the user is allowed to view the internal representation of the submitted image and the query results. Similar systems do not offer the user this view into the workings of the system; consequently, query results from these systems can be inexplicable, despite the availability of knobs for adjusting the similarity metrics. By finding image regions that roughly correspond to objects, we allow querying at the level of objects rather than global image properties. We present results indicating that querying for images using Blobworld produces higher precision than does querying using color and texture histograms of the entire image in cases where the image contains distinctive objects. Index TermsÐSegmentation and grouping, image retrieval, image querying, clustering, Expectation-Maximization.",
"title": ""
}
] | [
{
"docid": "733b998017da30fe24521158a6aaa749",
"text": "Memristor crossbars were fabricated at 40 nm half-pitch, using nanoimprint lithography on the same substrate with Si metal-oxide-semiconductor field effect transistor (MOS FET) arrays to form fully integrated hybrid memory resistor (memristor)/transistor circuits. The digitally configured memristor crossbars were used to perform logic functions, to serve as a routing fabric for interconnecting the FETs and as the target for storing information. As an illustrative demonstration, the compound Boolean logic operation (A AND B) OR (C AND D) was performed with kilohertz frequency inputs, using resistor-based logic in a memristor crossbar with FET inverter/amplifier outputs. By routing the output signal of a logic operation back onto a target memristor inside the array, the crossbar was conditionally configured by setting the state of a nonvolatile switch. Such conditional programming illuminates the way for a variety of self-programmed logic arrays, and for electronic synaptic computing.",
"title": ""
},
{
"docid": "e51f7fde238b0896df22d196b8c59c1a",
"text": "The aim of color constancy is to remove the effect of the color of the light source. As color constancy is inherently an ill-posed problem, most of the existing color constancy algorithms are based on specific imaging assumptions such as the grey-world and white patch assumptions. In this paper, 3D geometry models are used to determine which color constancy method to use for the different geometrical regions found in images. To this end, images are first classified into stages (rough 3D geometry models). According to the stage models, images are divided into different regions using hard and soft segmentation. After that, the best color constancy algorithm is selected for each geometry segment. As a result, light source estimation is tuned to the global scene geometry. Our algorithm opens the possibility to estimate the remote scene illumination color, by distinguishing nearby light source from distant illuminants. Experiments on large scale image datasets show that the proposed algorithm outperforms state-of-the-art single color constancy algorithms with an improvement of almost 14% of median angular error. When using an ideal classifier (i.e, all of the test images are correctly classified into stages), the performance of the proposed method achieves an improvement of 31% of median angular error compared to the best-performing single color constancy algorithm.",
"title": ""
},
{
"docid": "1cbac59380ee798a621d58a6de35361f",
"text": "With the fast development of modern power semiconductors in the last years, the development of current measurement technologies has to adapt to this evolution. The challenge for the power electronic engineer is to provide a current sensor with a high bandwidth and a high immunity against external interferences. Rogowski current transducers are popular for monitoring transient currents in power electronic applications without interferences caused by external magnetic fields. But the trend of even higher current and voltage gradients generates a dilemma regarding the Rogowski current transducer technology. On the one hand, a high current gradient requires a current sensor with a high bandwidth. On the other hand, high voltage gradients forces to use a shielding around the Rogowski coil in order to protect the measurement signal from a capacitive displacement current caused by an unavoidable capacitive coupling to the setup, which reduces the bandwidth substantially. This paper presents a new Rogowski coil design which allows to measure high current gradients close to high voltage gradients without interferences and without reducing the bandwidth by a shielding. With this new measurement technique, it is possible to solve the mentioned dilemma and to get ready to measure the current of modern power semiconductors such as SiC and GaN with a Rogowski current transducer.",
"title": ""
},
{
"docid": "1d8765a407f2b9f8728982f54ddb6ae1",
"text": "Objective: To transform heterogeneous clinical data from electronic health records into clinically meaningful constructed features using data driven method that rely, in part, on temporal relations among data. Materials and Methods: The clinically meaningful representations of medical concepts and patients are the key for health analytic applications. Most of existing approaches directly construct features mapped to raw data (e.g., ICD or CPT codes), or utilize some ontology mapping such as SNOMED codes. However, none of the existing approaches leverage EHR data directly for learning such concept representation. We propose a new way to represent heterogeneous medical concepts (e.g., diagnoses, medications and procedures) based on co-occurrence patterns in longitudinal electronic health records. The intuition behind the method is to map medical concepts that are co-occuring closely in time to similar concept vectors so that their distance will be small. We also derive a simple method to construct patient vectors from the related medical concept vectors. Results: We evaluate similar medical concepts across diagnosis, medication and procedure. The results show xx% relevancy between similar pairs of medical concepts. Our proposed representation significantly improves the predictive modeling performance for onset of heart failure (HF), where classification methods (e.g. logistic regression, neural network, support vector machine and K-nearest neighbors) achieve up to 23% improvement in area under the ROC curve (AUC) using this proposed representation. Conclusion: We proposed an effective method for patient and medical concept representation learning. The resulting representation can map relevant concepts together and also improves predictive modeling performance.",
"title": ""
},
{
"docid": "d0e8265bf57729b74375c9b476c4b028",
"text": "As experts in the health care of children and adolescents, pediatricians may be called on to advise legislators concerning the potential impact of changes in the legal status of marijuana on adolescents. Parents, too, may look to pediatricians for advice as they consider whether to support state-level initiatives that propose to legalize the use of marijuana for medical purposes or to decriminalize possession of small amounts of marijuana. This policy statement provides the position of the American Academy of Pediatrics on the issue of marijuana legalization, and the accompanying technical report (available online) reviews what is currently known about the relationship between adolescents' use of marijuana and its legal status to better understand how change might influence the degree of marijuana use by adolescents in the future.",
"title": ""
},
{
"docid": "776b1f07dfd93ff78e97a6a90731a15b",
"text": "Precise destination prediction of taxi trajectories can benefit many intelligent location based services such as accurate ad for passengers. Traditional prediction approaches, which treat trajectories as one-dimensional sequences and process them in single scale, fail to capture the diverse two-dimensional patterns of trajectories in different spatial scales. In this paper, we propose T-CONV which models trajectories as two-dimensional images, and adopts multi-layer convolutional neural networks to combine multi-scale trajectory patterns to achieve precise prediction. Furthermore, we conduct gradient analysis to visualize the multi-scale spatial patterns captured by T-CONV and extract the areas with distinct influence on the ultimate prediction. Finally, we integrate multiple local enhancement convolutional fields to explore these important areas deeply for better prediction. Comprehensive experiments based on real trajectory data show that T-CONV can achieve higher accuracy than the state-of-the-art methods.",
"title": ""
},
{
"docid": "1057ed913b857d0b22f5c535f919d035",
"text": "The purpose of this series is to convey the principles governing our aesthetic senses. Usually meaning visual perception, aesthetics is not merely limited to the ocular apparatus. The concept of aesthetics encompasses both the time-arts such as music, theatre, literature and film, as well as space-arts such as paintings, sculpture and architecture.",
"title": ""
},
{
"docid": "c4ad78f8d997fbbca0f376557276218c",
"text": "To coupe with the difficulties in the process of inspection and classification of defects in Printed Circuit Board (PCB), other researchers have proposed many methods. However, few of them published their dataset before, which hindered the introduction and comparison of new methods. In this paper, we published a synthesized PCB dataset containing 1386 images with 6 kinds of defects for the use of detection, classification and registration tasks. Besides, we proposed a reference based method to inspect and trained an end-to-end convolutional neural network to classify the defects. Unlike conventional approaches that require pixel-by-pixel processing, our method firstly locate the defects and then classify them by neural networks, which shows superior performance on our dataset.",
"title": ""
},
{
"docid": "e9d42505aebdcd2307852cf13957d407",
"text": "We report a broadband polarization-independent perfect absorber with wide-angle near unity absorbance in the visible regime. Our structure is composed of an array of thin Au squares separated from a continuous Au film by a phase change material (Ge2Sb2Te5) layer. It shows that the near perfect absorbance is flat and broad over a wide-angle incidence up to 80° for either transverse electric or magnetic polarization due to a high imaginary part of the dielectric permittivity of Ge2Sb2Te5. The electric field, magnetic field and current distributions in the absorber are investigated to explain the physical origin of the absorbance. Moreover, we carried out numerical simulations to investigate the temporal variation of temperature in the Ge2Sb2Te5 layer and to show that the temperature of amorphous Ge2Sb2Te5 can be raised from room temperature to > 433 K (amorphous-to-crystalline phase transition temperature) in just 0.37 ns with a low light intensity of 95 nW/μm(2), owing to the enhanced broadband light absorbance through strong plasmonic resonances in the absorber. The proposed phase-change metamaterial provides a simple way to realize a broadband perfect absorber in the visible and near-infrared (NIR) regions and is important for a number of applications including thermally controlled photonic devices, solar energy conversion and optical data storage.",
"title": ""
},
{
"docid": "772b3f74b6eecf82099b2e5b3709e507",
"text": "A common prerequisite for many vision-based driver assistance systems is the knowledge of the vehicle's own movement. In this paper we propose a novel approach for estimating the egomotion of the vehicle from a sequence of stereo images. Our method is directly based on the trifocal geometry between image triples, thus no time expensive recovery of the 3-dimensional scene structure is needed. The only assumption we make is a known camera geometry, where the calibration may also vary over time. We employ an Iterated Sigma Point Kalman Filter in combination with a RANSAC-based outlier rejection scheme which yields robust frame-to-frame motion estimation even in dynamic environments. A high-accuracy inertial navigation system is used to evaluate our results on challenging real-world video sequences. Experiments show that our approach is clearly superior compared to other filtering techniques in terms of both, accuracy and run-time.",
"title": ""
},
{
"docid": "dc91774abd58e19066a110bbff9fa306",
"text": "Autonomous Vehicle (AV) or self-driving vehicle technology promises to provide many economical and societal benefits and impacts. Safety is on the top of these benefits. Trajectory or path planning is one of the essential and critical tasks in operating the autonomous vehicle. In this paper we are tackling the problem of trajectory planning for fully-autonomous vehicles. Our use cases are designed for autonomous vehicles in a cloud based connected vehicle environment. This paper presents a method for selecting safe-optimal trajectory in autonomous vehicles. Selecting the safe trajectory in our work mainly based on using Big Data mining and analysis of real-life accidents data and real-time connected vehicles' data. The decision of selecting this trajectory is done automatically without any human intervention. The human touches in this scenario could be only at defining and prioritizing the driving preferences and concerns at the beginning of the planned trip. Safety always overrides the ranked user preferences listed in this work. The output of this work is a safe trajectory that represented by the position, ETA, distance, and the estimated fuel consumption for the entire trip.",
"title": ""
},
{
"docid": "f0f7bd0223d69184f3391aaf790a984d",
"text": "Smart buildings equipped with state-of-the-art sensors and meters are becoming more common. Large quantities of data are being collected by these devices. For a single building to benefit from its own collected data, it will need to wait for a long time to collect sufficient data to build accurate models to help improve the smart buildings systems. Therefore, multiple buildings need to cooperate to amplify the benefits from the collected data and speed up the model building processes. Apparently, this is not so trivial and there are associated challenges. In this paper, we study the importance of collaborative data analytics for smart buildings, its benefits, as well as presently possible models of carrying it out. Furthermore, we present a framework for collaborative fault detection and diagnosis as a case of collaborative data analytics for smart buildings. We also provide a preliminary analysis of the energy efficiency benefit of such collaborative framework for smart buildings. The result shows that significant energy savings can be achieved for smart buildings using collaborative data analytics.",
"title": ""
},
{
"docid": "e462c0cfc1af657cb012850de1b7b717",
"text": "ASSOCIATIONS BETWEEN PHYSICAL ACTIVITY, PHYSICAL FITNESS, AND FALLS RISK IN HEALTHY OLDER INDIVIDUALS Christopher Deane Vaughan Old Dominion University, 2016 Chair: Dr. John David Branch Objective: The purpose of this study was to assess relationships between objectively measured physical activity, physical fitness, and the risk of falling. Methods: A total of n=29 subjects completed the study, n=15 male and n=14 female age (mean±SD)= 70± 4 and 71±3 years, respectively. In a single testing session, subjects performed pre-post evaluations of falls risk (Short-from PPA) with a 6-minute walking intervention between the assessments. The falls risk assessment included tests of balance, knee extensor strength, proprioception, reaction time, and visual contrast. The sub-maximal effort 6-minute walking task served as an indirect assessment of cardiorespiratory fitness. Subjects traversed a walking mat to assess for variation in gait parameters during the walking task. Additional center of pressure (COP) balance measures were collected via forceplate during the falls risk assessments. Subjects completed a Modified Falls Efficacy Scale (MFES) falls confidence survey. Subjects’ falls histories were also collected. Subjects wore hip mounted accelerometers for a 7-day period to assess time spent in moderate to vigorous physical activity (MVPA). Results: Males had greater body mass and height than females (p=0.001, p=0.001). Males had a lower falls risk than females at baseline (p=0.043) and post-walk (p=0.031). MFES scores were similar among all subjects (Median = 10). Falls history reporting revealed; fallers (n=8) and non-fallers (n=21). No significant relationships were found between main outcome measures of MVPA, cardiorespiratory fitness, or falls risk. Fallers had higher knee extensor strength than non-fallers at baseline (p=0.028) and post-walk (p=0.011). Though not significant (p=0.306), fallers spent 90 minutes more time in MVPA than non-fallers (427.8±244.6 min versus 335.7±199.5). Variations in gait and COP variables were not significant. Conclusions: This study found no apparent relationship between objectively measured physical activity, indirectly measured cardiorespiratory fitness, and falls risk.",
"title": ""
},
{
"docid": "b0989fb1775c486317b5128bc1c31c76",
"text": "Corporates are entering the brave new world of the internet and digitization without much regard for the fine print of a growing regulation regime. More traditional outsourcing arrangements are already falling foul of the regulators as rules and supervision intensifies. Furthermore, ‘shadow IT’ is proliferating as the attractions of SaaS, mobile, cloud services, social media, and endless new ‘apps’ drive usage outside corporate IT. Initial cost-benefit analyses of the Cloud make such arrangements look immediately attractive but losing control of architecture, security, applications and deployment can have far reaching and damaging regulatory consequences. From research in financial services, this paper details the increasing body of regulations, their inherent risks for businesses and how the dangers can be pre-empted and managed. We then delineate a model for managing these risks specifically focused on investigating, strategizing and governing outsourcing arrangements and related regulatory obligations.",
"title": ""
},
{
"docid": "ade3f3c778cf29e7c03bf96196916d6d",
"text": "Selection and use of pattern recognition algorithms is application dependent. In this work, we explored the use of several ensembles of weak classifiers to classify signals captured from a wearable sensor system to detect food intake based on chewing. Three sensor signals (Piezoelectric sensor, accelerometer, and hand to mouth gesture) were collected from 12 subjects in free-living conditions for 24 hrs. Sensor signals were divided into 10 seconds epochs and for each epoch combination of time and frequency domain features were computed. In this work, we present a comparison of three different ensemble techniques: boosting (AdaBoost), bootstrap aggregation (bagging) and stacking, each trained with 3 different weak classifiers (Decision Trees, Linear Discriminant Analysis (LDA) and Logistic Regression). Type of feature normalization used can also impact the classification results. For each ensemble method, three feature normalization techniques: (no-normalization, z-score normalization, and minmax normalization) were tested. A 12 fold cross-validation scheme was used to evaluate the performance of each model where the performance was evaluated in terms of precision, recall, and accuracy. Best results achieved here show an improvement of about 4% over our previous algorithms.",
"title": ""
},
{
"docid": "86bbaffa7e9a58c06d695443224cbf01",
"text": "Movie studios often have to choose among thousands of scripts to decide which ones to turn into movies. Despite the huge amount of money at stake, this process, known as “green-lighting” in the movie industry, is largely a guesswork based on experts’ experience and intuitions. In this paper, we propose a new approach to help studios evaluate scripts which will then lead to more profitable green-lighting decisions. Our approach combines screenwriting domain knowledge, natural language processing techniques, and statistical learning methods to forecast a movie’s return-on-investment based only on textual information available in movie scripts. We test our model in a holdout decision task to show that our model is able to improve a studio’s gross return-on-investment significantly.",
"title": ""
},
{
"docid": "d5bc3147e23f95a070bce0f37a96c2a8",
"text": "This paper presents a fully integrated wideband current-mode digital polar power amplifier (DPA) in CMOS with built-in AM–PM distortion self-compensation. Feedforward capacitors are implemented in each differential cascode digital power cell. These feedforward capacitors operate together with a proposed DPA biasing scheme to minimize the DPA output device capacitance <inline-formula> <tex-math notation=\"LaTeX\">$C_{d}$ </tex-math></inline-formula> variations over a wide output power range and a wide carrier frequency bandwidth, resulting in DPA AM–PM distortion reduction. A three-coil transformer-based DPA output passive network is implemented within a single transformer footprint (330 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m} \\,\\, \\times $ </tex-math></inline-formula> 330 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula>) and provides parallel power combining and load impedance transformation with a low loss, an octave bandwidth, and a large impedance transformation ratio. Moreover, this proposed power amplifier (PA) output passive network shows a desensitized phase response to <inline-formula> <tex-math notation=\"LaTeX\">$C_{d}$ </tex-math></inline-formula> variations and further suppresses the DPA AM–PM distortion. Both proposed AM–PM distortion self-compensation techniques are effective for a large carrier frequency range and a wide modulation bandwidth, and are independent of the DPA AM control codes. This results in a superior inherent DPA phase linearity and reduces or even eliminates the need for phase pre-distortion, which dramatically simplifies the DPA pre-distortion computations. As a proof-of-concept, a 2–4.3 GHz wideband DPA is implemented in a standard 28-nm bulk CMOS process. Operating with a low supply voltage of 1.4 V for enhanced reliability, the DPA demonstrates ±0.5 dB PA output power bandwidth from 2 to 4.3 GHz with +24.9 dBm peak output power at 3.1 GHz. The measured peak PA drain efficiency is 42.7% at 2.5 GHz and is more than 27% from 2 to 4.3 GHz. The measured PA AM–PM distortion is within 6.8° at 2.8 GHz over the PA output power dynamic range of 25 dB, achieving the lowest AM–PM distortion among recently reported current-mode DPAs in the same frequency range. Without any phase pre-distortion, modulation measurements with a 20-MHz 802.11n standard compliant signal demonstrate 2.95% rms error vector magnitude, −33.5 dBc adjacent channel leakage ratio, 15.6% PA drain efficiency, and +14.6 dBm PA average output power at 2.8 GHz.",
"title": ""
},
{
"docid": "e36e318dd134fd5840d5a5340eb6e265",
"text": "Business Intelligence (BI) promises a range of technologies for using information to ensure compliance to strategic and tactical objectives, as well as government laws and regulations. These technologies can be used in conjunction with conceptual models of business objectives, processes and situations (aka business schemas) to drive strategic decision-making about opportunities and threats etc. This paper focuses on three key concepts for strategic business models -situation, influence and indicator -and how they are used for strategic analysis. The semantics of these concepts are defined using a state-ofthe-art upper ontology (DOLCE+). We also propose a method for building a business schema, and demonstrate alternative ways of formal analysis of the schema based on existing tools for goal and probabilistic reasoning.",
"title": ""
},
{
"docid": "8d99f6fd95fb329e16294b7884090029",
"text": "The accurate diagnosis of Alzheimer's disease (AD) and its early stage, i.e., mild cognitive impairment, is essential for timely treatment and possible delay of AD. Fusion of multimodal neuroimaging data, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), has shown its effectiveness for AD diagnosis. The deep polynomial networks (DPN) is a recently proposed deep learning algorithm, which performs well on both large-scale and small-size datasets. In this study, a multimodal stacked DPN (MM-SDPN) algorithm, which MM-SDPN consists of two-stage SDPNs, is proposed to fuse and learn feature representation from multimodal neuroimaging data for AD diagnosis. Specifically speaking, two SDPNs are first used to learn high-level features of MRI and PET, respectively, which are then fed to another SDPN to fuse multimodal neuroimaging information. The proposed MM-SDPN algorithm is applied to the ADNI dataset to conduct both binary classification and multiclass classification tasks. Experimental results indicate that MM-SDPN is superior over the state-of-the-art multimodal feature-learning-based algorithms for AD diagnosis.",
"title": ""
}
] | scidocsrr |
a7ce59adc981813107323821e694c2f8 | A Bistatic SAR Raw Data Simulator Based on Inverse $ \omega{-}k$ Algorithm | [
{
"docid": "b3e1bdd7cfca17782bde698297e191ab",
"text": "Synthetic aperture radar (SAR) raw signal simulation is a powerful tool for designing new sensors, testing processing algorithms, planning missions, and devising inversion algorithms. In this paper, a spotlight SAR raw signal simulator for distributed targets is presented. The proposed procedure is based on a Fourier domain analysis: a proper analytical reformulation of the spotlight SAR raw signal expression is presented. It is shown that this reformulation allows us to design a very efficient simulation scheme that employs fast Fourier transform codes. Accordingly, the computational load is dramatically reduced with respect to a time-domain simulation and this, for the first time, makes spotlight simulation of extended scenes feasible.",
"title": ""
}
] | [
{
"docid": "8bc095fca33d850db89ffd15a84335dc",
"text": "There is, at present, considerable interest in the storage and dispatchability of photovoltaic (PV) energy, together with the need to manage power flows in real-time. This paper presents a new system, PV-on time, which has been developed to supervise the operating mode of a Grid-Connected Utility-Scale PV Power Plant in order to ensure the reliability and continuity of its supply. This system presents an architecture of acquisition devices, including wireless sensors distributed around the plant, which measure the required information. It is also equipped with a high-precision protocol for synchronizing all data acquisition equipment, something that is necessary for correctly establishing relationships among events in the plant. Moreover, a system for monitoring and supervising all of the distributed devices, as well as for the real-time treatment of all the registered information, is presented. Performances were analyzed in a 400 kW transformation center belonging to a 6.1 MW Utility-Scale PV Power Plant. In addition to monitoring the performance of all of the PV plant's components and detecting any failures or deviations in production, this system enables users to control the power quality of the signal injected and the influence of the installation on the distribution grid.",
"title": ""
},
{
"docid": "b77d297feeff92a2e7b03bf89b5f20db",
"text": "Dependability evaluation main objective is to assess the ability of a system to correctly function over time. There are many possible approaches to the evaluation of dependability: in these notes we are mainly concerned with dependability evaluation based on probabilistic models. Starting from simple probabilistic models with very efficient solution methods we shall then come to the main topic of the paper: how Petri nets can be used to evaluate the dependability of complex systems.",
"title": ""
},
{
"docid": "3182542aa5b500780bb8847178b8ec8d",
"text": "The United States is a diverse country with constantly changing demographics. The noticeable shift in demographics is even more phenomenal among the school-aged population. The increase of ethnic-minority student presence is largely credited to the national growth of the Hispanic population, which exceeded the growth of all other ethnic minority group students in public schools. Scholars have pondered over strategies to assist teachers in teaching about diversity (multiculturalism, racism, etc.) as well as interacting with the diversity found within their classrooms in order to ameliorate the effects of cultural discontinuity. One area that has developed in multicultural education literature is culturally relevant pedagogy (CRP). CRP maintains that teachers need to be non-judgmental and inclusive of the cultural backgrounds of their students in order to be effective facilitators of learning in the classroom. The plethora of literature on CRP, however, has not been presented as a testable theoretical model nor has it been systematically viewed through the lens of critical race theory (CRT). By examining the evolution of CRP among some of the leading scholars, the authors broaden this work through a CRT infusion which includes race and indeed racism as normal parts of American society that have been integrated into the educational system and the systematic aspects of school relationships. Their purpose is to infuse the tenets of CRT into an overview of the literature that supports a conceptual framework for understanding and studying culturally relevant pedagogy. They present a conceptual framework of culturally relevant pedagogy that is grounded in over a quarter of a century of research scholarship. By synthesizing the literature into the five areas and infusing it with the tenets of CRT, the authors have developed a collection of principles that represents culturally relevant pedagogy. (Contains 1 figure and 1 note.) culturally relevant pedagogy | teacher education | student-teacher relationships |",
"title": ""
},
{
"docid": "a0306096725c0d4b6bdd648bfa396f13",
"text": "Graph coloring—also known as vertex coloring—considers the problem of assigning colors to the nodes of a graph such that adjacent nodes do not share the same color. The optimization version of the problem concerns the minimization of the number of colors used. In this paper we deal with the problem of finding valid graphs colorings in a distributed way, that is, by means of an algorithm that only uses local information for deciding the color of the nodes. The algorithm proposed in this paper is inspired by the calling behavior of Japanese tree frogs. Male frogs use their calls to attract females. Interestingly, groups of males that are located near each other desynchronize their calls. This is because female frogs are only able to correctly localize male frogs when their calls are not too close in time. The proposed algorithm makes use of this desynchronization behavior for the assignment of different colors to neighboring nodes. We experimentally show that our algorithm is very competitive with the current state of the art, using different sets of problem instances and comparing to one of the most competitive algorithms from the literature.",
"title": ""
},
{
"docid": "164fd7be21190314a27bacb4dec522c5",
"text": "The relative ineffectiveness of information retrieval systems is largely caused by the inaccuracy with which a query formed by a few keywords models the actual user information need. One well known method to overcome this limitation is automatic query expansion (AQE), whereby the user’s original query is augmented by new features with a similar meaning. AQE has a long history in the information retrieval community but it is only in the last years that it has reached a level of scientific and experimental maturity, especially in laboratory settings such as TREC. This survey presents a unified view of a large number of recent approaches to AQE that leverage various data sources and employ very different principles and techniques. The following questions are addressed. Why is query expansion so important to improve search effectiveness? What are the main steps involved in the design and implementation of an AQE component? What approaches to AQE are available and how do they compare? Which issues must still be resolved before AQE becomes a standard component of large operational information retrieval systems (e.g., search engines)?",
"title": ""
},
{
"docid": "28439c317c1b7f94527db6c2e0edcbd0",
"text": "AnswerBus1 is an open-domain question answering system based on sentence level Web information retrieval. It accepts users’ natural-language questions in English, German, French, Spanish, Italian and Portuguese and provides answers in English. Five search engines and directories are used to retrieve Web pages that are relevant to user questions. From the Web pages, AnswerBus extracts sentences that are determined to contain answers. Its current rate of correct answers to TREC-8’s 200 questions is 70.5% with the average response time to the questions being seven seconds. The performance of AnswerBus in terms of accuracy and response time is better than other similar systems.",
"title": ""
},
{
"docid": "933073c108baa0229c8bcd423ceade47",
"text": "Federated Learning is a distributed machine learning approach which enables model training on a large corpus of decentralized data. We have built a scalable production system for Federated Learning in the domain of mobile devices, based on TensorFlow. In this paper, we describe the resulting high-level design, sketch some of the challenges and their solutions, and touch upon the open problems and future directions.",
"title": ""
},
{
"docid": "b7673dbe46a1118511d811241940e328",
"text": "A 100-MHz–2-GHz closed-loop analog in-phase/ quadrature correction circuit for digital clocks is presented. The proposed circuit consists of a phase-locked loop- type architecture for quadrature error correction. The circuit corrects the phase error to within a 1.5° up to 1 GHz and to within 3° at 2 GHz. It consumes 5.4 mA from a 1.2 V supply at 2 GHz. The circuit was designed in UMC 0.13-<inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula> mixed-mode CMOS with an active area of <inline-formula> <tex-math notation=\"LaTeX\">$102\\,\\,\\mu {\\mathrm{ m}} \\times 95\\,\\,\\mu {\\mathrm{ m}}$ </tex-math></inline-formula>. The impact of duty cycle distortion has been analyzed. High-frequency quadrature measurement related issues have been discussed. The proposed circuit was used in two different applications for which the functionality has been verified.",
"title": ""
},
{
"docid": "216c1f8d96e8392fe05e51f556caf2ef",
"text": "The Hypogonadism in Males study estimated the prevalence of hypogonadism [total testosterone (TT) < 300 ng/dl] in men aged > or = 45 years visiting primary care practices in the United States. A blood sample was obtained between 8 am and noon and assayed for TT, free testosterone (FT) and bioavailable testosterone (BAT). Common symptoms of hypogonadism, comorbid conditions, demographics and reason for visit were recorded. Of 2162 patients, 836 were hypogonadal, with 80 receiving testosterone. Crude prevalence rate of hypogonadism was 38.7%. Similar trends were observed for FT and BAT. Among men not receiving testosterone, 756 (36.3%) were hypogonadal; odds ratios for having hypogonadism were significantly higher in men with hypertension (1.84), hyperlipidaemia (1.47), diabetes (2.09), obesity (2.38), prostate disease (1.29) and asthma or chronic obstructive pulmonary disease (1.40) than in men without these conditions. The prevalence of hypogonadism was 38.7% in men aged > or = 45 years presenting to primary care offices.",
"title": ""
},
{
"docid": "ac76a4fe36e95d87f844c6876735b82f",
"text": "Theoretical estimates indicate that graphene thin films can be used as transparent electrodes for thin-film devices such as solar cells and organic light-emitting diodes, with an unmatched combination of sheet resistance and transparency. We demonstrate organic light-emitting diodes with solution-processed graphene thin film transparent conductive anodes. The graphene electrodes were deposited on quartz substrates by spin-coating of an aqueous dispersion of functionalized graphene, followed by a vacuum anneal step to reduce the sheet resistance. Small molecular weight organic materials and a metal cathode were directly deposited on the graphene anodes, resulting in devices with a performance comparable to control devices on indium-tin-oxide transparent anodes. The outcoupling efficiency of devices on graphene and indium-tin-oxide is nearly identical, in agreement with model predictions.",
"title": ""
},
{
"docid": "1ccc1b904fa58b1e31f4f3f4e2d76707",
"text": "When children and adolescents are the target population in dietary surveys many different respondent and observer considerations surface. The cognitive abilities required to self-report food intake include an adequately developed concept of time, a good memory and attention span, and a knowledge of the names of foods. From the age of 8 years there is a rapid increase in the ability of children to self-report food intake. However, while cognitive abilities should be fully developed by adolescence, issues of motivation and body image may hinder willingness to report. Ten validation studies of energy intake data have demonstrated that mis-reporting, usually in the direction of under-reporting, is likely. Patterns of under-reporting vary with age, and are influenced by weight status and the dietary survey method used. Furthermore, evidence for the existence of subject-specific responding in dietary assessment challenges the assumption that repeated measurements of dietary intake will eventually obtain valid data. Unfortunately, the ability to detect mis-reporters, by comparison with presumed energy requirements, is limited unless detailed activity information is available to allow the energy intake of each subject to be evaluated individually. In addition, high variability in nutrient intakes implies that, if intakes are valid, prolonged dietary recording will be required to rank children correctly for distribution analysis. Future research should focus on refining dietary survey methods to make them more sensitive to different ages and cognitive abilities. The development of improved techniques for identification of mis-reporters and investigation of the issue of differential reporting of foods should also be given priority.",
"title": ""
},
{
"docid": "14aefcc95313cecbce5f575fd78a9ae5",
"text": "The Penn Treebank does not annotate within base noun phrases (NPs), committing only to flat structures that ignore the complexity of English NPs. This means that tools trained on Treebank data cannot learn the correct internal structure of NPs. This paper details the process of adding gold-standard bracketing within each noun phrase in the Penn Treebank. We then examine the consistency and reliability of our annotations. Finally, we use this resource to determine NP structure using several statistical approaches, thus demonstrating the utility of the corpus. This adds detail to the Penn Treebank that is necessary for many NLP applications.",
"title": ""
},
{
"docid": "2c63b16ba725f8941f2f9880530911ef",
"text": "To facilitate wireless transmission of multimedia content to mobile users, we propose a content caching and distribution framework for smart grid enabled OFDM networks, where each popular multimedia file is coded and distributively stored in multiple energy harvesting enabled serving nodes (SNs), and the green energy distributively harvested by SNs can be shared with each other through the smart grid. The distributive caching, green energy sharing, and the on-grid energy backup have improved the reliability and performance of the wireless multimedia downloading process. To minimize the total on-grid power consumption of the whole network, while guaranteeing that each user can retrieve the whole content, the user association scheme is jointly designed with consideration of resource allocation, including subchannel assignment, power allocation, and power flow among nodes. Simulation results demonstrate that bringing content, green energy, and SN closer to the end user can notably reduce the on-grid energy consumption.",
"title": ""
},
{
"docid": "f4c1a8b19248e0cb8e2791210715e7b7",
"text": "The translation of proper names is one of the most challenging activities every translator faces. While working on children’s literature, the translation is especially complicated since proper names usually have various allusions indicating sex, age, geographical belonging, history, specific meaning, playfulness of language and cultural connotations. The goal of this article is to draw attention to strategic choices for the translation of proper names in children’s literature. First, the article presents the theoretical considerations that deal with different aspects of proper names in literary works and the issue of their translation. Second, the translation strategies provided by the translation theorist Eirlys E. Davies used for this research are explained. In addition, the principles of adaptation of proper names provided the State Commission of the Lithuanian Language are presented. Then, the discussion proceeds to the quantitative analysis of the translated proper names with an emphasis on providing and explaining numerous examples. The research has been carried out on four popular fantasy books translated from English and German by three Lithuanian translators. After analyzing the strategies of preservation, localization, transformation and creation, the strategy of localization has proved to be the most frequent one in all translations.",
"title": ""
},
{
"docid": "0170bcdc662628fb46142e62bc8e011d",
"text": "Agriculture is the sole provider of human food. Most farm machines are driven by fossil fuels, which contribute to greenhouse gas emissions and, in turn, accelerate climate change. Such environmental damage can be mitigated by the promotion of renewable resources such as solar, wind, biomass, tidal, geo-thermal, small-scale hydro, biofuels and wave-generated power. These renewable resources have a huge potential for the agriculture industry. The farmers should be encouraged by subsidies to use renewable energy technology. The concept of sustainable agriculture lies on a delicate balance of maximizing crop productivity and maintaining economic stability, while minimizing the utilization of finite natural resources and detrimental environmental impacts. Sustainable agriculture also depends on replenishing the soil while minimizing the use of non-renewable resources, such as natural gas, which is used in converting atmospheric nitrogen into synthetic fertilizer, and mineral ores, e.g. phosphate or fossil fuel used in diesel generators for water pumping for irrigation. Hence, there is a need for promoting use of renewable energy systems for sustainable agriculture, e.g. solar photovoltaic water pumps and electricity, greenhouse technologies, solar dryers for post-harvest processing, and solar hot water heaters. In remote agricultural lands, the underground submersible solar photovoltaic water pump is economically viable and also an environmentally-friendly option as compared with a diesel generator set. If there are adverse climatic conditions for the growth of particular plants in cold climatic zones then there is need for renewable energy technology such as greenhouses for maintaining the optimum plant ambient temperature conditions for the growth of plants and vegetables. The economics of using greenhouses for plants and vegetables, and solar photovoltaic water pumps for sustainable agriculture and the environment are presented in this article. Clean development provides industrialized countries with an incentive to invest in emission reduction projects in developing countries to achieve a reduction in CO2 emissions at the lowest cost. The mechanism of clean development is discussed in brief for the use of renewable systems for sustainable agricultural development specific to solar photovoltaic water pumps in India and the world. This article explains in detail the role of renewable energy in farming by connecting all aspects of agronomy with ecology, the environment, economics and societal change.",
"title": ""
},
{
"docid": "afcfe379acfd727b6044c70478b3c2a3",
"text": "We present SfSNet, an end-to-end learning framework for producing an accurate decomposition of an unconstrained human face image into shape, reflectance and illuminance. SfSNet is designed to reflect a physical lambertian rendering model. SfSNet learns from a mixture of labeled synthetic and unlabeled real world images. This allows the network to capture low frequency variations from synthetic and high frequency details from real images through the photometric reconstruction loss. SfSNet consists of a new decomposition architecture with residual blocks that learns a complete separation of albedo and normal. This is used along with the original image to predict lighting. SfSNet produces significantly better quantitative and qualitative results than state-of-the-art methods for inverse rendering and independent normal and illumination estimation.",
"title": ""
},
{
"docid": "0d1f9b3fa3d03b37438024ba354ca68a",
"text": "Our goal is to learn a semantic parser that maps natural language utterances into executable programs when only indirect supervision is available: examples are labeled with the correct execution result, but not the program itself. Consequently, we must search the space of programs for those that output the correct result, while not being misled by spurious programs: incorrect programs that coincidentally output the correct result. We connect two common learning paradigms, reinforcement learning (RL) and maximum marginal likelihood (MML), and then present a new learning algorithm that combines the strengths of both. The new algorithm guards against spurious programs by combining the systematic search traditionally employed in MML with the randomized exploration of RL, and by updating parameters such that probability is spread more evenly across consistent programs. We apply our learning algorithm to a new neural semantic parser and show significant gains over existing state-of-theart results on a recent context-dependent semantic parsing task.",
"title": ""
},
{
"docid": "c85e5745141e64e224a5c4c61f1b1866",
"text": "Crowd-sourcing has become a popular means of acquiring labeled data for many tasks where humans are more accurate than computers, such as image tagging, entity resolution, or sentiment analysis. However, due to the time and cost of human labor, solutions that solely rely on crowd-sourcing are often limited to small datasets (i.e., a few thousand items). This paper proposes algorithms for integrating machine learning into crowd-sourced databases in order to combine the accuracy of human labeling with the speed and cost-effectiveness of machine learning classifiers. By using active learning as our optimization strategy for labeling tasks in crowdsourced databases, we can minimize the number of questions asked to the crowd, allowing crowd-sourced applications to scale (i.e, label much larger datasets at lower costs). Designing active learning algorithms for a crowd-sourced database poses many practical challenges: such algorithms need to be generic, scalable, and easy-to-use for a broad range of practitioners, even those who are not machine learning experts. We draw on the theory of nonparametric bootstrap to design, to the best of our knowledge, the first active learning algorithms that meet all these requirements. Our results, on 3 real-world datasets collected with Amazon’s Mechanical Turk, and on 15 UCI datasets, show that our methods on average ask 1–2 orders of magnitude fewer questions than the baseline, and 4.5–44× fewer than existing active learning algorithms.",
"title": ""
},
{
"docid": "c4e6176193677f62f6b33dc02580c7f2",
"text": "E-learning has become an essential factor in the modern educational system. In today's diverse student population, E-learning must recognize the differences in student personalities to make the learning process more personalized. The objective of this study is to create a data model to identify both the student personality type and the dominant preference based on the Myers-Briggs Type Indicator (MBTI) theory. The proposed model utilizes data from student engagement with the learning management system (Moodle) and the social network, Facebook. The model helps students become aware of their personality, which in turn makes them more efficient in their study habits. The model also provides vital information for educators, equipping them with a better understanding of each student's personality. With this knowledge, educators will be more capable of matching students with their respective learning styles. The proposed model was applied on a sample data collected from the Business College at the German university in Cairo, Egypt (240 students). The model was tested using 10 data mining classification algorithms which were NaiveBayes, BayesNet, Kstar, Random forest, J48, OneR, JRIP, KNN /IBK, RandomTree, Decision Table. The results showed that OneR had the best accuracy percentage of 97.40%, followed by Random forest 93.23% and J48 92.19%.",
"title": ""
}
] | scidocsrr |
381a180ecd74e87262ec5c5be0ccbe97 | Facial Action Coding System | [
{
"docid": "6b6285cd8512a2376ae331fda3fedf20",
"text": "The Facial Action Coding System (FACS) (Ekman & Friesen, 1978) is a comprehensive and widely used method of objectively describing facial activity. Little is known, however, about inter-observer reliability in coding the occurrence, intensity, and timing of individual FACS action units. The present study evaluated the reliability of these measures. Observational data came from three independent laboratory studies designed to elicit a wide range of spontaneous expressions of emotion. Emotion challenges included olfactory stimulation, social stress, and cues related to nicotine craving. Facial behavior was video-recorded and independently scored by two FACS-certified coders. Overall, we found good to excellent reliability for the occurrence, intensity, and timing of individual action units and for corresponding measures of more global emotion-specified combinations.",
"title": ""
}
] | [
{
"docid": "a65d1881f5869f35844064d38b684ac8",
"text": "Skilled artists, using traditional media or modern computer painting tools, can create a variety of expressive styles that are very appealing in still images, but have been unsuitable for animation. The key difficulty is that existing techniques lack adequate temporal coherence to animate these styles effectively. Here we augment the range of practical animation styles by extending the guided texture synthesis method of Image Analogies [Hertzmann et al. 2001] to create temporally coherent animation sequences. To make the method art directable, we allow artists to paint portions of keyframes that are used as constraints. The in-betweens calculated by our method maintain stylistic continuity and yet change no more than necessary over time.",
"title": ""
},
{
"docid": "8fc758632346ce45e8f984018cde5ece",
"text": "Today Recommendation systems [3] have become indispensible because of the sheer bulk of information made available to a user from web-services(Netflix, IMDB, Amazon and many others) and the need for personalized suggestions. Recommendation systems are a well studied research area. In the following work, we present our study on the Netflix Challenge [1]. The Neflix Challenge can be summarized in the following way: ”Given a movie, predict the rating of a particular user based on the user’s prior ratings”. The performance of all such approaches is measured using the RMSE (root mean-squared error) of the submitted ratings from the actual ratings. Currently, the best system has an RMSE of 0.8616 [2]. We obtained ratings from the following approaches:",
"title": ""
},
{
"docid": "c197198ca45acec2575d5be26fc61f36",
"text": "General systems theory has been proposed as a basis for the unification of science. The open systems model has stimulated many new conceptualizations in organization theory and management practice. However, experience in utilizing these concepts suggests many unresolved dilemmas. Contingency views represent a step toward less abstraction, more explicit patterns of relationships, and more applicable theory. Sophistication will come when we have a more complete understanding of organizations as total systems (configurations of subsystems) so that we can prescribe more appropriate organizational designs and managerial systems. Ultimately, organization theory should serve as the foundation for more effective management practice.",
"title": ""
},
{
"docid": "12eff845ccb6e5cc2b2fbe74935aff46",
"text": "The study of this paper presents a new technique to use automatic number plate detection and recognition. This system plays a significant role throughout this busy world, owing to rise in use of vehicles day-by-day. Some of the applications of this software are automatic toll tax collection, unmanned parking slots, safety, and security. The current scenario happening in India is, people, break the rules of the toll and move away which can cause many serious issues like accidents. This system uses efficient algorithms to detect the vehicle number from real-time images. The system detects the license plate on the vehicle first and then captures the image of it. Vehicle number plate is localized and characters are segmented and further recognized with help of neural network. The system is designed for grayscale images so it detects the number plate regardless of its color. The resulting vehicle number plate is then compared with the available database of all vehicles which have been already registered by the users so as to come up with information about vehicle type and charge accordingly. The vehicle information such as date, toll amount is stored in the database to maintain the record.",
"title": ""
},
{
"docid": "5f20ed750fc260f40d01e8ac5ddb633d",
"text": ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii CHAPTER",
"title": ""
},
{
"docid": "f1cfd3980bb7dc78309074012be3cf03",
"text": "A chatbot is a conversational agent that interacts with users using natural language. Multi chatbots are available to serve in different domains. However, the knowledge base of chatbots is hand coded in its brain. This paper presents an overview of ALICE chatbot, its AIML format, and our experiments to generate different prototypes of ALICE automatically based on a corpus approach. A description of developed software which converts readable text (corpus) into AIML format is presented alongside with describing the different corpora we used. Our trials revealed the possibility of generating useful prototypes without the need for sophisticated natural language processing or complex machine learning techniques. These prototypes were used as tools to practice different languages, to visualize corpus, and to provide answers for questions.",
"title": ""
},
{
"docid": "22ad4568fbf424592c24783fb3037f62",
"text": "We propose an unsupervised learning technique for extracting information about authors and topics from large text collections. We model documents as if they were generated by a two-stage stochastic process. An author is represented by a probability distribution over topics, and each topic is represented as a probability distribution over words. The probability distribution over topics in a multi-author paper is a mixture of the distributions associated with the authors. The topic-word and author-topic distributions are learned from data in an unsupervised manner using a Markov chain Monte Carlo algorithm. We apply the methodology to three large text corpora: 150,000 abstracts from the CiteSeer digital library, 1740 papers from the Neural Information Processing Systems (NIPS) Conferences, and 121,000 emails from the Enron corporation. We discuss in detail the interpretation of the results discovered by the system including specific topic and author models, ranking of authors by topic and topics by author, parsing of abstracts by topics and authors, and detection of unusual papers by specific authors. Experiments based on perplexity scores for test documents and precision-recall for document retrieval are used to illustrate systematic differences between the proposed author-topic model and a number of alternatives. Extensions to the model, allowing for example, generalizations of the notion of an author, are also briefly discussed.",
"title": ""
},
{
"docid": "34bfec0f1f7eb748b3632bbf288be3bd",
"text": "An omnidirectional mobile robot is able, kinematically, to move in any direction regardless of current pose. To date, nearly all designs and analyses of omnidirectional mobile robots have considered the case of motion on flat, smooth terrain. In this paper, an investigation of the design and control of an omnidirectional mobile robot for use in rough terrain is presented. Kinematic and geometric properties of the active split offset caster drive mechanism are investigated along with system and subsystem design guidelines. An optimization method is implemented to explore the design space. The use of this method results in a robot that has higher mobility than a robot designed using engineering judgment. A simple kinematic controller that considers the effects of terrain unevenness via an estimate of the wheel-terrain contact angles is also presented. It is shown in simulation that under the proposed control method, near-omnidirectional tracking performance is possible even in rough, uneven terrain. DOI: 10.1115/1.4000214",
"title": ""
},
{
"docid": "e364db9141c85b1f260eb3a9c1d42c5b",
"text": "Ten US presidential elections ago in Chapel Hill, North Carolina, the agenda of issues that a small group of undecided voters regarded as the most important ones of the day was compared with the news coverage of public issues in the news media these voters used to follow the campaign (McCombs and Shaw, 1972). Since that election, the principal finding in Chapel Hill*/those aspects of public affairs that are prominent in the news become prominent among the public*/has been replicated in hundreds of studies worldwide. These replications include both election and non-election settings for a broad range of public issues and other aspects of political communication and extend beyond the United States to Europe, Asia, Latin America and Australia. Recently, as the news media have expanded to include online newspapers available on the Web, agenda-setting effects have been documented for these new media. All in all, this research has grown far beyond its original domain*/the transfer of salience from the media agenda to the public agenda*/and now encompasses five distinct stages of theoretical attention. Until very recently, the ideas and findings that detail these five stages of agenda-setting theory have been scattered in a wide variety of research journals, book chapters and books published in many different countries. As a result, knowledge of agenda setting has been very unevenly distributed. Scholars designing new studies often had incomplete knowledge of previous research, and graduate students entering the field of mass communication had difficulty learning in detail what we know about the agenda-setting role of the mass media. This situation was my incentive to write Setting the Agenda: the mass media and public opinion, which was published in England in late 2004 and in the United States early in 2005. My primary goal was to gather the principal ideas and empirical findings about agenda setting in one place. John Pavlik has described this integrated presentation as the Gray’s Anatomy of agenda setting (McCombs, 2004, p. xii). Shortly after the US publication of Setting the Agenda , I received an invitation from Journalism Studies to prepare an overview of agenda setting. The timing was wonderfully fortuitous because a book-length presentation of what we have learned in the years since Chapel Hill could be coupled with a detailed discussion in a major journal of current trends and future likely directions in agenda-setting research. Journals are the best venue for advancing the stepby-step accretion of knowledge because they typically reach larger audiences than books, generate more widespread discussion and offer more space for the focused presentation of a particular aspect of a research area. Books can then periodically distill this knowledge. Given the availability of a detailed overview in Setting the Agenda , the presentation here of the five stages of agenda-setting theory emphasizes current and near-future research questions in these areas. Moving beyond these specific Journalism Studies, Volume 6, Number 4, 2005, pp. 543 557",
"title": ""
},
{
"docid": "abdffec5ea2b05b61006cc7b6b295976",
"text": "Making recommendation requires predicting what is of interest to a user at a specific time. Even the same user may have different desires at different times. It is important to extract the aggregate interest of a user from his or her navigational path through the site in a session. This paper concentrates on the discovery and modelling of the user’s aggregate interest in a session. This approach relies on the premise that the visiting time of a page is an indicator of the user’s interest in that page. The proportion of times spent in a set of pages requested by the user within a single session forms the aggregate interest of that user in that session. We first partition user sessions into clusters such that only sessions which represent similar aggregate interest of users are placed in the same cluster. We employ a model-based clustering approach and partition user sessions according to similar amount of time in similar pages. In particular, we cluster sessions by learning a mixture of Poisson models using Expectation Maximization algorithm. The resulting clusters are then used to recommend pages to a user that are most likely contain the information which is of interest to that user at that time. Although the approach does not use the sequential patterns of transactions, experimental evaluation shows that the approach is quite effective in capturing a Web user’s access pattern. The model has an advantage over previous proposals in terms of speed and memory usage.",
"title": ""
},
{
"docid": "53b48550158b06dfbdb8c44a4f7241c6",
"text": "The primary aim of the study was to examine the relationship between media exposure and body image in adolescent girls, with a particular focus on the ‘new’ and as yet unstudied medium of the Internet. A sample of 156 Australian female high school students (mean age= 14.9 years) completed questionnaire measures of media consumption and body image. Internet appearance exposure and magazine reading, but not television exposure, were found to be correlated with greater internalization of thin ideals, appearance comparison, weight dissatisfaction, and drive for thinness. Regression analyses indicated that the effects of magazines and Internet exposure were mediated by internalization and appearance comparison. It was concluded that the Internet represents a powerful sociocultural influence on young women’s lives.",
"title": ""
},
{
"docid": "f3b0bace6028b3d607618e2e53294704",
"text": "State-of-the art spoken language understanding models that automatically capture user intents in human to machine dialogs are trained with manually annotated data, which is cumbersome and time-consuming to prepare. For bootstrapping the learning algorithm that detects relations in natural language queries to a conversational system, one can rely on publicly available knowledge graphs, such as Freebase, and mine corresponding data from the web. In this paper, we present an unsupervised approach to discover new user intents using a novel Bayesian hierarchical graphical model. Our model employs search query click logs to enrich the information extracted from bootstrapped models. We use the clicked URLs as implicit supervision and extend the knowledge graph based on the relational information discovered from this model. The posteriors from the graphical model relate the newly discovered intents with the search queries. These queries are then used as additional training examples to complement the bootstrapped relation detection models. The experimental results demonstrate the effectiveness of this approach, showing extended coverage to new intents without impacting the known intents.",
"title": ""
},
{
"docid": "6efdf43a454ce7da51927c07f1449695",
"text": "We investigate efficient representations of functions that can be written as outputs of so-called sum-product networks, that alternate layers of product and sum operations (see Fig 1 for a simple sum-product network). We find that there exist families of such functions that can be represented much more efficiently by deep sum-product networks (i.e. allowing multiple hidden layers), compared to shallow sum-product networks (constrained to using a single hidden layer). For instance, there is a family of functions fn where n is the number of input variables, such that fn can be computed with a deep sum-product network of log 2 n layers and n−1 units, while a shallow sum-product network (two layers) requires 2 √ n−1 units. These mathematical results are in the same spirit as those by H̊astad and Goldmann (1991) on the limitations of small depth computational circuits. They motivate using deep networks to be able to model complex functions more efficiently than with shallow networks. Exponential gains in terms of the number of parameters are quite significant in the context of statistical machine learning. Indeed, the number of training samples required to optimize a model’s parameters without suffering from overfitting typically increases with the number of parameters. Deep networks thus offer a promising way to learn complex functions from limited data, even though parameter optimization may still be challenging.",
"title": ""
},
{
"docid": "296025d4851569031f0ebe36d792fadc",
"text": "In this paper we present the first, to the best of our knowledge, discourse parser that is able to predict non-tree DAG structures. We use Integer Linear Programming (ILP) to encode both the objective function and the constraints as global decoding over local scores. Our underlying data come from multi-party chat dialogues, which require the prediction of DAGs. We use the dependency parsing paradigm, as has been done in the past (Muller et al., 2012; Li et al., 2014; Afantenos et al., 2015), but we use the underlying formal framework of SDRT and exploit SDRT’s notions of left and right distributive relations. We achieve an Fmeasure of 0.531 for fully labeled structures which beats the previous state of the art.",
"title": ""
},
{
"docid": "496ba5ee48281afe48b5afce02cc4dbf",
"text": "OBJECTIVE\nThis study examined the relationship between reported exposure to child abuse and a history of parental substance abuse (alcohol and drugs) in a community sample in Ontario, Canada.\n\n\nMETHOD\nThe sample consisted of 8472 respondents to the Ontario Mental Health Supplement (OHSUP), a comprehensive population survey of mental health. The association of self-reported retrospective childhood physical and sexual abuse and parental histories of drug or alcohol abuse was examined.\n\n\nRESULTS\nRates of physical and sexual abuse were significantly higher, with a more than twofold increased risk among those reporting parental substance abuse histories. The rates were not significantly different between type or severity of abuse. Successively increasing rates of abuse were found for those respondents who reported that their fathers, mothers or both parents had substance abuse problems; this risk was significantly elevated for both parents compared to father only with substance abuse problem.\n\n\nCONCLUSIONS\nParental substance abuse is associated with a more than twofold increase in the risk of exposure to both childhood physical and sexual abuse. While the mechanism for this association remains unclear, agencies involved in child protection or in treatment of parents with substance abuse problems must be cognizant of this relationship and focus on the development of interventions to serve these families.",
"title": ""
},
{
"docid": "461ec14463eb20962ef168de781ac2a2",
"text": "Local descriptors based on the image noise residual have proven extremely effective for a number of forensic applications, like forgery detection and localization. Nonetheless, motivated by promising results in computer vision, the focus of the research community is now shifting on deep learning. In this paper we show that a class of residual-based descriptors can be actually regarded as a simple constrained convolutional neural network (CNN). Then, by relaxing the constraints, and fine-tuning the net on a relatively small training set, we obtain a significant performance improvement with respect to the conventional detector.",
"title": ""
},
{
"docid": "eae289c213d5b67d91bb0f461edae7af",
"text": "China has made remarkable progress in its war against poverty since the launching of economic reform in the late 1970s. This paper examines some of the major driving forces of poverty reduction in China. Based on time series and cross-sectional provincial data, the determinants of rural poverty incidence are estimated. The results show that economic growth is an essential and necessary condition for nationwide poverty reduction. It is not, however, a sufficient condition. While economic growth played a dominant role in reducing poverty through the mid-1990s, its impacts has diminished since that time. Beyond general economic growth, growth in specific sectors of the economy is also found to reduce poverty. For example, the growth the agricultural sector and other pro-rural (vs urban-biased) development efforts can also have significant impacts on rural poverty. Notwithstanding the record of the past, our paper is consistent with the idea that poverty reduction in the future will need to rely on more than broad-based growth and instead be dependent on pro-poor policy interventions (such as national poverty alleviation programs) that can be targeted at the poor, trying to directly help the poor to increase their human capital and incomes. Determinants of Rural Poverty Reduction and Pro-poor Economic Growth in China",
"title": ""
},
{
"docid": "0562b3b1692f07060cf4eeb500ea6cca",
"text": "As the volume of medicinal information stored electronically increase, so do the need to enhance how it is secured. The inaccessibility to patient record at the ideal time can prompt death toll and also well degrade the level of health care services rendered by the medicinal professionals. Criminal assaults in social insurance have expanded by 125% since 2010 and are now the leading cause of medical data breaches. This study therefore presents the combination of 3DES and LSB to improve security measure applied on medical data. Java programming language was used to develop a simulation program for the experiment. The result shows medical data can be stored, shared, and managed in a reliable and secure manner using the combined model. Keyword: Information Security; Health Care; 3DES; LSB; Cryptography; Steganography 1.0 INTRODUCTION In health industries, storing, sharing and management of patient information have been influenced by the current technology. That is, medical centres employ electronical means to support their mode of service in order to deliver quality health services. The importance of the patient record cannot be over emphasised as it contributes to when, where, how, and how lives can be saved. About 91% of health care organizations have encountered no less than one data breach, costing more than $2 million on average per organization [1-3]. Report also shows that, medical records attract high degree of importance to hoodlums compare to Mastercard information because they infer more cash base on the fact that bank",
"title": ""
},
{
"docid": "fcdde2f5b55b6d8133e6dea63d61b2c8",
"text": "It has been observed by many people that a striking number of quite diverse mathematical problems can be formulated as problems in integer programming, that is, linear programming problems in which some or all of the variables are required to assume integral values. This fact is rendered quite interesting by recent research on such problems, notably by R. E. Gomory [2, 3], which gives promise of yielding efficient computational techniques for their solution. The present paper provides yet another example of the versatility of integer programming as a mathematical modeling device by representing a generalization of the well-known “Travelling Salesman Problem” in integer programming terms. The authors have developed several such models, of which the one presented here is the most efficient in terms of generality, number of variables, and number of constraints. This model is due to the second author [4] and was presented briefly at the Symposium on Combinatorial Problems held at Princeton University, April 1960, sponsored by SIAM and IBM. The problem treated is: (1) A salesman is required to visit each of <italic>n</italic> cities, indexed by 1, ··· , <italic>n</italic>. He leaves from a “base city” indexed by 0, visits each of the <italic>n</italic> other cities exactly once, and returns to city 0. During his travels he must return to 0 exactly <italic>t</italic> times, including his final return (here <italic>t</italic> may be allowed to vary), and he must visit no more than <italic>p</italic> cities in one tour. (By a tour we mean a succession of visits to cities without stopping at city 0.) It is required to find such an itinerary which minimizes the total distance traveled by the salesman.\n Note that if <italic>t</italic> is fixed, then for the problem to have a solution we must have <italic>tp</italic> ≧ <italic>n</italic>. For <italic>t</italic> = 1, <italic>p</italic> ≧ <italic>n</italic>, we have the standard traveling salesman problem.\nLet <italic>d<subscrpt>ij</subscrpt></italic> (<italic>i</italic> ≠ <italic>j</italic> = 0, 1, ··· , <italic>n</italic>) be the distance covered in traveling from city <italic>i</italic> to city <italic>j</italic>. The following integer programming problem will be shown to be equivalent to (1): (2) Minimize the linear form ∑<subscrpt>0≦<italic>i</italic>≠<italic>j</italic>≦<italic>n</italic></subscrpt>∑ <italic>d<subscrpt>ij</subscrpt>x<subscrpt>ij</subscrpt></italic> over the set determined by the relations ∑<supscrpt><italic>n</italic></supscrpt><subscrpt><italic>i</italic>=0<italic>i</italic>≠<italic>j</italic></subscrpt> <italic>x<subscrpt>ij</subscrpt></italic> = 1 (<italic>j</italic> = 1, ··· , <italic>n</italic>) ∑<supscrpt><italic>n</italic></supscrpt><subscrpt><italic>j</italic>=0<italic>j</italic>≠<italic>i</italic></subscrpt> <italic>x<subscrpt>ij</subscrpt></italic> = 1 (<italic>i</italic> = 1, ··· , <italic>n</italic>) <italic>u<subscrpt>i</subscrpt></italic> - <italic>u<subscrpt>j</subscrpt></italic> + <italic>px<subscrpt>ij</subscrpt></italic> ≦ <italic>p</italic> - 1 (1 ≦ <italic>i</italic> ≠ <italic>j</italic> ≦ <italic>n</italic>) where the <italic>x<subscrpt>ij</subscrpt></italic> are non-negative integers and the <italic>u<subscrpt>i</subscrpt></italic> (<italic>i</italic> = 1, …, <italic>n</italic>) are arbitrary real numbers. 
(We shall see that it is permissible to restrict the <italic>u<subscrpt>i</subscrpt></italic> to be non-negative integers as well.)\n If <italic>t</italic> is fixed it is necessary to add the additional relation: ∑<supscrpt><italic>n</italic></supscrpt><subscrpt><italic>u</italic>=1</subscrpt> <italic>x</italic><subscrpt><italic>i</italic>0</subscrpt> = <italic>t</italic> Note that the constraints require that <italic>x<subscrpt>ij</subscrpt></italic> = 0 or 1, so that a natural correspondence between these two problems exists if the <italic>x<subscrpt>ij</subscrpt></italic> are interpreted as follows: The salesman proceeds from city <italic>i</italic> to city <italic>j</italic> if and only if <italic>x<subscrpt>ij</subscrpt></italic> = 1. Under this correspondence the form to be minimized in (2) is the total distance to be traveled by the salesman in (1), so the burden of proof is to show that the two feasible sets correspond; i.e., a feasible solution to (2) has <italic>x<subscrpt>ij</subscrpt></italic> which do define a legitimate itinerary in (1), and, conversely a legitimate itinerary in (1) defines <italic>x<subscrpt>ij</subscrpt></italic>, which, together with appropriate <italic>u<subscrpt>i</subscrpt></italic>, satisfy the constraints of (2).\nConsider a feasible solution to (2).\n The number of returns to city 0 is given by ∑<supscrpt><italic>n</italic></supscrpt><subscrpt><italic>i</italic>=1</subscrpt> <italic>x</italic><subscrpt><italic>i</italic>0</subscrpt>. The constraints of the form ∑ <italic>x<subscrpt>ij</subscrpt></italic> = 1, all <italic>x<subscrpt>ij</subscrpt></italic> non-negative integers, represent the conditions that each city (other than zero) is visited exactly once. The <italic>u<subscrpt>i</subscrpt></italic> play a role similar to node potentials in a network and the inequalities involving them serve to eliminate tours that do not begin and end at city 0 and tours that visit more than <italic>p</italic> cities. Consider any <italic>x</italic><subscrpt><italic>r</italic><subscrpt>0</subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> = 1 (<italic>r</italic><subscrpt>1</subscrpt> ≠ 0). There exists a unique <italic>r</italic><subscrpt>2</subscrpt> such that <italic>x</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt><italic>r</italic><subscrpt>2</subscrpt></subscrpt> = 1. Unless <italic>r</italic><subscrpt>2</subscrpt> = 0, there is a unique <italic>r</italic><subscrpt>3</subscrpt> with <italic>x</italic><subscrpt><italic>r</italic><subscrpt>2</subscrpt><italic>r</italic><subscrpt>3</subscrpt></subscrpt> = 1. We proceed in this fashion until some <italic>r<subscrpt>j</subscrpt></italic> = 0. This must happen since the alternative is that at some point we reach an <italic>r<subscrpt>k</subscrpt></italic> = <italic>r<subscrpt>j</subscrpt></italic>, <italic>j</italic> + 1 < <italic>k</italic>. \n Since none of the <italic>r</italic>'s are zero we have <italic>u<subscrpt>r<subscrpt>i</subscrpt></subscrpt></italic> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>i</italic> + 1</subscrpt></subscrpt> + <italic>px</italic><subscrpt><italic>r<subscrpt>i</subscrpt></italic><italic>r</italic><subscrpt><italic>i</italic> + 1</subscrpt></subscrpt> ≦ <italic>p</italic> - 1 or <italic>u<subscrpt>r<subscrpt>i</subscrpt></subscrpt></italic> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>i</italic> + 1</subscrpt></subscrpt> ≦ - 1. 
Summing from <italic>i</italic> = <italic>j</italic> to <italic>k</italic> - 1, we have <italic>u<subscrpt>r<subscrpt>j</subscrpt></subscrpt></italic> - <italic>u<subscrpt>r<subscrpt>k</subscrpt></subscrpt></italic> = 0 ≦ <italic>j</italic> + 1 - <italic>k</italic>, which is a contradiction. Thus all tours include city 0. It remains to observe that no tours is of length greater than <italic>p</italic>. Suppose such a tour exists, <italic>x</italic><subscrpt>0<italic>r</italic><subscrpt>1</subscrpt></subscrpt> , <italic>x</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt><italic>r</italic><subscrpt>2</subscrpt></subscrpt> , ···· , <italic>x</italic><subscrpt><italic>r<subscrpt>p</subscrpt>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> = 1 with all <italic>r<subscrpt>i</subscrpt></italic> ≠ 0. Then, as before, <italic>u</italic><subscrpt><italic>r</italic>1</subscrpt> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> ≦ - <italic>p</italic> or <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> ≧ <italic>p</italic>.\n But we have <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> + <italic>px</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> ≦ <italic>p</italic> - 1 or <italic>u</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt></subscrpt> - <italic>u</italic><subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt> ≦ <italic>p</italic> (1 - <italic>x</italic><subscrpt><italic>r</italic><subscrpt><italic>p</italic>+1</subscrpt><italic>r</italic><subscrpt>1</subscrpt></subscrpt>) - 1 ≦ <italic>p</italic> - 1, which is a contradiction.\nConversely, if the <italic>x<subscrpt>ij</subscrpt></italic> correspond to a legitimate itinerary, it is clear that the <italic>u<subscrpt>i</subscrpt></italic> can be adjusted so that <italic>u<subscrpt>i</subscrpt></italic> = <italic>j</italic> if city <italic>i</italic> is the <italic>j</italic>th city visited in the tour which includes city <italic>i</italic>, for we then have <italic>u<subscrpt>i</subscrpt></italic> - <italic>u<subscrpt>j</subscrpt></italic> = - 1 if <italic>x<subscrpt>ij</subscrpt></italic> = 1, and always <italic>u<subscrpt>i</subscrpt></italic> - <italic>u<subscrpt>j</subscrpt></italic> ≦ <italic>p</italic> - 1.\n The above integer program involves <italic>n</italic><supscrpt>2</supscrpt> + <italic>n</italic> constraints (if <italic>t</italic> is not fixed) in <italic>n</italic><supscrpt>2</supscrpt> + 2<italic>n</italic> variables. Since the inequality form of constraint is fundamental for integer programming calculations, one may eliminate 2<italic>n</italic> variables, say the <italic>x</italic><subscrpt><italic>i</italic>0</subscrpt> and <italic>x</italic><subscrpt>0<italic>j</italic></subscrpt>, by means of the equation constraints and produce",
"title": ""
},
{
"docid": "05cea038adce7f5ae2a09a7fd5e024a7",
"text": "The paper describes the use TMS320C5402 DSP for single channel active noise cancellation (ANC) in duct system. The canceller uses a feedback control topology and is designed to cancel narrowband periodic tones. The signal is processed with well-known filtered-X least mean square (filtered-X LMS) Algorithm in the digital signal processing. The paper describes the hardware and use chip support libraries for data streaming. The FXLMS algorithm is written in assembly language callable from C main program. The results obtained are compatible to the expected result in the literature available. The paper highlights the features of cancellation and analyzes its performance at different gain and frequency.",
"title": ""
}
] | scidocsrr |
0122b9fb5f10ff47ba9f9a6d8b634b3b | Hierarchical Reinforcement Learning for Adaptive Text Generation | [
{
"docid": "8640cd629e07f8fa6764c387d9fa7c29",
"text": "We describe an evaluation of spoken dialogue strategies designed using hierarchical reinforcement learning agents. The dialogue strategies were learnt in a simulated environment and tested in a laboratory setting with 32 users. These dialogues were used to evaluate three types of machine dialogue behaviour: hand-coded, fully-learnt and semi-learnt. These experiments also served to evaluate the realism of simulated dialogues using two proposed metrics contrasted with ‘PrecisionRecall’. The learnt dialogue behaviours used the Semi-Markov Decision Process (SMDP) model, and we report the first evaluation of this model in a realistic conversational environment. Experimental results in the travel planning domain provide evidence to support the following claims: (a) hierarchical semi-learnt dialogue agents are a better alternative (with higher overall performance) than deterministic or fully-learnt behaviour; (b) spoken dialogue strategies learnt with highly coherent user behaviour and conservative recognition error rates (keyword error rate of 20%) can outperform a reasonable hand-coded strategy; and (c) hierarchical reinforcement learning dialogue agents are feasible and promising for the (semi) automatic design of optimized dialogue behaviours in larger-scale systems. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "85da95f8d04a8c394c320d2cce25a606",
"text": "Improved numerical weather prediction simulations have led weather services to examine how and where human forecasters add value to forecast production. The Forecast Production Assistant (FPA) was developed with that in mind. The authors discuss the Forecast Generator (FOG), the first application developed on the FPA. FOG is a bilingual report generator that produces routine and special purpose forecast directly from the FPA's graphical weather predictions. Using rules and a natural-language generator, FOG converts weather maps into forecast text. The natural-language issues involved are relevant to anyone designing a similar system.<<ETX>>",
"title": ""
},
{
"docid": "5b08a93afae9cf64b5300c586bfb3fdc",
"text": "Social interactions are characterized by distinct forms of interdependence, each of which has unique effects on how behavior unfolds within the interaction. Despite this, little is known about the psychological mechanisms that allow people to detect and respond to the nature of interdependence in any given interaction. We propose that interdependence theory provides clues regarding the structure of interdependence in the human ancestral past. In turn, evolutionary psychology offers a framework for understanding the types of information processing mechanisms that could have been shaped under these recurring conditions. We synthesize and extend these two perspectives to introduce a new theory: functional interdependence theory (FIT). FIT can generate testable hypotheses about the function and structure of the psychological mechanisms for inferring interdependence. This new perspective offers insight into how people initiate and maintain cooperative relationships, select social partners and allies, and identify opportunities to signal social motives.",
"title": ""
},
{
"docid": "01b2c742693e24e431b1bb231ae8a135",
"text": "Over the years, software development failures is really a burning issue, might be ascribed to quite a number of attributes, of which, no-compliance of users requirements and using the non suitable technique to elicit user requirements are considered foremost. In order to address this issue and to facilitate system designers, this study had filtered and compared user requirements elicitation technique, based on principles of requirements engineering. This comparative study facilitates developers to build systems based on success stories, making use of a optimistic perspective for achieving a foreseeable future. This paper is aimed at enhancing processes of choosing a suitable technique to elicit user requirements; this is crucial to determine the requirements of the user, as it enables much better software development and does not waste resources unnecessarily. Basically, this study will complement the present approaches, by representing a optimistic and potential factor for every single method in requirements engineering, which results in much better user needs, and identifies novel and distinctive specifications. Keywords— Requirements Engineering, Requirements Elicitation Techniques, Conversational methods, Observational methods, Analytic methods, Synthetic methods.",
"title": ""
},
{
"docid": "c495fadfd4c3e17948e71591e84c3398",
"text": "A real-time, digital algorithm for pulse width modulation (PWM) with distortion-free baseband is developed in this paper. The algorithm not only eliminates the intrinsic baseband distortion of digital PWM but also avoids the appearance of side-band components of the carrier in the baseband even for low switching frequencies. Previous attempts to implement digital PWM with these spectral properties required several processors due to their complexity; the proposed algorithm uses only several FIR filters and a few multiplications and additions and therefore is implemented in real time on a standard DSP. The performance of the algorithm is compared with that of uniform, double-edge PWM modulator via experimental measurements for several bandlimited modulating signals.",
"title": ""
},
{
"docid": "aec5c475caa7f2e0490c871882e94363",
"text": "The use of prognostic methods in maintenance in order to predict remaining useful life is receiving more attention over the past years. The use of these techniques in maintenance decision making and optimization in multi-component systems is however a still underexplored area. The objective of this paper is to optimally plan maintenance for a multi-component system based on prognostic/predictive information while considering different component dependencies (i.e. economic, structural and stochastic dependence). Consequently, this paper presents a dynamic predictive maintenance policy for multi-component systems that minimizes the long-term mean maintenance cost per unit time. The proposed maintenance policy is a dynamic method as the maintenance schedule is updated when new information on the degradation and remaining useful life of components becomes available. The performance, regarding the objective of minimal long-term mean cost per unit time, of the developed dynamic predictive maintenance policy is compared to five other conventional maintenance policies, these are: block-based maintenance, age-based maintenance, age-based maintenance with grouping, inspection condition-based maintenance and continuous condition-based maintenance. The ability of the predictive maintenance policy to react to changing component deterioration and dependencies within a multi-component system is quantified and the results show significant cost",
"title": ""
},
{
"docid": "4e71be70e5c8c081c5ff60f8b6cb5449",
"text": "Spin-transfer torque magnetic random access memory (STT-MRAM) is considered as one of the most promising candidates to build up a true universal memory thanks to its fast write/read speed, infinite endurance, and nonvolatility. However, the conventional access architecture based on 1 transistor + 1 memory cell limits its storage density as the selection transistor should be large enough to ensure the write current higher than the critical current for the STT operation. This paper describes a design of cross-point architecture for STT-MRAM. The mean area per word corresponds to only two transistors, which are shared by a number of bits (e.g., 64). This leads to significant improvement of data density (e.g., 1.75 F2/bit). Special techniques are also presented to address the sneak currents and low-speed issues of conventional cross-point architecture, which are difficult to surmount and few efficient design solutions have been reported in the literature. By using an STT-MRAM SPICE model including precise experimental parameters and STMicroelectronics 65 nm technology, some chip characteristic results such as cell area, data access speed, and power have been calculated or simulated to demonstrate the expected performances of this new memory architecture.",
"title": ""
},
{
"docid": "2b109799a55bcb1c0592c02b60478975",
"text": "Zero-shot learning (ZSL) is to construct recognition models for unseen target classes that have no labeled samples for training. It utilizes the class attributes or semantic vectors as side information and transfers supervision information from related source classes with abundant labeled samples. Existing ZSL approaches adopt an intermediary embedding space to measure the similarity between a sample and the attributes of a target class to perform zero-shot classification. However, this way may suffer from the information loss caused by the embedding process and the similarity measure cannot fully make use of the data distribution. In this paper, we propose a novel approach which turns the ZSL problem into a conventional supervised learning problem by synthesizing samples for the unseen classes. Firstly, the probability distribution of an unseen class is estimated by using the knowledge from seen classes and the class attributes. Secondly, the samples are synthesized based on the distribution for the unseen class. Finally, we can train any supervised classifiers based on the synthesized samples. Extensive experiments on benchmarks demonstrate the superiority of the proposed approach to the state-of-the-art ZSL approaches.",
"title": ""
},
{
"docid": "bc43482b0299fc339cf13df6e9288410",
"text": "Acute auricular hematoma is common after blunt trauma to the side of the head. A network of vessels provides a rich blood supply to the ear, and the ear cartilage receives its nutrients from the overlying perichondrium. Prompt management of hematoma includes drainage and prevention of reaccumulation. If left untreated, an auricular hematoma can result in complications such as perichondritis, infection, and necrosis. Cauliflower ear may result from long-standing loss of blood supply to the ear cartilage and formation of neocartilage from disrupted perichondrium. Management of cauliflower ear involves excision of deformed cartilage and reshaping of the auricle.",
"title": ""
},
{
"docid": "e1c04d30c7b8f71d9c9b19cb2bb36a33",
"text": "This Guide has been written to provide guidance for individuals involved in curriculum design who wish to develop research skills and foster the attributes in medical undergraduates that help develop research. The Guide will provoke debate on an important subject, and although written specifically with undergraduate medical education in mind, we hope that it will be of interest to all those involved with other health professionals' education. Initially, the Guide describes why research skills and its related attributes are important to those pursuing a medical career. It also explores the reasons why research skills and an ethos of research should be instilled into professionals of the future. The Guide also tries to define what these skills and attributes should be for medical students and lays out the case for providing opportunities to develop research expertise in the undergraduate curriculum. Potential methods to encourage the development of research-related attributes are explored as are some suggestions as to how research skills could be taught and assessed within already busy curricula. This publication also discusses the real and potential barriers to developing research skills in undergraduate students, and suggests strategies to overcome or circumvent these. Whilst we anticipate that this Guide will appeal to all levels of expertise in terms of student research, we hope that, through the use of case studies, we will provide practical advice to those currently developing this area within their curriculum.",
"title": ""
},
{
"docid": "66d5101d55595754add37e9e50952056",
"text": "The cognitive neural prosthetic (CNP) is a very versatile method for assisting paralyzed patients and patients with amputations. The CNP records the cognitive state of the subject, rather than signals strictly related to motor execution or sensation. We review a number of high-level cortical signals and their application for CNPs, including intention, motor imagery, decision making, forward estimation, executive function, attention, learning, and multi-effector movement planning. CNPs are defined by the cognitive function they extract, not the cortical region from which the signals are recorded. However, some cortical areas may be better than others for particular applications. Signals can also be extracted in parallel from multiple cortical areas using multiple implants, which in many circumstances can increase the range of applications of CNPs. The CNP approach relies on scientific understanding of the neural processes involved in cognition, and many of the decoding algorithms it uses also have parallels to underlying neural circuit functions. 169 A nn u. R ev . P sy ch ol . 2 01 0. 61 :1 69 -1 90 . D ow nl oa de d fr om a rj ou rn al s. an nu al re vi ew s. or g by C al if or ni a In st itu te o f T ec hn ol og y on 0 1/ 03 /1 0. F or p er so na l u se o nl y. ANRV398-PS61-07 ARI 17 November 2009 19:51 Cognitive neural prosthetics (CNPs): instruments that consist of an array of electrodes, a decoding algorithm, and an external device controlled by the processed cognitive signal Decoding algorithms: computer algorithms that interpret neural signals for the purposes of understanding their function or for providing control signals to machines",
"title": ""
},
{
"docid": "a8e72235f2ec230a1be162fa6129db5e",
"text": "Lateral inhibition in top-down feedback is widely existing in visual neurobiology, but such an important mechanism has not be well explored yet in computer vision. In our recent research, we find that modeling lateral inhibition in convolutional neural network (LICNN) is very useful for visual attention and saliency detection. In this paper, we propose to formulate lateral inhibition inspired by the related studies from neurobiology, and embed it into the top-down gradient computation of a general CNN for classification, i.e. only category-level information is used. After this operation (only conducted once), the network has the ability to generate accurate category-specific attention maps. Further, we apply LICNN for weakly-supervised salient object detection. Extensive experimental studies on a set of databases, e.g., ECSSD, HKU-IS, PASCAL-S and DUT-OMRON, demonstrate the great advantage of LICNN which achieves the state-ofthe-art performance. It is especially impressive that LICNN with only category-level supervised information even outperforms some recent methods with segmentation-level super-",
"title": ""
},
{
"docid": "5c394c460f01c451e2ede526f73426ee",
"text": "Renal transplant recipients are at increased risk of bladder carcinoma. The aetiology is unknown but a polyoma virus (PV), BK virus (BKV), may play a role; urinary reactivation of this virus is common post-renal transplantation and PV large T-antigen (T-Ag) has transforming activity. In this study, we investigate the potential role of BKV in post-transplant urothelial carcinoma by immunostaining tumour tissue for PV T-Ag. There was no positivity for PV T-Ag in urothelial carcinomas from 20 non-transplant patients. Since 1990, 10 transplant recipients in our unit have developed urothelial carcinoma, and tumour tissue was available in eight recipients. Two patients were transplanted since the first case of PV nephropathy (PVN) was diagnosed in our unit in 2000 and both showed PV reactivation post-transplantation. In one of these patients, there was strong nuclear staining for PV T-Ag in tumour cells, with no staining of non-neoplastic urothelium. We conclude that PV infection is not associated with urothelial carcinoma in non-transplant patients, and is uncommon in transplant-associated tumours. Its presence in all tumour cells in one patient transplanted in the PVN era might suggest a possible role in tumorigenesis in that case.",
"title": ""
},
{
"docid": "186f2950bd4ce621eb0696c2fd09a468",
"text": "In this paper, I investigate the use of a disentangled VAE for downstream image classification tasks. I train a disentangled VAE in an unsupervised manner, and use the learned encoder as a feature extractor on top of which a linear classifier is learned. The models are trained and evaluated on the MNIST handwritten digits dataset. Experiments compared the disentangled VAE with both a standard (entangled) VAE and a vanilla supervised model. Results show that the disentangled VAE significantly outperforms the other two models when the proportion of labelled data is artificially reduced, while it loses this advantage when the amount of labelled data increases, and instead matches the performance of the other models. These results suggest that the disentangled VAE may be useful in situations where labelled data is scarce but unlabelled data is abundant.",
"title": ""
},
{
"docid": "538047fc099d0062ab100343b26f5cb7",
"text": "AIM\nTo examine the evidence on the association between cannabis and depression and evaluate competing explanations of the association.\n\n\nMETHODS\nA search of Medline, Psychinfo and EMBASE databases was conducted. All references in which the terms 'cannabis', 'marijuana' or 'cannabinoid', and in which the words 'depression/depressive disorder/depressed', 'mood', 'mood disorder' or 'dysthymia' were collected. Only research studies were reviewed. Case reports are not discussed.\n\n\nRESULTS\nThere was a modest association between heavy or problematic cannabis use and depression in cohort studies and well-designed cross-sectional studies in the general population. Little evidence was found for an association between depression and infrequent cannabis use. A number of studies found a modest association between early-onset, regular cannabis use and later depression, which persisted after controlling for potential confounding variables. There was little evidence of an increased risk of later cannabis use among people with depression and hence little support for the self-medication hypothesis. There have been a limited number of studies that have controlled for potential confounding variables in the association between heavy cannabis use and depression. These have found that the risk is much reduced by statistical control but a modest relationship remains.\n\n\nCONCLUSIONS\nHeavy cannabis use and depression are associated and evidence from longitudinal studies suggests that heavy cannabis use may increase depressive symptoms among some users. It is still too early, however, to rule out the hypothesis that the association is due to common social, family and contextual factors that increase risks of both heavy cannabis use and depression. Longitudinal studies and studies of twins discordant for heavy cannabis use and depression are needed to rule out common causes. If the relationship is causal, then on current patterns of cannabis use in the most developed societies cannabis use makes, at most, a modest contribution to the population prevalence of depression.",
"title": ""
},
{
"docid": "3b78223f5d11a56dc89a472daf23ca49",
"text": "Shadow maps provide a fast and convenient method of identifying shadows in scenes but can introduce aliasing. This paper introduces the Adaptive Shadow Map (ASM) as a solution to this problem. An ASM removes aliasing by resolving pixel size mismatches between the eye view and the light source view. It achieves this goal by storing the light source view (i.e., the shadow map for the light source) as a hierarchical grid structure as opposed to the conventional flat structure. As pixels are transformed from the eye view to the light source view, the ASM is refined to create higher-resolution pieces of the shadow map when needed. This is done by evaluating the contributions of shadow map pixels to the overall image quality. The improvement process is view-driven, progressive, and confined to a user-specifiable memory footprint. We show that ASMs enable dramatic improvements in shadow quality while maintaining interactive rates.",
"title": ""
},
{
"docid": "0e5a11ef4daeb969702e40ea0c50d7f3",
"text": "OBJECTIVES\nThe aim of this study was to assess the long-term safety and efficacy of the CYPHER (Cordis, Johnson and Johnson, Bridgewater, New Jersey) sirolimus-eluting coronary stent (SES) in percutaneous coronary intervention (PCI) for ST-segment elevation myocardial infarction (STEMI).\n\n\nBACKGROUND\nConcern over the safety of drug-eluting stents implanted during PCI for STEMI remains, and long-term follow-up from randomized trials are necessary. TYPHOON (Trial to assess the use of the cYPHer sirolimus-eluting stent in acute myocardial infarction treated with ballOON angioplasty) randomized 712 patients with STEMI treated by primary PCI to receive either SES (n = 355) or bare-metal stents (BMS) (n = 357). The primary end point, target vessel failure at 1 year, was significantly lower in the SES group than in the BMS group (7.3% vs. 14.3%, p = 0.004) with no increase in adverse events.\n\n\nMETHODS\nA 4-year follow-up was performed. Complete data were available in 501 patients (70%), and the survival status is known in 580 patients (81%).\n\n\nRESULTS\nFreedom from target lesion revascularization (TLR) at 4 years was significantly better in the SES group (92.4% vs. 85.1%; p = 0.002); there were no significant differences in freedom from cardiac death (97.6% and 95.9%; p = 0.37) or freedom from repeat myocardial infarction (94.8% and 95.6%; p = 0.85) between the SES and BMS groups. No difference in definite/probable stent thrombosis was noted at 4 years (SES: 4.4%, BMS: 4.8%, p = 0.83). In the 580 patients with known survival status at 4 years, the all-cause death rate was 5.8% in the SES and 7.0% in the BMS group (p = 0.61).\n\n\nCONCLUSIONS\nIn the 70% of patients with complete follow-up at 4 years, SES demonstrated sustained efficacy to reduce TLR with no difference in death, repeat myocardial infarction or stent thrombosis. (The Study to Assess AMI Treated With Balloon Angioplasty [TYPHOON]; NCT00232830).",
"title": ""
},
{
"docid": "77bbd6d3e1f1ae64bda32cd057cf0580",
"text": "Although great progress has been made in automatic speech recognition, significant performance degradation still exists in noisy environments. Recently, very deep convolutional neural networks CNNs have been successfully applied to computer vision and speech recognition tasks. Based on our previous work on very deep CNNs, in this paper this architecture is further developed to improve recognition accuracy for noise robust speech recognition. In the proposed very deep CNN architecture, we study the best configuration for the sizes of filters, pooling, and input feature maps: the sizes of filters and poolings are reduced and dimensions of input features are extended to allow for adding more convolutional layers. Then the appropriate pooling, padding, and input feature map selection strategies are investigated and applied to the very deep CNN to make it more robust for speech recognition. In addition, an in-depth analysis of the architecture reveals key characteristics, such as compact model scale, fast convergence speed, and noise robustness. The proposed new model is evaluated on two tasks: Aurora4 task with multiple additive noise types and channel mismatch, and the AMI meeting transcription task with significant reverberation. Experiments on both tasks show that the proposed very deep CNNs can significantly reduce word error rate WER for noise robust speech recognition. The best architecture obtains a 10.0% relative reduction over the traditional CNN on AMI, competitive with the long short-term memory recurrent neural networks LSTM-RNN acoustic model. On Aurora4, even without feature enhancement, model adaptation, and sequence training, it achieves a WER of 8.81%, a 17.0% relative improvement over the LSTM-RNN. To our knowledge, this is the best published result on Aurora4.",
"title": ""
},
{
"docid": "8a9680ae0d35a1c53773ccf7dcef4df7",
"text": "Support Vector Machines SVMs have proven to be highly e ective for learning many real world datasets but have failed to establish them selves as common machine learning tools This is partly due to the fact that they are not easy to implement and their standard imple mentation requires the use of optimization packages In this paper we present simple iterative algorithms for training support vector ma chines which are easy to implement and guaranteed to converge to the optimal solution Furthermore we provide a technique for automati cally nding the kernel parameter and best learning rate Extensive experiments with real datasets are provided showing that these al gorithms compare well with standard implementations of SVMs in terms of generalisation accuracy and computational cost while being signi cantly simpler to implement",
"title": ""
},
{
"docid": "2d82220d88794093209aa4b8151e70d9",
"text": "Iterative Hard Thresholding (IHT) is a class of projected gradient descent methods for optimizing sparsity-constrained minimization models, with the best known efficiency and scalability in practice. As far as we know, the existing IHT-style methods are designed for sparse minimization in primal form. It remains open to explore duality theory and algorithms in such a non-convex and NP-hard problem setting. In this paper, we bridge this gap by establishing a duality theory for sparsity-constrained minimization with `2-regularized loss function and proposing an IHT-style algorithm for dual maximization. Our sparse duality theory provides a set of sufficient and necessary conditions under which the original NP-hard/non-convex problem can be equivalently solved in a dual formulation. The proposed dual IHT algorithm is a super-gradient method for maximizing the non-smooth dual objective. An interesting finding is that the sparse recovery performance of dual IHT is invariant to the Restricted Isometry Property (RIP), which is required by virtually all the existing primal IHT algorithms without sparsity relaxation. Moreover, a stochastic variant of dual IHT is proposed for large-scale stochastic optimization. Numerical results demonstrate the superiority of dual IHT algorithms to the state-of-the-art primal IHT-style algorithms in model estimation accuracy and computational efficiency.",
"title": ""
},
{
"docid": "225ac2816e26f156b16ad65401fcbaf6",
"text": "This paper investigates how internet users’ perception of control over their personal information affects how likely they are to click on online advertising on a social networking website. The paper uses data from a randomized field experiment that examined the effectiveness of personalizing ad text with user-posted personal information relative to generic text. The website gave users more control over their personally identifiable information in the middle of the field test. However, the website did not change how advertisers used data to target and personalize ads. Before the policy change, personalized ads did not perform particularly well. However, after this enhancement of perceived control over privacy, users were nearly twice as likely to click on personalized ads. Ads that targeted but did not use personalized text remained unchanged in effectiveness. The increase in effectiveness was larger for ads that used more unique private information to personalize their message and for target groups who were more likely to use opt-out privacy settings.",
"title": ""
}
] | scidocsrr |
fb1dac0bee58d622f78bb84c1f832af7 | Association between online social networking and depression in high school students: behavioral physiology viewpoint. | [
{
"docid": "89c9ad792245fc7f7e7e3b00c1e8147a",
"text": "Contrasting hypotheses were posed to test the effect of Facebook exposure on self-esteem. Objective Self-Awareness (OSA) from social psychology and the Hyperpersonal Model from computer-mediated communication were used to argue that Facebook would either diminish or enhance self-esteem respectively. The results revealed that, in contrast to previous work on OSA, becoming self-aware by viewing one's own Facebook profile enhances self-esteem rather than diminishes it. Participants that updated their profiles and viewed their own profiles during the experiment also reported greater self-esteem, which lends additional support to the Hyperpersonal Model. These findings suggest that selective self-presentation in digital media, which leads to intensified relationship formation, also influences impressions of the self.",
"title": ""
}
] | [
{
"docid": "20a90ed3aa2b428b19e85aceddadce90",
"text": "Deep learning has been a groundbreaking technology in various fields as well as in communications systems. In spite of the notable advancements of deep neural network (DNN) based technologies in recent years, the high computational complexity has been a major obstacle to apply DNN in practical communications systems which require real-time operation. In this sense, challenges regarding practical implementation must be addressed before the proliferation of DNN-based intelligent communications becomes a reality. To the best of the authors’ knowledge, for the first time, this article presents an efficient learning architecture and design strategies including link level verification through digital circuit implementations using hardware description language (HDL) to mitigate this challenge and to deduce feasibility and potential of DNN for communications systems. In particular, DNN is applied for an encoder and a decoder to enable flexible adaptation with respect to the system environments without needing any domain specific information. Extensive investigations and interdisciplinary design considerations including the DNN-based autoencoder structure, learning framework, and low-complexity digital circuit implementations for real-time operation are taken into account by the authors which ascertains the use of DNN-based communications in practice.",
"title": ""
},
{
"docid": "f82ce890d66c746a169a38fdad702749",
"text": "The following review paper presents an overview of the current crop yield forecasting methods and early warning systems for the global strategy to improve agricultural and rural statistics across the globe. Different sections describing simulation models, remote sensing, yield gap analysis, and methods to yield forecasting compose the manuscript. 1. Rationale Sustainable land management for crop production is a hierarchy of systems operating in— and interacting with—economic, ecological, social, and political components of the Earth. This hierarchy ranges from a field managed by a single farmer to regional, national, and global scales where policies and decisions influence crop production, resource use, economics, and ecosystems at other levels. Because sustainability concepts must integrate these diverse issues, agricultural researchers who wish to develop sustainable productive systems and policy makers who attempt to influence agricultural production are confronted with many challenges. A multiplicity of problems can prevent production systems from being sustainable; on the other hand, with sufficient attention to indicators of sustainability, a number of practices and policies could be implemented to accelerate progress. Indicators to quantify changes in crop production systems over time at different hierarchical levels are needed for evaluating the sustainability of different land management strategies. To develop and test sustainability concepts and yield forecast methods globally, it requires the implementation of long-term crop and soil management experiments that include measurements of crop yields, soil properties, biogeochemical fluxes, and relevant socioeconomic indicators. Long-term field experiments cannot be conducted with sufficient detail in space and time to find the best land management practices suitable for sustainable crop production. Crop and soil simulation models, when suitably tested in reasonably diverse space and time, provide a critical tool for finding combinations of management strategies to reach multiple goals required for sustainable crop production. The models can help provide land managers and policy makers with a tool to extrapolate experimental results from one location to others where there is a lack of response information. Agricultural production is significantly affected by environmental factors. Weather influences crop growth and development, causing large intra-seasonal yield variability. In addition, spatial variability of soil properties, interacting with the weather, cause spatial yield variability. Crop agronomic management (e.g. planting, fertilizer application, irrigation, tillage, and so on) can be used to offset the loss in yield due to effects of weather. As a result, yield forecasting represents an important tool for optimizing crop yield and to evaluate the crop-area insurance …",
"title": ""
},
{
"docid": "f6deeee48e0c8f1ed1d922093080d702",
"text": "Foreword: The ACM SIGCHI (Association for Computing Machinery Special Interest Group in Computer Human Interaction) community conducted a deliberative process involving a high-visibility committee, a day-long workshop at CHI99 (Pittsburgh, PA, May 15, 1999) and a collaborative authoring process. This interim report is offered to produce further discussion and input leading to endorsement by the SIGCHI Executive Committee and then other professional societies. The scope of this research agenda included advanced information and communications technology research that could yield benefits over the next two to five years.",
"title": ""
},
{
"docid": "001104ca832b10553b28bbd713e6cbd5",
"text": "In this paper we present a tracker, which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, no geometric matching, and still deliver state-of-the-art tracking performance, as demonstrated on the popular online tracking benchmark (OTB) and six very challenging YouTube videos. The presented tracker simply matches the initial patch of the target in the first frame with candidates in a new frame and returns the most similar patch by a learned matching function. The strength of the matching function comes from being extensively trained generically, i.e., without any data of the target, using a Siamese deep neural network, which we design for tracking. Once learned, the matching function is used as is, without any adapting, to track previously unseen targets. It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined Siamese INstance search Tracker, SINT, which only uses the original observation of the target from the first frame, suffices to reach state-of-the-art performance. Further, we show the proposed tracker even allows for target re-identification after the target was absent for a complete video shot.",
"title": ""
},
{
"docid": "e82cd7c22668b0c9ed62b4afdf49d1f4",
"text": "This paper presents a tutorial on delta-sigma fractional-N PLLs for frequency synthesis. The presentation assumes the reader has a working knowledge of integer-N PLLs. It builds on this knowledge by introducing the additional concepts required to understand ΔΣ fractional-N PLLs. After explaining the limitations of integerN PLLs with respect to tuning resolution, the paper introduces the delta-sigma fractional-N PLL as a means of avoiding these limitations. It then presents a selfcontained explanation of the relevant aspects of deltasigma modulation, an extension of the well known integerN PLL linearized model to delta-sigma fractional-N PLLs, a design example, and techniques for wideband digital modulation of the VCO within a delta-sigma fractional-N PLL.",
"title": ""
},
{
"docid": "e095b0d96a6c0dcc87efbbc3e730b581",
"text": "In this paper, we present ObSteiner, an exact algorithm for the construction of obstacle-avoiding rectilinear Steiner minimum trees (OARSMTs) among complex rectilinear obstacles. This is the first paper to propose a geometric approach to optimally solve the OARSMT problem among complex obstacles. The optimal solution is constructed by the concatenation of full Steiner trees among complex obstacles, which are proven to be of simple structures in this paper. ObSteiner is able to handle complex obstacles, including both convex and concave ones. Benchmarks with hundreds of terminals among a large number of obstacles are solved optimally in a reasonable amount of time.",
"title": ""
},
{
"docid": "c05b6720cdfdf6170ccce6486d485dc0",
"text": "The naturalness of warps is gaining extensive attention in image stitching. Recent warps, such as SPHP and AANAP, use global similarity warps to mitigate projective distortion (which enlarges regions); however, they necessarily bring in perspective distortion (which generates inconsistencies). In this paper, we propose a novel quasi-homography warp, which effectively balances the perspective distortion against the projective distortion in the non-overlapping region to create a more natural-looking panorama. Our approach formulates the warp as the solution of a bivariate system, where perspective distortion and projective distortion are characterized as slope preservation and scale linearization, respectively. Because our proposed warp only relies on a global homography, it is thus totally parameter free. A comprehensive experiment shows that a quasi-homography warp outperforms some state-of-the-art warps in urban scenes, including homography, AutoStitch and SPHP. A user study demonstrates that it wins most users’ favor, compared to homography and SPHP.",
"title": ""
},
{
"docid": "046245929e709ef2935c9413619ab3d7",
"text": "In recent years, there has been a growing intensity of competition in virtually all areas of business in both markets upstream for raw materials such as components, supplies, capital and technology and markets downstream for consumer goods and services. This paper examines the relationships among generic strategy, competitive advantage, and organizational performance. Firstly, the nature of generic strategies, competitive advantage, and organizational performance is examined. Secondly, the relationship between generic strategies and competitive advantage is analyzed. Finally, the implications of generic strategies, organizational performance, performance measures and competitive advantage are studied. This study focuses on: (i) the relationship of generic strategy and organisational performance in Australian manufacturing companies participating in the “Best Practice Program in Australia”, (ii) the relationship between generic strategies and competitive advantage, and (iii) the relationship among generic strategies, competitive advantage and organisational performance. 1999 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4d1eae0f247f1c2db9e3c544a65c041f",
"text": "This papers presents a new system using circular markers to estimate the pose of a camera. Contrary to most markersbased systems using square markers, we advocate the use of circular markers, as we believe that they are easier to detect and provide a pose estimate that is more robust to noise. Unlike existing systems using circular markers, our method computes the exact pose from one single circular marker, and do not need specific points being explicitly shown on the marker (like center, or axes orientation). Indeed, the center and orientation is encoded directly in the marker’s code. We can thus use the entire marker surface for the code design. After solving the back projection problem for one conic correspondence, we end up with two possible poses. We show how to find the marker’s code, rotation and final pose in one single step, by using a pyramidal cross-correlation optimizer. The marker tracker runs at 100 frames/second on a desktop PC and 30 frames/second on a hand-held UMPC.",
"title": ""
},
{
"docid": "38e6384522c9e3e961819ed5b00a7697",
"text": "Cloud gaming has been recognized as a promising shift in the online game industry, with the aim of implementing the “on demand” service concept that has achieved market success in other areas of digital entertainment such as movies and TV shows. The concepts of cloud computing are leveraged to render the game scene as a video stream that is then delivered to players in real-time. The main advantage of this approach is the capability of delivering high-quality graphics games to any type of end user device; however, at the cost of high bandwidth consumption and strict latency requirements. A key challenge faced by cloud game providers lies in configuring the video encoding parameters so as to maximize player Quality of Experience (QoE) while meeting bandwidth availability constraints. In this article, we tackle one aspect of this problem by addressing the following research question: Is it possible to improve service adaptation based on information about the characteristics of the game being streamed? To answer this question, two main challenges need to be addressed: the need for different QoE-driven video encoding (re-)configuration strategies for different categories of games, and how to determine a relevant game categorization to be used for assigning appropriate configuration strategies. We investigate these problems by conducting two subjective laboratory studies with a total of 80 players and three different games. Results indicate that different strategies should likely be applied for different types of games, and show that existing game classifications are not necessarily suitable for differentiating game types in this context. We thus further analyze objective video metrics of collected game play video traces as well as player actions per minute and use this as input data for clustering of games into two clusters. Subjective results verify that different video encoding configuration strategies may be applied to games belonging to different clusters.",
"title": ""
},
{
"docid": "93ea7c59bad8181b0379f39e00f4d2e8",
"text": "Breadth-First Search (BFS) is a key graph algorithm with many important applications. In this work, we focus on a special class of graph traversal algorithm - concurrent BFS - where multiple breadth-first traversals are performed simultaneously on the same graph. We have designed and developed a new approach called iBFS that is able to run i concurrent BFSes from i distinct source vertices, very efficiently on Graphics Processing Units (GPUs). iBFS consists of three novel designs. First, iBFS develops a single GPU kernel for joint traversal of concurrent BFS to take advantage of shared frontiers across different instances. Second, outdegree-based GroupBy rules enables iBFS to selectively run a group of BFS instances which further maximizes the frontier sharing within such a group. Third, iBFS brings additional performance benefit by utilizing highly optimized bitwise operations on GPUs, which allows a single GPU thread to inspect a vertex for concurrent BFS instances. The evaluation on a wide spectrum of graph benchmarks shows that iBFS on one GPU runs up to 30x faster than executing BFS instances sequentially, and on 112 GPUs achieves near linear speedup with the maximum performance of 57,267 billion traversed edges per second (TEPS).",
"title": ""
},
{
"docid": "a0566ac90d164db763c7efa977d4bc0d",
"text": "Dead-time controls for synchronous buck converter are challenging due to the difficulties in accurate sensing and processing the on/off dead-time errors. For the control of dead-times, an integral feedback control using switched capacitors and a fast timing sensing circuit composed of MOSFET differential amplifiers and switched current sources are proposed. Experiments for a 3.3 V input, 1.5 V-0.3 A output converter demonstrated 1.3 ~ 4.6% efficiency improvement over a wide load current range.",
"title": ""
},
{
"docid": "ce5ede79daee56d50f5b086ad8f18a28",
"text": "In order to improve the efficiency and classification ability of Support vector machines (SVM) based on stochastic gradient descent algorithm, three algorithms of improved stochastic gradient descent (SGD) are used to solve support vector machine, which are Momentum, Nesterov accelerated gradient (NAG), RMSprop. The experimental results show that the algorithm based on RMSprop for solving the linear support vector machine has faster convergence speed and higher testing precision on five datasets (Alpha, Gamma, Delta, Mnist, Usps).",
"title": ""
},
{
"docid": "dd732081865bb209276acd3bb76ee08f",
"text": "A 57-64-GHz low phase-error 5-bit switch-type phase shifter integrated with a low phase-variation variable gain amplifier (VGA) is implemented through TSMC 90-nm CMOS low-power technology. Using the phase compensation technique, the proposed VGA can provide appropriate gain tuning with almost constant phase characteristics, thus greatly reducing the phase-tuning complexity in a phased-array system. The measured root mean square (rms) phase error of the 5-bit phase shifter is 2° at 62 GHz. The phase shifter has a low group-delay deviation (phase distortion) of +/- 8.5 ps and an excellent insertion loss flatness of ±0.8 dB for a specific phase-shifting state, across 57-64 GHz. For all 32 states, the insertion loss is 14.6 ± 3 dB, including pad loss at 60 GHz. For the integrated phase shifter and VGA, the VGA can provide 6.2-dB gain tuning range, which is wide enough to cover the loss variation of the phase shifter, with only 1.86° phase variation. The measured rms phase error of the 5-bit phase shifter and VGA is 3.8° at 63 GHz. The insertion loss of all 32 states is 5.4 dB, including pad loss at 60 GHz, and the loss flatness is ±0.8 dB over 57-64 GHz. To the best of our knowledge, the 5-bit phase shifter presents the best rms phase error at center frequency among the V-band switch-type phase shifter.",
"title": ""
},
{
"docid": "646a1e7c1a71dc89fa92d76a19c7389e",
"text": "As modern GPUs rely partly on their on-chip memories to counter the imminent off-chip memory wall, the efficient use of their caches has become important for performance and energy. However, optimising cache locality system-atically requires insight into and prediction of cache behaviour. On sequential processors, stack distance or reuse distance theory is a well-known means to model cache behaviour. However, it is not straightforward to apply this theory to GPUs, mainly because of the parallel execution model and fine-grained multi-threading. This work extends reuse distance to GPUs by modelling: (1) the GPU's hierarchy of threads, warps, threadblocks, and sets of active threads, (2) conditional and non-uniform latencies, (3) cache associativity, (4) miss-status holding-registers, and (5) warp divergence. We implement the model in C++ and extend the Ocelot GPU emulator to extract lists of memory addresses. We compare our model with measured cache miss rates for the Parboil and PolyBench/GPU benchmark suites, showing a mean absolute error of 6% and 8% for two cache configurations. We show that our model is faster and even more accurate compared to the GPGPU-Sim simulator.",
"title": ""
},
{
"docid": "ec772eccaa45eb860582820e751f3415",
"text": "Navigational assistance aims to help visually-impaired people to ambulate the environment safely and independently. This topic becomes challenging as it requires detecting a wide variety of scenes to provide higher level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up within several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases the latency and burdens the computational resources. In this paper, we put forward seizing pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for the terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture, aimed at attaining efficient semantic understanding. We have integrated the approach in a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments prove the qualified accuracy over state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectivity and versatility of the assistive framework.",
"title": ""
},
{
"docid": "db61ab44bfb0e7eddf2959121a79a2ee",
"text": "This paper analyzes the supply and demand for Bitcoinbased Ponzi schemes. There are a variety of these types of scams: from long cons such as Bitcoin Savings & Loans to overnight doubling schemes that do not take off. We investigate what makes some Ponzi schemes successful and others less so. By scouring 11 424 threads on bitcointalk. org, we identify 1 780 distinct scams. Of these, half lasted a week or less. Using survival analysis, we identify factors that affect scam persistence. One approach that appears to elongate the life of the scam is when the scammer interacts a lot with their victims, such as by posting more than a quarter of the comments in the related thread. By contrast, we also find that scams are shorter-lived when the scammers register their account on the same day that they post about their scam. Surprisingly, more daily posts by victims is associated with the scam ending sooner.",
"title": ""
},
{
"docid": "35a063ab339f32326547cc54bee334be",
"text": "We present a model for attacking various cryptographic schemes by taking advantage of random hardware faults. The model consists of a black-box containing some cryptographic secret. The box interacts with the outside world by following a cryptographic protocol. The model supposes that from time to time the box is affected by a random hardware fault causing it to output incorrect values. For example, the hardware fault flips an internal register bit at some point during the computation. We show that for many digital signature and identification schemes these incorrect outputs completely expose the secrets stored in the box. We present the following results: (1) The secret signing key used in an implementation of RSA based on the Chinese Remainder Theorem (CRT) is completely exposed from a single erroneous RSA signature, (2) for non-CRT implementations of RSA the secret key is exposed given a large number (e.g. 1000) of erroneous signatures, (3) the secret key used in Fiat—Shamir identification is exposed after a small number (e.g. 10) of faulty executions of the protocol, and (4) the secret key used in Schnorr's identification protocol is exposed after a much larger number (e.g. 10,000) of faulty executions. Our estimates for the number of necessary faults are based on standard security parameters such as a 1024-bit modulus, and a 2 -40 identification error probability. Our results demonstrate the importance of preventing errors in cryptographic computations. We conclude the paper with various methods for preventing these attacks.",
"title": ""
},
{
"docid": "6de71e8106d991d2c3d2b845a9e0a67e",
"text": "XML repositories are now a widespread means for storing and exchanging information on the Web. As these repositories become increasingly used in dynamic applications such as e-commerce, there is a rapidly growing need for a mechanism to incorporate reactive functionality in an XML setting. Event-condition-action (ECA) rules are a technology from active databases and are a natural method for supporting suchfunctionality. ECA rules can be used for activities such as automatically enforcing document constraints, maintaining repository statistics, and facilitating publish/subscribe applications. An important question associated with the use of a ECA rules is how to statically predict their run-time behaviour. In this paper, we define a language for ECA rules on XML repositories. We then investigate methods for analysing the behaviour of a set of ECA rules, a task which has added complexity in this XML setting compared with conventional active databases.",
"title": ""
},
{
"docid": "0f3d520a6d09c136816a9e0493c45db1",
"text": "Specular reflection exists widely in photography and causes the recorded color deviating from its true value, thus, fast and high quality highlight removal from a single nature image is of great importance. In spite of the progress in the past decades in highlight removal, achieving wide applicability to the large diversity of nature scenes is quite challenging. To handle this problem, we propose an analytic solution to highlight removal based on an L2 chromaticity definition and corresponding dichromatic model. Specifically, this paper derives a normalized dichromatic model for the pixels with identical diffuse color: a unit circle equation of projection coefficients in two subspaces that are orthogonal to and parallel with the illumination, respectively. In the former illumination orthogonal subspace, which is specular-free, we can conduct robust clustering with an explicit criterion to determine the cluster number adaptively. In the latter, illumination parallel subspace, a property called pure diffuse pixels distribution rule helps map each specular-influenced pixel to its diffuse component. In terms of efficiency, the proposed approach involves few complex calculation, and thus can remove highlight from high resolution images fast. Experiments show that this method is of superior performance in various challenging cases.",
"title": ""
}
] | scidocsrr |
ca2584c9be2200d80892a7708347c83b | An Investigation of the Role of Dependency in Predicting continuance Intention to Use Ubiquitous Media Systems: Combining a Media system Perspective with Expectation-confirmation Theories | [
{
"docid": "e83e6284d3c9cf8fddf972a25d869a1b",
"text": "Internet-based learning systems are being used in many universities and firms but their adoption requires a solid understanding of the user acceptance processes. Our effort used an extended version of the technology acceptance model (TAM), including cognitive absorption, in a formal empirical study to explain the acceptance of such systems. It was intended to provide insight for improving the assessment of on-line learning systems and for enhancing the underlying system itself. The work involved the examination of the proposed model variables for Internet-based learning systems acceptance. Using an on-line learning system as the target technology, assessment of the psychometric properties of the scales proved acceptable and confirmatory factor analysis supported the proposed model structure. A partial-least-squares structural modeling approach was used to evaluate the explanatory power and causal links of the model. Overall, the results provided support for the model as explaining acceptance of an on-line learning system and for cognitive absorption as a variable that influences TAM variables. # 2004 Elsevier B.V. All rights reserved.",
"title": ""
}
] | [
{
"docid": "1b82e1fa8619480ba194c83c5370da5d",
"text": "This study presents an extended technology acceptance model (TAM) that integrates innovation diffusion theory, perceived risk and cost into the TAM to investigate what determines user mobile commerce (MC) acceptance. The proposed model was empirically tested using data collected from a survey of MC consumers. The structural equation modeling technique was used to evaluate the causal model and confirmatory factor analysis was performed to examine the reliability and validity of the measurement model. Our findings indicated that all variables except perceived ease of use significantly affected users’ behavioral intent. Among them, the compatibility had the most significant influence. Furthermore, a striking, and somewhat puzzling finding was the positive influence of perceived risk on behavioral intention to use. The implication of this work to both researchers and practitioners is discussed. # 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ca8aa3e930fd36a16ac36546a25a1fde",
"text": "Accurate State-of-Charge (SOC) estimation of Li-ion batteries is essential for effective battery control and energy management of electric and hybrid electric vehicles. To this end, first, the battery is modelled by an OCV-R-RC equivalent circuit. Then, a dual Bayesian estimation scheme is developed-The battery model parameters are identified online and fed to the SOC estimator, the output of which is then fed back to the parameter identifier. Both parameter identification and SOC estimation are treated in a Bayesian framework. The square-root recursive least-squares estimator and the extended Kalman-Bucy filter are systematically paired up for the first time in the battery management literature to tackle the SOC estimation problem. The proposed method is finally compared with the convectional Coulomb counting method. The results indicate that the proposed method significantly outperforms the Coulomb counting method in terms of accuracy and robustness.",
"title": ""
},
{
"docid": "e3de7dc210e780e1c460a505628ea4ed",
"text": "We present a machine learning technique for driving 3D facial animation by audio input in real time and with low latency. Our deep neural network learns a mapping from input waveforms to the 3D vertex coordinates of a face model, and simultaneously discovers a compact, latent code that disambiguates the variations in facial expression that cannot be explained by the audio alone. During inference, the latent code can be used as an intuitive control for the emotional state of the face puppet.\n We train our network with 3--5 minutes of high-quality animation data obtained using traditional, vision-based performance capture methods. Even though our primary goal is to model the speaking style of a single actor, our model yields reasonable results even when driven with audio from other speakers with different gender, accent, or language, as we demonstrate with a user study. The results are applicable to in-game dialogue, low-cost localization, virtual reality avatars, and telepresence.",
"title": ""
},
{
"docid": "1262ce9e36e4208a1d8e641e5078e083",
"text": "D its fundamental role in legitimizing the modern state system, nationalism has rarely been linked to the outbreak of political violence in the recent literature on ethnic conflict and civil war. to a large extent, this is because the state is absent from many conventional theories of ethnic conflict. indeed, some studies analyze conflict between ethnic groups under conditions of state failure, thus making the absence of the state the very core of the causal argument. others assume that the state is ethnically neutral and try to relate ethnodemographic measures, such as fractionalization and polarization, to civil war. in contrast to these approaches, we analyze the state as an institution that is captured to different degrees by representatives of particular ethnic communities, and thus we conceive of ethnic wars as the result of competing ethnonationalist claims to state power. While our work relates to a rich research tradition that links the causes of such conflicts to the mobilization of ethnic minorities, it also goes beyond this tradition by introducing a new data set that addresses some of the shortcomings of this tradition. our analysis is based on the Ethnic power relations data set (epr), which covers all politically relevant ethnic groups and their access to power around the world from 1946 through 2005. this data set improves significantly on the widely used minorities at risk data set, which restricts its sample to mobilized",
"title": ""
},
{
"docid": "2dd42cce112c61950b96754bb7b4df10",
"text": "Hierarchical methods have been widely explored for object recognition, which is a critical component of scene understanding. However, few existing works are able to model the contextual information (e.g., objects co-occurrence) explicitly within a single coherent framework for scene understanding. Towards this goal, in this paper we propose a novel three-level (superpixel level, object level and scene level) hierarchical model to address the scene categorization problem. Our proposed model is a coherent probabilistic graphical model that captures the object co-occurrence information for scene understanding with a probabilistic chain structure. The efficacy of the proposed model is demonstrated by conducting experiments on the LabelMe dataset.",
"title": ""
},
{
"docid": "385c7c16af40ae13b965938ac3bce34c",
"text": "The information age has brought a deluge of data. Much of this is in text form, insurmountable in scope for humans and incomprehensible in structure for computers. Text mining is an expanding field of research that seeks to utilize the information contained in vast document collections. General data mining methods based on machine learning face challenges with the scale of text data, posing a need for scalable text mining methods. This thesis proposes a solution to scalable text mining: generative models combined with sparse computation. A unifying formalization for generative text models is defined, bringing together research traditions that have used formally equivalent models, but ignored parallel developments. This framework allows the use of methods developed in different processing tasks such as retrieval and classification, yielding effective solutions across different text mining tasks. Sparse computation using inverted indices is proposed for inference on probabilistic models. This reduces the computational complexity of the common text mining operations according to sparsity, yielding probabilistic models with the scalability of modern search engines. The proposed combination provides sparse generative models: a solution for text mining that is general, effective, and scalable. Extensive experimentation on text classification and ranked retrieval datasets are conducted, showing that the proposed solution matches or outperforms the leading task-specific methods in effectiveness, with a order of magnitude decrease in classification times for Wikipedia article categorization with a million classes. The developed methods were further applied in two 2014 Kaggle data mining prize competitions with over a hundred competing teams, earning first and second places.",
"title": ""
},
{
"docid": "a1cd4a4ce70c9c8672eee5ffc085bf63",
"text": "Ternary logic is a promising alternative to conventional binary logic, since it is possible to achieve simplicity and energy efficiency due to the reduced circuit overhead. In this paper, a ternary magnitude comparator design based on Carbon Nanotube Field Effect Transistors (CNFETs) is presented. This design eliminates the usage of complex ternary decoder which is a part of existing designs. Elimination of decoder results in reduction of delay and power. Simulations of proposed and existing designs are done on HSPICE and results proves that the proposed 1-bit comparator consumes 81% less power and shows delay advantage of 41.6% compared to existing design. Further a methodology to extend the 1-bit comparator design to n-bit comparator design is also presented.",
"title": ""
},
{
"docid": "0c4f02b3b361d60da1aec0f0c100dcf9",
"text": "Architecture Compliance Checking (ACC) is an approach to verify the conformance of implemented program code to high-level models of architectural design. ACC is used to prevent architectural erosion during the development and evolution of a software system. Static ACC, based on static software analysis techniques, focuses on the modular architecture and especially on rules constraining the modular elements. A semantically rich modular architecture (SRMA) is expressive and may contain modules with different semantics, like layers and subsystems, constrained by rules of different types. To check the conformance to an SRMA, ACC-tools should support the module and rule types used by the architect. This paper presents requirements regarding SRMA support and an inventory of common module and rule types, on which basis eight commercial and non-commercial tools were tested. The test results show large differences between the tools, but all could improve their support of SRMA, what might contribute to the adoption of ACC in practice.",
"title": ""
},
{
"docid": "e1d3708e826499d7f2e656b66303734f",
"text": "Entity Resolution constitutes a core task for data integration that, due to its quadratic complexity, typically scales to large datasets through blocking methods. These can be configured in two ways. The schema-based configuration relies on schema information in order to select signatures of high distinctiveness and low noise, while the schema-agnostic one treats every token from all attribute values as a signature. The latter approach has significant potential, as it requires no fine-tuning by human experts and it applies to heterogeneous data. Yet, there is no systematic study on its relative performance with respect to the schema-based configuration. This work covers this gap by comparing analytically the two configurations in terms of effectiveness, time efficiency and scalability. We apply them to 9 established blocking methods and to 11 benchmarks of structured data. We provide valuable insights into the internal functionality of the blocking methods with the help of a novel taxonomy. Our studies reveal that the schema-agnostic configuration offers unsupervised and robust definition of blocking keys under versatile settings, trading a higher computational cost for a consistently higher recall than the schema-based one. It also enables the use of state-of-the-art blocking methods without schema knowledge.",
"title": ""
},
{
"docid": "81d4baaf6a22a7a480e4568ae05de1db",
"text": "Procedural textures are normally generated from mathematical models with parameters carefully selected by experienced users. However, for naive users, the intuitive way to obtain a desired texture is to provide semantic descriptions such as ”regular,” ”lacelike,” and ”repetitive” and then a procedural model with proper parameters will be automatically suggested to generate the corresponding textures. By contrast, it is less practical for users to learn mathematical models and tune parameters based on multiple examinations of large numbers of generated textures. In this study, we propose a novel framework that generates procedural textures according to user-defined semantic descriptions, and we establish a mapping between procedural models and semantic texture descriptions. First, based on a vocabulary of semantic attributes collected from psychophysical experiments, a multi-label learning method is employed to annotate a large number of textures with semantic attributes to form a semantic procedural texture dataset. Then, we derive a low dimensional semantic space in which the semantic descriptions can be separated from one other. Finally, given a set of semantic descriptions, the diverse properties of the samples in the semantic space can lead the framework to find an appropriate generation model that uses appropriate parameters to produce a desired texture. The experimental results show that the proposed framework is effective and that the generated textures closely correlate with the input semantic descriptions.",
"title": ""
},
{
"docid": "b4a8541c2870ea3d91819c0c0de68ad3",
"text": "The paper will describe various types of security issues which include confidentality, integrity and availability of data. There exists various threats to security issues traffic analysis, snooping, spoofing, denial of service attack etc. The asymmetric key encryption techniques may provide a higher level of security but compared to the symmetric key encryption Although we have existing techniques symmetric and assymetric key cryptography methods but there exists security concerns. A brief description of proposed framework is defined which uses the random combination of public and private keys. The mechanisms includes: Integrity, Availability, Authentication, Nonrepudiation, Confidentiality and Access control which is achieved by private-private key model as the user is restricted both at sender and reciever end which is restricted in other models. A review of all these systems is described in this paper.",
"title": ""
},
{
"docid": "9edf40bfd6875591543ff46e5e211c74",
"text": "The brain is thought to sense gut stimuli only via the passive release of hormones. This is because no connection has been described between the vagus and the putative gut epithelial sensor cell—the enteroendocrine cell. However, these electrically excitable cells contain several features of epithelial transducers. Using a mouse model, we found that enteroendocrine cells synapse with vagal neurons to transduce gut luminal signals in milliseconds by using glutamate as a neurotransmitter. These synaptically connected enteroendocrine cells are referred to henceforth as neuropod cells. The neuroepithelial circuit they form connects the intestinal lumen to the brainstem in one synapse, opening a physical conduit for the brain to sense gut stimuli with the temporal precision and topographical resolution of a synapse.",
"title": ""
},
{
"docid": "ede1f31a32e59d29ee08c64c1a6ed5f7",
"text": "There are different approaches to the problem of assigning each word of a text with a parts-of-speech tag, which is known as Part-Of-Speech (POS) tagging. In this paper we compare the performance of a few POS tagging techniques for Bangla language, e.g. statistical approach (n-gram, HMM) and transformation based approach (Brill’s tagger). A supervised POS tagging approach requires a large amount of annotated training corpus to tag properly. At this initial stage of POS-tagging for Bangla, we have very limited resource of annotated corpus. We tried to see which technique maximizes the performance with this limited resource. We also checked the performance for English and tried to conclude how these techniques might perform if we can manage a substantial amount of annotated corpus.",
"title": ""
},
{
"docid": "6fa6a26b351c45ac5f33f565bc9c01e8",
"text": "Transfer learning, or inductive transfer, refers to the transfer of knowledge from a source task to a target task. In the context of convolutional neural networks (CNNs), transfer learning can be implemented by transplanting the learned feature layers from one CNN (derived from the source task) to initialize another (for the target task). Previous research has shown that the choice of the source CNN impacts the performance of the target task. In the current literature, there is no principled way for selecting a source CNN for a given target task despite the increasing availability of pre-trained source CNNs. In this paper we investigate the possibility of automatically ranking source CNNs prior to utilizing them for a target task. In particular, we present an information theoretic framework to understand the source-target relationship and use this as a basis to derive an approach to automatically rank source CNNs in an efficient, zero-shot manner. The practical utility of the approach is thoroughly evaluated using the PlacesMIT dataset, MNIST dataset and a real-world MRI database. Experimental results demonstrate the efficacy of the proposed ranking method for transfer learning.",
"title": ""
},
{
"docid": "7bce92a72a19aef0079651c805883eb5",
"text": "Highly realistic virtual human models are rapidly becoming commonplace in computer graphics. These models, often represented by complex shape and requiring labor-intensive process, challenge the problem of automatic modeling. This paper studies the problem and solutions to automatic modeling of animatable virtual humans. Methods for capturing the shape of real people, parameterization techniques for modeling static shape (the variety of human body shapes) and dynamic shape (how the body shape changes as it moves) of virtual humans are classified, summarized and compared. Finally, methods for clothed virtual humans are reviewed.",
"title": ""
},
{
"docid": "9a82f33d84cd622ccd66a731fc9755de",
"text": "To discover relationships and associations between pairs of variables in large data sets have become one of the most significant challenges for bioinformatics scientists. To tackle this problem, maximal information coefficient (MIC) is widely applied as a measure of the linear or non-linear association between two variables. To improve the performance of MIC calculation, in this work we present MIC++, a parallel approach based on the heterogeneous accelerators including Graphic Processing Unit (GPU) and Field Programmable Gate Array (FPGA) engines, focusing on both coarse-grained and fine-grained parallelism. As the evaluation of MIC++, we have demonstrated the performance on the state-of-the-art GPU accelerators and the FPGA-based accelerators. Preliminary estimated results show that the proposed parallel implementation can significantly achieve more than 6X-14X speedup using GPU, and 4X-13X using FPGA-based accelerators.",
"title": ""
},
{
"docid": "b0d9c5716052e9cfe9d61d20e5647c8c",
"text": "We propose Efficient Neural Architecture Search (ENAS), a faster and less expensive approach to automated model design than previous methods. In ENAS, a controller learns to discover neural network architectures by searching for an optimal path within a larger model. The controller is trained with policy gradient to select a path that maximizes the expected reward on the validation set. Meanwhile the model corresponding to the selected path is trained to minimize the cross entropy loss. On the Penn Treebank dataset, ENAS can discover a novel architecture thats achieves a test perplexity of 57.8, which is state-of-the-art among automatic model design methods on Penn Treebank. On the CIFAR-10 dataset, ENAS can design novel architectures that achieve a test error of 2.89%, close to the 2.65% achieved by standard NAS (Zoph et al., 2017). Most importantly, our experiments show that ENAS is more than 10x faster and 100x less resource-demanding than NAS.",
"title": ""
},
{
"docid": "ce53bf5131c125fdca2086e28ccca9d7",
"text": "When a firm practices conservative accounting, changes in the amount of its investments can affect the quality of its earnings. Growth in investment reduces reported earnings and creates reserves. Reducing investment releases those reserves, increasing earnings. If the change in investment is temporary, then current earnings is temporarily depressed or inflated, and thus is not a good indicator of future earnings. This study develops diagnostic measures of this joint effect of investment and conservative accounting. We find that these measures forecast differences in future return on net operating assets relative to current return on net operating assets. Moreover, these measures also forecast stock returns-indicating that investors do not appreciate how conservatism and changes in investment combine to raise questions about the quality of reported earnings.",
"title": ""
},
{
"docid": "6e4f71c411a57e3f705dbd0979c118b1",
"text": "BACKGROUND\nStress perception is highly subjective, and so the complexity of nursing practice may result in variation between nurses in their identification of sources of stress, especially when the workplace and roles of nurses are changing, as is currently occurring in the United Kingdom health service. This could have implications for measures being introduced to address problems of stress in nursing.\n\n\nAIMS\nTo identify nurses' perceptions of workplace stress, consider the potential effectiveness of initiatives to reduce distress, and identify directions for future research.\n\n\nMETHOD\nA literature search from January 1985 to April 2003 was conducted using the key words nursing, stress, distress, stress management, job satisfaction, staff turnover and coping to identify research on sources of stress in adult and child care nursing. Recent (post-1997) United Kingdom Department of Health documents and literature about the views of practitioners was also consulted.\n\n\nFINDINGS\nWorkload, leadership/management style, professional conflict and emotional cost of caring have been the main sources of distress for nurses for many years, but there is disagreement as to the magnitude of their impact. Lack of reward and shiftworking may also now be displacing some of the other issues in order of ranking. Organizational interventions are targeted at most but not all of these sources, and their effectiveness is likely to be limited, at least in the short to medium term. Individuals must be supported better, but this is hindered by lack of understanding of how sources of stress vary between different practice areas, lack of predictive power of assessment tools, and a lack of understanding of how personal and workplace factors interact.\n\n\nCONCLUSIONS\nStress intervention measures should focus on stress prevention for individuals as well as tackling organizational issues. Achieving this will require further comparative studies, and new tools to evaluate the intensity of individual distress.",
"title": ""
},
{
"docid": "517a7833e209403cb3db6f3e58c5f3e4",
"text": "Nowadays ontologies present a growing interest in Data Fusion applications. As a matter of fact, the ontologies are seen as a semantic tool for describing and reasoning about sensor data, objects, relations and general domain theories. In addition, uncertainty is perhaps one of the most important characteristics of the data and information handled by Data Fusion. However, the fundamental nature of ontologies implies that ontologies describe only asserted and veracious facts of the world. Different probabilistic, fuzzy and evidential approaches already exist to fill this gap; this paper recaps the most popular tools. However none of the tools meets exactly our purposes. Therefore, we constructed a Dempster-Shafer ontology that can be imported into any specific domain ontology and that enables us to instantiate it in an uncertain manner. We also developed a Java application that enables reasoning about these uncertain ontological instances.",
"title": ""
}
] | scidocsrr |
a2d04f9748040ba26485b311176ecc8a | Very High Frequency PWM Buck Converters Using Monolithic GaN Half-Bridge Power Stages With Integrated Gate Drivers | [
{
"docid": "e09d142b072122da62ebe79650f42cc5",
"text": "This paper describes a synchronous buck converter based on a GaN-on-SiC integrated circuit, which includes a halfbridge power stage, as well as a modified active pull-up gate driver stage. The integrated modified active pull-up driver takes advantage of depletion-mode device characteristics to achieve fast switching with low power consumption. Design principles and results are presented for a synchronous buck converter prototype operating at 100 MHz switching frequency, delivering up to 7 W from 20 V input voltage. Measured power-stage efficiency peaks above 91%, and remains above 85% over a wide range of operating conditions. Experimental results show that the converter has the ability to accurately track a 20 MHz bandwidth LTE envelope signal with 83.7% efficiency.",
"title": ""
},
{
"docid": "3f77b59dc39102eb18e31dbda0578ecb",
"text": "GaN high electron mobility transistors (HEMTs) are well suited for high-frequency operation due to their lower on resistance and device capacitance compared with traditional silicon devices. When grown on silicon carbide, GaN HEMTs can also achieve very high power density due to the enhanced power handling capabilities of the substrate. As a result, GaN-on-SiC HEMTs are increasingly popular in radio-frequency power amplifiers, and applications as switches in high-frequency power electronics are of high interest. This paper explores the use of GaN-on-SiC HEMTs in conventional pulse-width modulated switched-mode power converters targeting switching frequencies in the tens of megahertz range. Device sizing and efficiency limits of this technology are analyzed, and design principles and guidelines are given to exploit the capabilities of the devices. The results are presented for discrete-device and integrated implementations of a synchronous Buck converter, providing more than 10-W output power supplied from up to 40 V with efficiencies greater than 95% when operated at 10 MHz, and greater than 90% at switching frequencies up to 40 MHz. As a practical application of this technology, the converter is used to accurately track a 3-MHz bandwidth communication envelope signal with 92% efficiency.",
"title": ""
}
] | [
{
"docid": "172f206c8b3b0bc0d75793a13fa9ef88",
"text": "Knowledge bases are important resources for a variety of natural language processing tasks but suffer from incompleteness. We propose a novel embedding model, ITransF, to perform knowledge base completion. Equipped with a sparse attention mechanism, ITransF discovers hidden concepts of relations and transfer statistical strength through the sharing of concepts. Moreover, the learned associations between relations and concepts, which are represented by sparse attention vectors, can be interpreted easily. We evaluate ITransF on two benchmark datasets— WN18 and FB15k for knowledge base completion and obtains improvements on both the mean rank and Hits@10 metrics, over all baselines that do not use additional information.",
"title": ""
},
{
"docid": "326cb7464df9c9361be4e27d82f61455",
"text": "We implemented an attack against WEP, the link-layer security protocol for 802.11 networks. The attack was described in a recent paper by Fluhrer, Mantin, and Shamir. With our implementation, and permission of the network administrator, we were able to recover the 128 bit secret key used in a production network, with a passive attack. The WEP standard uses RC4 IVs improperly, and the attack exploits this design failure. This paper describes the attack, how we implemented it, and some optimizations to make the attack more efficient. We conclude that 802.11 WEP is totally insecure, and we provide some recommendations.",
"title": ""
},
{
"docid": "e0633afb6f4dcb1561dbb23b6e3aa713",
"text": "Software security vulnerabilities are one of the critical issues in the realm of computer security. Due to their potential high severity impacts, many different approaches have been proposed in the past decades to mitigate the damages of software vulnerabilities. Machine-learning and data-mining techniques are also among the many approaches to address this issue. In this article, we provide an extensive review of the many different works in the field of software vulnerability analysis and discovery that utilize machine-learning and data-mining techniques. We review different categories of works in this domain, discuss both advantages and shortcomings, and point out challenges and some uncharted territories in the field.",
"title": ""
},
{
"docid": "8da0bdec21267924d16f9a04e6d9a7ef",
"text": "Traffic light timing optimization is still an active line of research despite the wealth of scientific literature on the topic, and the problem remains unsolved for any non-toy scenario. One of the key issues with traffic light optimization is the large scale of the input information that is available for the controlling agent, namely all the traffic data that is continually sampled by the traffic detectors that cover the urban network. This issue has in the past forced researchers to focus on agents that work on localized parts of the traffic network, typically on individual intersections, and to coordinate every individual agent in a multi-agent setup. In order to overcome the large scale of the available state information, we propose to rely on the ability of deep Learning approaches to handle large input spaces, in the form of Deep Deterministic Policy Gradient (DDPG) algorithm. We performed several experiments with a range of models, from the very simple one (one intersection) to the more complex one (a big city section).",
"title": ""
},
{
"docid": "44abac09424c717f3a691e4ba2640c1a",
"text": "In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short term frame are predicted from the previous frames by means of Long-Short Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as activation signal to detect novel events. There is no evidence of studies focused on comparing previous efforts to automatically recognize novel events from audio signals and giving a broad and in depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this white spot in the literature and provide insight by extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches up to an absolute improvement of 16.4% average F-measure over the three databases.",
"title": ""
},
{
"docid": "48485e967c5aa345a53b91b47cc0e6d0",
"text": "The buccinator musculomucosal flaps are actually considered the main reconstructive option for small-moderate defects of the oral mucosa. In this paper we present our experience with the posteriorly based buccinator musculomucosal flap. A retrospective review was performed of all patients who had had a Bozola flap reconstruction at the Operative Unit of Maxillo-Facial Surgery of Parma, Italy, between 2003 and 2010. The Bozola flap was used in 19 patients. In most cases they had defects of the palate (n=12). All flaps were harvested successfully and no major complications occurred. Minor complications were observed in two cases. At the end of the follow up all patients returned to a normal diet without alterations of speech and swallowing. We consider the Bozola flap the first choice for the reconstruction of defects involving the palate, the cheek and the postero-lateral tongue and floor of the mouth.",
"title": ""
},
{
"docid": "28fcdd3282dd57c760e9e2628764c0f8",
"text": "Constructing a valid measure of presence and discovering the factors that contribute to presence have been much sought after goals of presence researchers and at times have generated controversy among them. This paper describes the results of principal-components analyses of Presence Questionnaire (PQ) data from 325 participants following exposure to immersive virtual environments. The analyses suggest that a 4-factor model provides the best fit to our data. The factors are Involvement, Adaptation/Immersion, Sensory Fidelity, and Interface Quality. Except for the Adaptation/Immersion factor, these factors corresponded to those identified in a cluster analysis of data from an earlier version of the questionnaire. The existence of an Adaptation/Immersion factor leads us to postulate that immersion is greater for those individuals who rapidly and easily adapt to the virtual environment. The magnitudes of the correlations among the factors indicate moderately strong relationships among the 4 factors. Within these relationships, Sensory Fidelity items seem to be more closely related to Involvement, whereas Interface Quality items appear to be more closely related to Adaptation/Immersion, even though there is a moderately strong relationship between the Involvement and Adaptation/Immersion factors.",
"title": ""
},
{
"docid": "b08027d8febf1d7f8393b9934739847d",
"text": "Sarcasm is generally characterized as a figure of speech that involves the substitution of a literal by a figurative meaning, which is usually the opposite of the original literal meaning. We re-frame the sarcasm detection task as a type of word sense disambiguation problem, where the sense of a word is either literal or sarcastic. We call this the Literal/Sarcastic Sense Disambiguation (LSSD) task. We address two issues: 1) how to collect a set of target words that can have either literal or sarcastic meanings depending on context; and 2) given an utterance and a target word, how to automatically detect whether the target word is used in the literal or the sarcastic sense. For the latter, we investigate several distributional semantics methods and show that a Support Vector Machines (SVM) classifier with a modified kernel using word embeddings achieves a 7-10% F1 improvement over a strong lexical baseline.",
"title": ""
},
{
"docid": "653b9148a229bd8b2c1909d98d67e7a4",
"text": "In this work, a beam switched antenna system based on a planar connected antenna array (CAA) is proposed at 28 GHz for 5G applications. The antenna system consists of a 4 × 4 connected slot antenna elements. It is covering frequency band from 27.4 GHz to 28.23 GHz with at least −10dB bandwidth of 830 MHz. It is modeled on a commercially available RO3003 substrate with ∊r equal to 3.3. The dimensions of the board are equal to 61×54×0.13 mm3. The proposed design is compact and low profile. A Butler matrix based feed network is used to steer the beam at different locations.",
"title": ""
},
{
"docid": "fb0b06eb6238c008bef7d3b2e9a80792",
"text": "An N-dimensional image is divided into “object” and “background” segments using a graph cut approach. A graph is formed by connecting all pairs of neighboring image pixels (voxels) by weighted edges. Certain pixels (voxels) have to be a priori identified as object or background seeds providing necessary clues about the image content. Our objective is to find the cheapest way to cut the edges in the graph so that the object seeds are completely separated from the background seeds. If the edge cost is a decreasing function of the local intensity gradient then the minimum cost cut should produce an object/background segmentation with compact boundaries along the high intensity gradient values in the image. An efficient, globally optimal solution is possible via standard min-cut/max-flow algorithms for graphs with two terminals. We applied this technique to interactively segment organs in various 2D and 3D medical images.",
"title": ""
},
{
"docid": "a00fe5032a5e1835120135e6e504d04b",
"text": "Perfect information Monte Carlo (PIMC) search is the method of choice for constructing strong Al systems for trick-taking card games. PIMC search evaluates moves in imperfect information games by repeatedly sampling worlds based on state inference and estimating move values by solving the corresponding perfect information scenarios. PIMC search performs well in trick-taking card games despite the fact that it suffers from the strategy fusion problem, whereby the game's information set structure is ignored because moves are evaluated opportunistically in each world. In this paper we describe imperfect information Monte Carlo (IIMC) search, which aims at mitigating this problem by basing move evaluation on more realistic playout sequences rather than perfect information move values. We show that RecPIMC - a recursive IIMC search variant based on perfect information evaluation - performs considerably better than PIMC search in a large class of synthetic imperfect information games and the popular card game of Skat, for which PIMC search is the state-of-the-art cardplay algorithm.",
"title": ""
},
{
"docid": "1f7bd85c5b28f97565d8b38781e875ab",
"text": "Parental socioeconomic status is among the widely cited factors that has strong association with academic performance of students. Explanatory research design was employed to assess the effects of parents’ socioeconomic status on the academic achievement of students in regional examination. To that end, regional examination result of 538 randomly selected students from thirteen junior secondary schools has been analysed using percentage, independent samples t-tests, Spearman’s rho correlation and one way ANOVA. The results of the analysis revealed that socioeconomic status of parents (particularly educational level and occupational status of parents) has strong association with the academic performance of students. Students from educated and better off families have scored higher result in their regional examination than their counterparts. Being a single parent student and whether parents are living together or not have also a significant impact on the academic performance of students. Parents’ age did not have a significant association with the performance of students.",
"title": ""
},
{
"docid": "e05ea52ecf42826e73ed7095ed162557",
"text": "This paper aims at detecting and recognizing fish species from underwater images by means of Fast R-CNN (Regions with Convolutional Neural and Networks) features. Encouraged by powerful recognition results achieved by Convolutional Neural Networks (CNNs) on generic VOC and ImageNet dataset, we apply this popular deep ConvNets to domain-specific underwater environment which is more complicated than overland situation, using a new dataset of 24277 ImageCLEF fish images belonging to 12 classes. The experimental results demonstrate the promising performance of our networks. Fast R-CNN improves mean average precision (mAP) by 11.2% relative to Deformable Parts Model (DPM) baseline-achieving a mAP of 81.4%, and detects 80× faster than previous R-CNN on a single fish image.",
"title": ""
},
{
"docid": "19acedd03589d1fd1173dd1565d11baf",
"text": "This is the first report on the microbial diversity of xaj-pitha, a rice wine fermentation starter culture through a metagenomics approach involving Illumine-based whole genome shotgun (WGS) sequencing method. Metagenomic DNA was extracted from rice wine starter culture concocted by Ahom community of Assam and analyzed using a MiSeq® System. A total of 2,78,231 contigs, with an average read length of 640.13 bp, were obtained. Data obtained from the use of several taxonomic profiling tools were compared with previously reported microbial diversity studies through the culture-dependent and culture-independent method. The microbial community revealed the existence of amylase producers, such as Rhizopus delemar, Mucor circinelloides, and Aspergillus sp. Ethanol producers viz., Meyerozyma guilliermondii, Wickerhamomyces ciferrii, Saccharomyces cerevisiae, Candida glabrata, Debaryomyces hansenii, Ogataea parapolymorpha, and Dekkera bruxellensis, were found associated with the starter culture along with a diverse range of opportunistic contaminants. The bacterial microflora was dominated by lactic acid bacteria (LAB). The most frequent occurring LAB was Lactobacillus plantarum, Lactobacillus brevis, Leuconostoc lactis, Weissella cibaria, Lactococcus lactis, Weissella para mesenteroides, Leuconostoc pseudomesenteroides, etc. Our study provided a comprehensive picture of microbial diversity associated with rice wine fermentation starter and indicated the superiority of metagenomic sequencing over previously used techniques.",
"title": ""
},
{
"docid": "9f7aaba61ef395f85252820edae5db1b",
"text": "Theory and research on sex differences in adjustment focus largely on parental, societal, and biological influences. However, it also is important to consider how peers contribute to girls' and boys' development. This article provides a critical review of sex differences in several peer relationship processes, including behavioral and social-cognitive styles, stress and coping, and relationship provisions. The authors present a speculative peer-socialization model based on this review in which the implications of these sex differences for girls' and boys' emotional and behavioral development are considered. Central to this model is the idea that sex-linked relationship processes have costs and benefits for girls' and boys' adjustment. Finally, the authors present recent research testing certain model components and propose approaches for testing understudied aspects of the model.",
"title": ""
},
{
"docid": "89ed5dc0feb110eb3abc102c4e50acaf",
"text": "Automatic object detection in infrared images is a vital task for many military defense systems. The high detection rate and low false detection rate of this phase directly affect the performance of the following algorithms in the system as well as the general performance of the system. In this work, a fast and robust algorithm is proposed for detection of small and high intensity objects in infrared scenes. Top-hat transformation and mean filter was used to increase the visibility of the objects, and a two-layer thresholding algorithm was introduced to calculate the object sizes more accurately. Finally, small objects extracted by using post processing methods.",
"title": ""
},
{
"docid": "4ecc49bb99ade138783899b6f9b47f16",
"text": "This paper compares direct reinforcement learning (no explicit model) and model-based reinforcement learning on a simple task: pendulum swing up. We nd that in this task model-based approaches support reinforcement learning from smaller amounts of training data and eecient handling of changing goals.",
"title": ""
},
{
"docid": "f0af0497727f2256aa52b30c3a7f64d1",
"text": "This paper presented a modified particle swarm optimizer algorithm (MPSO). The aggregation degree of the particle swarm was introduced. The particles' diversity was improved through periodically monitoring aggregation degree of the particle swarm. On the later development of the PSO algorithm, it has been taken strategy of the Gaussian mutation to the best particle's position, which enhanced the particles' capacity to jump out of local minima. Several typical benchmark functions with different dimensions have been used for testing. The simulation results show that the proposed method improves the convergence precision and speed of PSO algorithm effectively.",
"title": ""
},
{
"docid": "131c163caef9ab345eada4b2d423aa9d",
"text": "Text pre-processing of Arabic Language is a challenge and crucial stage in Text Categorization (TC) particularly and Text Mining (TM) generally. Stemming algorithms can be employed in Arabic text preprocessing to reduces words to their stems/or root. Arabic stemming algorithms can be ranked, according to three category, as root-based approach (ex. Khoja); stem-based approach (ex. Larkey); and statistical approach (ex. N-Garm). However, no stemming of this language is perfect: The existing stemmers have a small efficiency. In this paper, in order to improve the accuracy of stemming and therefore the accuracy of our proposed TC system, an efficient hybrid method is proposed for stemming Arabic text. The effectiveness of the aforementioned four methods was evaluated and compared in term of the F-measure of the Naïve Bayesian classifier and the Support Vector Machine classifier used in our TC system. The proposed stemming algorithm was found to supersede the other stemming ones: The obtained results illustrate that using the proposed stemmer enhances greatly the performance of Arabic Text Categorization.",
"title": ""
},
{
"docid": "7a62e5a78eabbcbc567d5538a2f35434",
"text": "This paper presents a system for a design and implementation of Optical Arabic Braille Recognition(OBR) with voice and text conversion. The implemented algorithm based on a comparison of Braille dot position extraction in each cell with the database generated for each Braille cell. Many digital image processing have been performed on the Braille scanned document like binary conversion, edge detection, holes filling and finally image filtering before dot extraction. The work in this paper also involved a unique decimal code generation for each Braille cell used as a base for word reconstruction with the corresponding voice and text conversion database. The implemented algorithm achieve expected result through letter and words recognition and transcription accuracy over 99% and average processing time around 32.6 sec per page. using matlab environmemt",
"title": ""
}
] | scidocsrr |
762b69459f5f9cbbb3e67b5bb6528518 | Modellingof a special class of spherical parallel manipulators with Euler parameters | [
{
"docid": "8fa0c59e04193ff1375b3ed544847229",
"text": "In this paper, the problem of workspace analysis of spherical parallel manipulators (SPMs) is addressed with respect to a spherical robotic wrist. The wrist is designed following a modular approach and capable of a unlimited rotation of rolling. An equation dealing with singularity surfaces is derived and branches of the singularity surfaces are identified. By using the Euler parameters, the singularity surfaces are generated in a solid unit sphere, the workspace analysis and dexterity evaluation hence being able to be performed in the confined region of the sphere. Examples of workspace evaluation of the spherical wrist and general SPMs are included to demonstrate the application of the proposed method.",
"title": ""
},
{
"docid": "b427ebf5f9ce8af9383f74dc86819583",
"text": "This paper deals with the in-depth kinematic analysis of a special parallel wrist, called the agile eye. The agile eye is a three-legged spherical parallel robot with revolute joints, in which all pairs of adjacent joint axes are orthogonal. Its most peculiar feature, demonstrated in this paper for the first time, is that its workspace is unlimited and flawed only by six singularity curves (instead of surfaces). These curves correspond to self-motions of the mobile platform and of the legs, or to a lockup configuration. This paper also demonstrates that the four solutions to the direct kinematics of the agile eye (assembly modes) have a simple direct relationship with the eight solutions to the inverse kinematics (working modes)",
"title": ""
}
] | [
{
"docid": "175fa180bc18a59dd6855d469aed91ec",
"text": "A new solution of the inverse kinematics task for a 3-DOF parallel manipulator with a R-P -S joint structure is obtained for a given position of end-effector in the form of simple position equations. Based on this the number of the inverse kinematics task solutions was investigated, in general, equal to four. We identify the size of the manipulator feasible area and simple relationships are found between the position and orientation of the platform. We prove a new theorem stating that, while the end-effector traces a circular horizontal path with its centre at the vertical z-axis, the norm of the joint coordinates vector remains constant.",
"title": ""
},
{
"docid": "d77a8c630e50ed2879cafba7367ed456",
"text": "A survey found the language in use in introductory programming classes in the top U.S. computer science schools.",
"title": ""
},
{
"docid": "99ddcb898895b04f4e86337fe35c1713",
"text": "Emerging self-driving vehicles are vulnerable to different attacks due to the principle and the type of communication systems that are used in these vehicles. These vehicles are increasingly relying on external communication via vehicular ad hoc networks (VANETs). VANETs add new threats to self-driving vehicles that contribute to substantial challenges in autonomous systems. These communication systems render self-driving vehicles vulnerable to many types of malicious attacks, such as Sybil attacks, Denial of Service (DoS), black hole, grey hole and wormhole attacks. In this paper, we propose an intelligent security system designed to secure external communications for self-driving and semi self-driving cars. The proposed scheme is based on Proportional Overlapping Score (POS) to decrease the number of features found in the Kyoto benchmark dataset. The hybrid detection system relies on the Back Propagation neural networks (BP), to detect a common type of attack in VANETs: Denial-of-Service (DoS). The experimental results show that the proposed BP-IDS is capable of identifying malicious vehicles in self-driving and semi self-driving vehicles.",
"title": ""
},
{
"docid": "c7993af6bf01f8b35f5494e5a564d757",
"text": "Microservice Architectures (MA) have the potential to increase the agility of software development. In an era where businesses require software applications to evolve to support emerging software requirements, particularly for Internet of Things (IoT) applications, we examine the issue of microservice granularity and explore its effect upon application latency. Two approaches to microservice deployment are simulated; the first with microservices in a single container, and the second with microservices partitioned across separate containers. We observed a negligible increase in service latency for the multiple container deployment over a single container.",
"title": ""
},
{
"docid": "b0b84a9f7f694dd8d7e0deb1533c4de5",
"text": "Medical institutes use Electronic Medical Record (EMR) to record a series of medical events, including diagnostic information (diagnosis codes), procedures performed (procedure codes) and admission details. Plenty of data mining technologies are applied in the EMR data set for knowledge discovery, which is precious to medical practice. The knowledge found is conducive to develop treatment plans, improve health care and reduce medical expenses, moreover, it could also provide further assistance to predict and control outbreaks of epidemic disease. The growing social value it creates has made it a hot spot for experts and scholars. In this paper, we will summarize the research status of data mining technologies on EMR, and analyze the challenges that EMR research is confronting currently.",
"title": ""
},
{
"docid": "a78caf89bb51dca3a8a95f7736ae1b2b",
"text": "The understanding of sentences involves not only the retrieval of the meaning of single words, but the identification of the relation between a verb and its arguments. The way the brain manages to process word meaning and syntactic relations during language comprehension on-line still is a matter of debate. Here we review the different views discussed in the literature and report data from crucial experiments investigating the temporal and neurotopological parameters of different information types encoded in verbs, i.e. word category information, the verb's argument structure information, the verb's selectional restriction and the morphosyntactic information encoded in the verb's inflection. The neurophysiological indices of the processes dealing with these different information types suggest an initial independence of the processing of word category information from other information types as the basis of local phrase structure building, and a later processing stage during which different information types interact. The relative ordering of the subprocesses appears to be universal, whereas the absolute timing of when during later phrases interaction takes places varies as a function of when the relevant information becomes available. Moreover, the neurophysiological indices for non-local dependency relations vary as a function of the morphological richness of the language.",
"title": ""
},
{
"docid": "e680f8b83e7a2137321cc644724827de",
"text": "A dual-band antenna is developed on a flexible Liquid Crystal Polymer (LCP) substrate for simultaneous operation at 2.45 and 5.8 GHz in high frequency Radio Frequency IDentification (RFID) systems. The response of the low profile double T-shaped slot antenna is preserved when the antenna is placed on platforms such as wood and cardboard, and when bent to conform to a cylindrical plastic box. Furthermore, experiments show that the antenna is still operational when placed at a distance of around 5cm from a metallic surface.",
"title": ""
},
{
"docid": "a8553e9f90e8766694f49dcfdeab83b7",
"text": "The need for solid-state ac-dc converters to improve power quality in terms of power factor correction, reduced total harmonic distortion at input ac mains, and precisely regulated dc output has motivated the investigation of several topologies based on classical converters such as buck, boost, and buck-boost converters. Boost converters operating in continuous-conduction mode have become particularly popular because reduced electromagnetic interference levels result from their utilization. Within this context, this paper introduces a bridgeless boost converter based on a three-state switching cell (3SSC), whose distinct advantages are reduced conduction losses with the use of magnetic elements with minimized size, weight, and volume. The approach also employs the principle of interleaved converters, as it can be extended to a generic number of legs per winding of the autotransformers and high power levels. A literature review of boost converters based on the 3SSC is initially presented so that key aspects are identified. The theoretical analysis of the proposed converter is then developed, while a comparison with a conventional boost converter is also performed. An experimental prototype rated at 1 kW is implemented to validate the proposal, as relevant issues regarding the novel converter are discussed.",
"title": ""
},
{
"docid": "66a49a50b63892a857a40531630be800",
"text": "We present a neural network architecture applied to the problem of refining a dense disparity map generated by a stereo algorithm to which we have no access. Our approach is able to learn which disparity values should be modified and how, from a training set of images, estimated disparity maps and the corresponding ground truth. Its only input at test time is a disparity map and the reference image. Two design characteristics are critical for the success of our network: (i) it is formulated as a recurrent neural network, and (ii) it estimates the output refined disparity map as a combination of residuals computed at multiple scales, that is at different up-sampling and down-sampling rates. The first property allows the network, which we named RecResNet, to progressively improve the disparity map, while the second property allows the corrections to come from different scales of analysis, addressing different types of errors in the current disparity map. We present competitive quantitative and qualitative results on the KITTI 2012 and 2015 benchmarks that surpass the accuracy of previous disparity refinement methods.",
"title": ""
},
{
"docid": "76d1509549ba64157911e6b723f6ebc5",
"text": "A single-stage soft-switching converter is proposed for universal line voltage applications. A boost type of active-clamp circuit is used to achieve zero-voltage switching operation of the power switches. A simple DC-link voltage feedback scheme is applied to the proposed converter. A resonant voltage-doubler rectifier helps the output diodes to achieve zero-current switching operation. The reverse-recovery losses of the output diodes can be eliminated without any additional components. The DC-link capacitor voltage can be reduced, providing reduced voltage stresses of switching devices. Furthermore, power conversion efficiency can be improved by the soft-switching operation of switching devices. The performance of the proposed converter is evaluated on a 160-W (50 V/3.2 A) experimental prototype. The proposed converter complies with International Electrotechnical Commission (IEC) 1000-3-2 Class-D requirements for the light-emitting diode power supply of large-sized liquid crystal displays, maintaining the DC-link capacitor voltage within 400 V under the universal line voltage (90-265 Vrms).",
"title": ""
},
{
"docid": "63b283d40abcccd17b4771535ac000e4",
"text": "Developing agents to engage in complex goaloriented dialogues is challenging partly because the main learning signals are very sparse in long conversations. In this paper, we propose a divide-and-conquer approach that discovers and exploits the hidden structure of the task to enable efficient policy learning. First, given successful example dialogues, we propose the Subgoal Discovery Network (SDN) to divide a complex goal-oriented task into a set of simpler subgoals in an unsupervised fashion. We then use these subgoals to learn a multi-level policy by hierarchical reinforcement learning. We demonstrate our method by building a dialogue agent for the composite task of travel planning. Experiments with simulated and real users show that our approach performs competitively against a state-of-theart method that requires human-defined subgoals. Moreover, we show that the learned subgoals are often human comprehensible.",
"title": ""
},
{
"docid": "1f4ff9d732b3512ee9b105f084edd3d2",
"text": "Today, as Network environments become more complex and cyber and Network threats increase, Organizations use wide variety of security solutions against today's threats. For proper and centralized control and management, range of security features need to be integrated into unified security package. Unified threat management (UTM) as a comprehensive network security solution, integrates all of security services such as firewall, URL filtering, virtual private networking, etc. in a single appliance. PfSense is a variant of UTM, and a customized FreeBSD (Unix-like operating system). Specially is used as a router and statefull firewall. It has many packages extend it's capabilities such as Squid3 package as a as a proxy server that cache data and SquidGuard, redirector and access controller plugin for squid3 proxy server. In this paper, with implementing UTM based on PfSense platform we use Squid3 proxy server and SquidGuard proxy filter to avoid extreme amount of unwanted uploading/ downloading over the internet by users in order to optimize our organization's bandwidth consumption. We begin by defining UTM and types of it, PfSense platform with it's key services and introduce a simple and operational solution for security stability and reducing the cost. Finally, results and statistics derived from this approach compared with the prior condition without PfSense platform.",
"title": ""
},
{
"docid": "a931f939e2e0c0f2f8940796ee23e957",
"text": "PURPOSE OF REVIEW\nMany patients requiring cardiac arrhythmia device surgery are on chronic oral anticoagulation therapy. The periprocedural management of their anticoagulation presents a dilemma to physicians, particularly in the subset of patients with moderate-to-high risk of arterial thromboembolic events. Physicians have responded by treating patients with bridging anticoagulation while oral anticoagulation is temporarily discontinued. However, there are a number of downsides to bridging anticoagulation around device surgery; there is a substantial risk of significant device pocket hematoma with important clinical sequelae; bridging anticoagulation may lead to more arterial thromboembolic events and bridging anticoagulation is expensive.\n\n\nRECENT FINDINGS\nIn response to these issues, a number of centers have explored the option of performing device surgery without cessation of oral anticoagulation. The observational data suggest a greatly reduced hematoma rate with this strategy. Despite these encouraging results, most physicians are reluctant to move to operating on continued Coumadin in the absence of confirmatory data from a randomized trial.\n\n\nSUMMARY\nWe have designed a prospective, single-blind, randomized, controlled trial to address this clinical question. In the conventional arm, patients will be bridged. In the experimental arm, patients will continue on oral anticoagulation and the primary outcome is clinically significant hematoma. Our study has clinical relevance to at least 70 000 patients per year in North America.",
"title": ""
},
{
"docid": "4e50e68e099ab77aedcb0abe8b7a9ca2",
"text": "In the downlink transmission scenario, power allocation and beamforming design at the transmitter are essential when using multiple antenna arrays. This paper considers a multiple input–multiple output broadcast channel to maximize the weighted sum-rate under the total power constraint. The classical weighted minimum mean-square error (WMMSE) algorithm can obtain suboptimal solutions but involves high computational complexity. To reduce this complexity, we propose a fast beamforming design method using unsupervised learning, which trains the deep neural network (DNN) offline and provides real-time service online only with simple neural network operations. The training process is based on an end-to-end method without labeled samples avoiding the complicated process of obtaining labels. Moreover, we use the “APoZ”-based pruning algorithm to compress the network volume, which further reduces the computational complexity and volume of the DNN, making it more suitable for low computation-capacity devices. Finally, the experimental results demonstrate that the proposed method improves computational speed significantly with performance close to the WMMSE algorithm.",
"title": ""
},
{
"docid": "54a35bf200d9af060ce38a9aec972f50",
"text": "The linear preferential attachment hypothesis has been shown to be quite successful in explaining the existence of networks with power-law degree distributions. It is then quite important to determine if this mechanism is the consequence of a general principle based on local rules. In this work it is claimed that an effective linear preferential attachment is the natural outcome of growing network models based on local rules. It is also shown that the local models offer an explanation for other properties like the clustering hierarchy and degree correlations recently observed in complex networks. These conclusions are based on both analytical and numerical results for different local rules, including some models already proposed in the literature.",
"title": ""
},
{
"docid": "e4dc1f30a914dc6f710f23b5bc047978",
"text": "Intelligence, expertise, ability and talent, as these terms have traditionally been used in education and psychology, are socially agreed upon labels that minimize the dynamic, evolving, and contextual nature of individual–environment relations. These hypothesized constructs can instead be described as functional relations distributed across whole persons and particular contexts through which individuals appear knowledgeably skillful. The purpose of this article is to support a concept of ability and talent development that is theoretically grounded in 5 distinct, yet interrelated, notions: ecological psychology, situated cognition, distributed cognition, activity theory, and legitimate peripheral participation. Although talent may be reserved by some to describe individuals possessing exceptional ability and ability may be described as an internal trait, in our description neither ability nor talent are possessed. Instead, they are treated as equivalent terms that can be used to describe functional transactions that are situated across person-in-situation. Further, and more important, by arguing that ability is part of the individual–environment transaction, we take the potential to appear talented out of the hands (or heads) of the few and instead treat it as an opportunity that is available to all although it may be actualized more frequently by some.",
"title": ""
},
{
"docid": "21197ea03a0c9ce6061ea524aca10b52",
"text": "Developers of gamified business applications face the challenge of creating motivating gameplay strategies and creative design techniques to deliver subject matter not typically associated with games in a playful way. We currently have limited models that frame what makes gamification effective (i.e., engaging people with a business application). Thus, we propose a design-centric model and analysis tool for gamification: The kaleidoscope of effective gamification. We take a look at current models of game design, self-determination theory and the principles of systems design to deconstruct the gamification layer in the design of these applications. Based on the layers of our model, we provide design guidelines for effective gamification of business applications.",
"title": ""
},
{
"docid": "2a58426989cbfab0be9e18b7ee272b0a",
"text": "Potholes are a nuisance, especially in the developing world, and can often result in vehicle damage or physical harm to the vehicle occupants. Drivers can be warned to take evasive action if potholes are detected in real-time. Moreover, their location can be logged and shared to aid other drivers and road maintenance agencies. This paper proposes a vehicle-based computer vision approach to identify potholes using a window-mounted camera. Existing literature on pothole detection uses either theoretically constructed pothole models or footage taken from advantageous vantage points at low speed, rather than footage taken from within a vehicle at speed. A distinguishing feature of the work presented in this paper is that a thorough exercise was performed to create an image library of actual and representative potholes under different conditions, and results are obtained using a part of this library. A model of potholes is constructed using the image library, which is used in an algorithmic approach that combines a road colour model with simple image processing techniques such as a Canny filter and contour detection. Using this approach, it was possible to detect potholes with a precision of 81.8% and recall of 74.4.%.",
"title": ""
},
{
"docid": "e68aac3565df039aa431bf2a69e27964",
"text": "region, a five-year-old girl with mild asthma presented to the emergency department of a children’s hospital in acute respiratory distress. She had an 11-day history of cough, rhinorrhea and progressive chest discomfort. She was otherwise healthy, with no history of severe respiratory illness, prior hospital admissions or immu nocompromise. Outside of infrequent use of salbutamol, she was not taking any medications, and her routine childhood immunizations, in cluding conjugate pneumococcal vaccine, were up to date. She had not received the pandemic influenza vaccine because it was not yet available for her age group. The patient had been seen previously at a community health centre a week into her symptoms, and a chest radiograph had shown perihi lar and peribronchial thickening but no focal con solidation, atelectasis or pleural effusion. She had then been reassessed 24 hours later at an influenza assessment centre and empirically started on oseltamivir. Two days later, with the onset of vomiting, diarrhea, fever and progressive shortness of breath, she was brought to the emergency department of the children’s hospital. On examination, she was in considerable distress; her heart rate was 170 beats/min, her respiratory rate was 60 breaths/min and her blood pressure was 117/57 mm Hg. Her oxygen saturations on room air were consistently 70%. On auscultation, she had decreased air entry to the right side with bronchial breath sounds. Repeat chest radiography showed almost complete opacification of the right hemithorax, air bronchograms in the middle and lower lobes, and minimal aeration to the apex. This was felt to be in keeping with whole lung consolidation and parapneumonic effusion. The left lung appeared normal. Blood tests done on admission showed a hemoglobin level of 122 (normal 110–140) g/L, a leukocyte count of 1.5 (normal 5.5–15.5) × 10/L (neutrophils 11% [normal 47%] and bands 19% [normal 5%]) and a platelet count of 92 (normal 217–533) × 10/L. Results of blood tests were otherwise unremarkable. Venous blood gas had a pH level of 7.32 (normal 7.35–7.42), partial pressure of carbon dioxide of 43 (normal 32– 43) mm Hg, a base deficit of 3.6 (normal –2 to 3) mmol/L, and a bicarbonate level of 21.8 (normal 21–26) mmol/L. The initial serum creatinine level was 43.0 (normal < 36) μmol/L and the urea level was 6.5 (normal 2.0–7.0) mmol/L, with no clinical evidence of renal dysfunction. Given the patient’s profound increased work of breathing, she was admitted to the intensive care unit (ICU), where intubation was required because of her continued decline over the next 24 hours. Blood cultures taken on admission were negative. Nasopharyngeal aspirates were negative on rapid respiratory viral testing, but antiviral treatment for presumed pandemic (H1N1) influenza was continued given her clinical presentation, the prevalence of pandemic influenza in the community and the low sensitivity of the test in the range of only 62%. Viral cultures were not done. Empiric treatment with intravenous cefotaxime (200 mg/kg/d) and vancomycin (40 mg/kg/d) was started in the ICU for broad antimicrobial coverage, including possible Cases",
"title": ""
},
{
"docid": "eef7ce5b4268054ed6c7de7fdbbf003e",
"text": "This paper proposes a new closed-loop synchronization algorithm, PLL (Phase-Locked Loop), for applications in power conditioner systems for single-phase networks. The structure presented is based on the correlation of the input signal with a complex signal generated from the use of an adaptive filter in a PLL algorithm in order to minimize the computational effort. Moreover, the adapted PLL presents a higher level of rejection for two particular disturbances: interharmonic and subharmonic, when compared to the original algorithm. Simulation and experimental results will be presented in order to prove the efficacy of the proposed adaptive algorithm.",
"title": ""
}
] | scidocsrr |
4c522ee75323641bcadf9828b7bb7acc | A Snapback Suppressed Reverse-Conducting IGBT With a Floating p-Region in Trench Collector | [
{
"docid": "1d6c4f6efccb211ced52dbed51b0be22",
"text": "In this paper, an advanced Reverse Conducting (RC) IGBT concept is presented. The new technology is referred to as the Bi-mode Insulated Gate Transistor (BIGT) implying that the device can operate at the same current densities in transistor (IGBT) mode and freewheeling diode mode by utilizing the same available silicon volume in both operational modes. The BIGT design concept differs from that of the standard RC-IGBT while targeting to fully replace the state-of-the-art two-chip IGBT/Diode approach with a single chip. The BIGT is also capable of improving the over-all performance especially under hard switching conditions.",
"title": ""
},
{
"docid": "79ff4bd891538a0d1b5a002d531257f2",
"text": "Reverse conducting IGBTs are fabricated in a large productive volume for soft switching applications, such as inductive heaters, microwave ovens or lamp ballast, since several years. To satisfy the requirements of hard switching applications, such as inverters in refrigerators, air conditioners or general purpose drives, the reverse recovery behavior of the integrated diode has to be optimized. Two promising concepts for such an optimization are based on a reduction of the charge- carrier lifetime or the anti-latch p+ implantation dose. It is shown that a combination of both concepts will lead to a device with a good reverse recovery behavior, low forward and reverse voltage drop and excellent over current turn- off capability of a trench field-stop IGBT.",
"title": ""
}
] | [
{
"docid": "f437f971d7d553b69d438a469fd26d41",
"text": "This paper introduces a single-chip, 200 200element sensor array implemented in a standard two-metal digital CMOS technology. The sensor is able to grab the fingerprint pattern without any use of optical and mechanical adaptors. Using this integrated sensor, the fingerprint is captured at a rate of 10 F/s by pressing the finger skin onto the chip surface. The fingerprint pattern is sampled by capacitive sensors that detect the electric field variation induced by the skin surface. Several design issues regarding the capacitive sensing problem are reported and the feedback capacitive sensing scheme (FCS) is introduced. More specifically, the problem of the charge injection in MOS switches has been revisited for charge amplifier design.",
"title": ""
},
{
"docid": "07ce1301392e18c1426fd90507dc763f",
"text": "The fluorescent lamp lifetime is very dependent of the start-up lamp conditions. The lamp filament current and temperature during warm-up and at steady-state operation are important to extend the life of a hot-cathode fluorescent lamp, and the preheating circuit is responsible for attending to the start-up lamp requirements. The usual solution for the preheating circuit used in self-oscillating electronic ballasts is simple and presents a low cost. However, the performance to extend the lamp lifetime is not the most effective. This paper presents an effective preheating circuit for self-oscillating electronic ballasts as an alternative to the usual solution.",
"title": ""
},
{
"docid": "10b4d77741d40a410b30b0ba01fae67f",
"text": "While glucosamine supplementation is very common and a multitude of commercial products are available, there is currently limited information available to assist the equine practitioner in deciding when and how to use these products. Low bioavailability of orally administered glucosamine, poor product quality, low recommended doses, and a lack of scientific evidence showing efficacy of popular oral joint supplements are major concerns. Authors’ addresses: Rolling Thunder Veterinary Services, 225 Roxbury Road, Garden City, NY 11530 (Oke); Ontario Veterinary College, Department of Clinical Studies, University of Guelph, Guelph, Ontario, Canada N1G 2W1 (Weese); e-mail: rollingthunder@optonline.net (Oke). © 2006 AAEP.",
"title": ""
},
{
"docid": "58039fbc0550c720c4074c96e866c025",
"text": "We argue that to best comprehend many data sets, plotting judiciously selected sample statistics with associated confidence intervals can usefully supplement, or even replace, standard hypothesis-testing procedures. We note that most social science statistics textbooks limit discussion of confidence intervals to their use in between-subject designs. Our central purpose in this article is to describe how to compute an analogous confidence interval that can be used in within-subject designs. This confidence interval rests on the reasoning that because between-subject variance typically plays no role in statistical analyses of within-subject designs, it can legitimately be ignored; hence, an appropriate confidence interval can be based on the standard within-subject error term-that is, on the variability due to the subject × condition interaction. Computation of such a confidence interval is simple and is embodied in Equation 2 on p. 482 of this article. This confidence interval has two useful properties. First, it is based on the same error term as is the corresponding analysis of variance, and hence leads to comparable conclusions. Second, it is related by a known factor (√2) to a confidence interval of the difference between sample means; accordingly, it can be used to infer the faith one can put in some pattern of sample means as a reflection of the underlying pattern of population means. These two properties correspond to analogous properties of the more widely used between-subject confidence interval.",
"title": ""
},
{
"docid": "91c57b7a9dd2555e92b5ffa1f5a21790",
"text": "This article presents suggestions for nurses to gain skill, competence, and comfort in caring for critically ill patients receiving mechanical ventilatory support, with a specific focus on education strategies and building communication skills with these challenging nonverbal patients. Engaging in evidence-based practice projects at the unit level and participating in or leading research studies are key ways nurses can contribute to improving outcomes for patients receiving mechanical ventilation. Suggestions are offered for evidence-based practice projects and possible research studies to improve outcomes and advance the science in an effort to achieve quality patient-ventilator management in intensive care units.",
"title": ""
},
{
"docid": "0a7673d423c9134fb96bb3bb5b286433",
"text": "In this contribution the development, design, fabrication and test of a highly integrated broadband multifunctional chip is presented. The MMIC covers the C-, X-and Ku- Band and it is suitable for applications in high performance Transmit/Receive Modules. In less than 26 mm2, the MMIC embeds several T/R switches, low noise/medium power amplifiers, a stepped phase shifter and analog/digital attenuators in order to perform the RF signal routing and phase/amplitude conditioning. Besides, an embedded serial-to-parallel converter drives the phase shifter and the digital attenuator leading to a reduction in complexity of the digital control interface.",
"title": ""
},
{
"docid": "655a95191700e24c6dcd49b827de4165",
"text": "With the increasing demand for express delivery, a courier needs to deliver many tasks in one day and it's necessary to deliver punctually as the customers expect. At the same time, they want to schedule the delivery tasks to minimize the total time of a courier's one-day delivery, considering the total travel time. However, most of scheduling researches on express delivery focus on inter-city transportation, and they are not suitable for the express delivery to customers in the “last mile”. To solve the issue above, this paper proposes a personalized service for scheduling express delivery, which not only satisfies all the customers' appointment time but also makes the total time minimized. In this service, personalized and accurate travel time estimation is important to guarantee delivery punctuality when delivering shipments. Therefore, the personalized scheduling service is designed to consist of two basic services: (1) personalized travel time estimation service for any path in express delivery using courier trajectories, (2) an express delivery scheduling service considering multiple factors, including customers' appointments, one-day delivery costs, etc., which is based on the accurate travel time estimation provided by the first service. We evaluate our proposed service based on extensive experiments, using GPS trajectories generated by more than 1000 couriers over a period of two months in Beijing. The results demonstrate the effectiveness and efficiency of our method.",
"title": ""
},
{
"docid": "95f57e37d04b6b3b8c9ce29ebf23d345",
"text": "Finite state machines (FSMs) are the backbone of sequential circuit design. In this paper, a new FSM watermarking scheme is proposed by making the authorship information a non-redundant property of the FSM. To overcome the vulnerability to state removal attack and minimize the design overhead, the watermark bits are seamlessly interwoven into the outputs of the existing and free transitions of state transition graph (STG). Unlike other transition-based STG watermarking, pseudo input variables have been reduced and made functionally indiscernible by the notion of reserved free literal. The assignment of reserved literals is exploited to minimize the overhead of watermarking and make the watermarked FSM fallible upon removal of any pseudo input variable. A direct and convenient detection scheme is also proposed to allow the watermark on the FSM to be publicly detectable. Experimental results on the watermarked circuits from the ISCAS'89 and IWLS'93 benchmark sets show lower or acceptably low overheads with higher tamper resilience and stronger authorship proof in comparison with related watermarking schemes for sequential functions.",
"title": ""
},
{
"docid": "6c11bb11540719ad64e98bb67cd9a798",
"text": "Opium poppy (Papaver somniferum) produces a large number of benzylisoquinoline alkaloids, including the narcotic analgesics morphine and codeine, and has emerged as one of the most versatile model systems to study alkaloid metabolism in plants. As summarized in this review, we have taken a holistic strategy—involving biochemical, cellular, molecular genetic, genomic, and metabolomic approaches—to draft a blueprint of the fundamental biological platforms required for an opium poppy cell to function as an alkaloid factory. The capacity to synthesize and store alkaloids requires the cooperation of three phloem cell types—companion cells, sieve elements, and laticifers—in the plant, but also occurs in dedifferentiated cell cultures. We have assembled an opium poppy expressed sequence tag (EST) database based on the attempted sequencing of more than 30,000 cDNAs from elicitor-treated cell culture, stem, and root libraries. Approximately 23,000 of the elicitor-induced cell culture and stem ESTs are represented on a DNA microarray, which has been used to examine changes in transcript profile in cultured cells in response to elicitor treatment, and in plants with different alkaloid profiles. Fourier transform-ion cyclotron resonance mass spectrometry and proton nuclear magnetic resonance mass spectroscopy are being used to detect corresponding differences in metabolite profiles. Several new genes involved in the biosynthesis and regulation of alkaloid pathways in opium poppy have been identified using genomic tools. A biological blueprint for alkaloid production coupled with the emergence of reliable transformation protocols has created an unprecedented opportunity to alter the chemical profile of the world’s most valuable medicinal plant.",
"title": ""
},
{
"docid": "a0ebefc5137a1973e1d1da2c478de57c",
"text": "This paper presents BOTTA, the first Arabic dialect chatbot. We explore the challenges of creating a conversational agent that aims to simulate friendly conversations using the Egyptian Arabic dialect. We present a number of solutions and describe the different components of the BOTTA chatbot. The BOTTA database files are publicly available for researchers working on Arabic chatbot technologies. The BOTTA chatbot is also publicly available for any users who want to chat with it online.",
"title": ""
},
{
"docid": "f651d8505f354fe0ad8e0866ca64e6e1",
"text": "Building on existing categorical accounts of natural language semantics, we propose a compositional distributional model of ambiguous meaning. Originally inspired by the high-level category theoretic language of quantum information protocols, the compositional, distributional categorical model provides a conceptually motivated procedure to compute the meaning of a sentence, given its grammatical structure and an empirical derivation of the meaning of its parts. Grammar is given a type-logical description in a compact closed category while the meaning of words is represented in a finite inner product space model. Since the category of finite-dimensional Hilbert spaces is also compact closed, the type-checking deduction process lifts to a concrete meaning-vector computation via a strong monoidal functor between the two categories. The advantage of reasoning with these structures is that grammatical composition admits an interpretation in terms of flow of meaning between words. Pushing the analogy with quantum mechanics further, we describe ambiguous words as statistical ensembles of unambiguous concepts and extend the semantics of the previous model to a category that supports probabilistic mixing. We introduce two different Frobenius algebras representing different ways of composing the meaning of words, and discuss their properties. We conclude with a range of applications to the case of definitions, including a meaning update rule that reconciles the meaning of an ambiguous word with that of its definition.",
"title": ""
},
{
"docid": "d5c57af0f7ab41921ddb92a5de31c33a",
"text": "This paper investigates how to blindly evaluate the visual quality of an image by learning rules from linguistic descriptions. Extensive psychological evidence shows that humans prefer to conduct evaluations qualitatively rather than numerically. The qualitative evaluations are then converted into the numerical scores to fairly benchmark objective image quality assessment (IQA) metrics. Recently, lots of learning-based IQA models are proposed by analyzing the mapping from the images to numerical ratings. However, the learnt mapping can hardly be accurate enough because some information has been lost in such an irreversible conversion from the linguistic descriptions to numerical scores. In this paper, we propose a blind IQA model, which learns qualitative evaluations directly and outputs numerical scores for general utilization and fair comparison. Images are represented by natural scene statistics features. A discriminative deep model is trained to classify the features into five grades, corresponding to five explicit mental concepts, i.e., excellent, good, fair, poor, and bad. A newly designed quality pooling is then applied to convert the qualitative labels into scores. The classification framework is not only much more natural than the regression-based models, but also robust to the small sample size problem. Thorough experiments are conducted on popular databases to verify the model's effectiveness, efficiency, and robustness.",
"title": ""
},
{
"docid": "be83224a853fd65808def16ff20e9c02",
"text": "Cascades of information-sharing are a primary mechanism by which content reaches its audience on social media, and an active line of research has studied how such cascades, which form as content is reshared from person to person, develop and subside. In this paper, we perform a large-scale analysis of cascades on Facebook over significantly longer time scales, and find that a more complex picture emerges, in which many large cascades recur, exhibiting multiple bursts of popularity with periods of quiescence in between. We characterize recurrence by measuring the time elapsed between bursts, their overlap and proximity in the social network, and the diversity in the demographics of individuals participating in each peak. We discover that content virality, as revealed by its initial popularity, is a main driver of recurrence, with the availability of multiple copies of that content helping to spark new bursts. Still, beyond a certain popularity of content, the rate of recurrence drops as cascades start exhausting the population of interested individuals. We reproduce these observed patterns in a simple model of content recurrence simulated on a real social network. Using only characteristics of a cascade’s initial burst, we demonstrate strong performance in predicting whether it will recur in the future.",
"title": ""
},
{
"docid": "5b50e84437dc27f5b38b53d8613ae2c7",
"text": "We present a practical vision-based robotic bin-picking sy stem that performs detection and 3D pose estimation of objects in an unstr ctu ed bin using a novel camera design, picks up parts from the bin, and p erforms error detection and pose correction while the part is in the gri pper. Two main innovations enable our system to achieve real-time robust a nd accurate operation. First, we use a multi-flash camera that extracts rob ust depth edges. Second, we introduce an efficient shape-matching algorithm called fast directional chamfer matching (FDCM), which is used to reliabl y detect objects and estimate their poses. FDCM improves the accuracy of cham fer atching by including edge orientation. It also achieves massive improvements in matching speed using line-segment approximations of edges , a 3D distance transform, and directional integral images. We empiricall y show that these speedups, combined with the use of bounds in the spatial and h ypothesis domains, give the algorithm sublinear computational compl exity. We also apply our FDCM method to other applications in the context of deformable and articulated shape matching. In addition to significantl y improving upon the accuracy of previous chamfer matching methods in all of t he evaluated applications, FDCM is up to two orders of magnitude faster th an the previous methods.",
"title": ""
},
{
"docid": "e099186ceed71e03276ab168ecf79de7",
"text": "Twelve patients with deafferentation pain secondary to central nervous system lesions were subjected to chronic motor cortex stimulation. The motor cortex was mapped as carefully as possible and the electrode was placed in the region where muscle twitch of painful area can be observed with the lowest threshold. 5 of the 12 patients reported complete absence of previous pain with intermittent stimulation at 1 year following the initiation of this therapy. Improvements in hemiparesis was also observed in most of these patients. The pain of these patients was typically barbiturate-sensitive and morphine-resistant. Another 3 patients had some degree of residual pain but considerable reduction of pain was still obtained by stimulation. Thus, 8 of the 12 patients (67%) had continued effect of this therapy after 1 year. In 3 patients, revisions of the electrode placement were needed because stimulation became incapable of inducing muscle twitch even with higher stimulation intensity. The effect of stimulation on pain and capability of producing muscle twitch disappeared simultaneously in these cases and the effect reappeared after the revisions, indicating that appropriate stimulation of the motor cortex is definitely necessary for obtaining satisfactory pain control in these patients. None of the patients subjected to this therapy developed neither observable nor electroencephalographic seizure activity.",
"title": ""
},
{
"docid": "39b7ab83a6a0d75b1ec28c5ff485b98d",
"text": "Video object segmentation is a fundamental step in many advanced vision applications. Most existing algorithms are based on handcrafted features such as HOG, super-pixel segmentation or texturebased techniques, while recently deep features have been found to be more efficient. Existing algorithms observe performance degradation in the presence of challenges such as illumination variations, shadows, and color camouflage. To handle these challenges we propose a fusion based moving object segmentation algorithm which exploits color as well as depth information using GAN to achieve more accuracy. Our goal is to segment moving objects in the presence of challenging background scenes, in real environments. To address this problem, GAN is trained in an unsupervised manner on color and depth information independently with challenging video sequences. During testing, the trained GAN generates backgrounds similar to that in the test sample. The generated background samples are then compared with the test sample to segment moving objects. The final result is computed by fusion of object boundaries in both modalities, RGB and the depth. The comparison of our proposed algorithm with five state-of-the-art methods on publicly available dataset has shown the strength of our algorithm for moving object segmentation in videos in the presence of challenging real scenarios.",
"title": ""
},
{
"docid": "bfd57465a5d6f85fb55ffe13ef79f3a5",
"text": "We investigate the utility of different auxiliary objectives and training strategies within a neural sequence labeling approach to error detection in learner writing. Auxiliary costs provide the model with additional linguistic information, allowing it to learn general-purpose compositional features that can then be exploited for other objectives. Our experiments show that a joint learning approach trained with parallel labels on in-domain data improves performance over the previous best error detection system. While the resulting model has the same number of parameters, the additional objectives allow it to be optimised more efficiently and achieve better performance.",
"title": ""
},
{
"docid": "31756ac6aaa46df16337dbc270831809",
"text": "Broadly speaking, the goal of neuromorphic engineering is to build computer systems that mimic the brain. Spiking Neural Network (SNN) is a type of biologically-inspired neural networks that perform information processing based on discrete-time spikes, different from traditional Artificial Neural Network (ANN). Hardware implementation of SNNs is necessary for achieving high-performance and low-power. We present the Darwin Neural Processing Unit (NPU), a neuromorphic hardware co-processor based on SNN implemented with digitallogic, supporting a maximum of 2048 neurons, 20482 = 4194304 synapses, and 15 possible synaptic delays. The Darwin NPU was fabricated by standard 180 nm CMOS technology with an area size of 5 ×5 mm2 and 70 MHz clock frequency at the worst case. It consumes 0.84 mW/MHz with 1.8 V power supply for typical applications. Two prototype applications are used to demonstrate the performance and efficiency of the hardware implementation. 脉冲神经网络(SNN)是一种基于离散神经脉冲进行信息处理的人工神经网络。本文提出的“达尔文”芯片是一款基于SNN的类脑硬件协处理器。它支持神经网络拓扑结构,神经元与突触各种参数的灵活配置,最多可支持2048个神经元,四百万个神经突触及15个不同的突触延迟。该芯片采用180纳米CMOS工艺制造,面积为5x5平方毫米,最坏工作频率达到70MHz,1.8V供电下典型应用功耗为0.84mW/MHz。基于该芯片实现了两个应用案例,包括手写数字识别和运动想象脑电信号分类。",
"title": ""
},
{
"docid": "073eb81bbd654b90e6a7ffce608f8ea2",
"text": "OBJECTIVE\nTo examine factors associated with variation in the risk for type 2 diabetes in women with prior gestational diabetes mellitus (GDM).\n\n\nRESEARCH DESIGN AND METHODS\nWe conducted a systematic literature review of articles published between January 1965 and August 2001, in which subjects underwent testing for GDM and then testing for type 2 diabetes after delivery. We abstracted diagnostic criteria for GDM and type 2 diabetes, cumulative incidence of type 2 diabetes, and factors that predicted incidence of type 2 diabetes.\n\n\nRESULTS\nA total of 28 studies were examined. After the index pregnancy, the cumulative incidence of diabetes ranged from 2.6% to over 70% in studies that examined women 6 weeks postpartum to 28 years postpartum. Differences in rates of progression between ethnic groups was reduced by adjustment for various lengths of follow-up and testing rates, so that women appeared to progress to type 2 diabetes at similar rates after a diagnosis of GDM. Cumulative incidence of type 2 diabetes increased markedly in the first 5 years after delivery and appeared to plateau after 10 years. An elevated fasting glucose level during pregnancy was the risk factor most commonly associated with future risk of type 2 diabetes.\n\n\nCONCLUSIONS\nConversion of GDM to type 2 diabetes varies with the length of follow-up and cohort retention. Adjustment for these differences reveals rapid increases in the cumulative incidence occurring in the first 5 years after delivery for different racial groups. Targeting women with elevated fasting glucose levels during pregnancy may prove to have the greatest effect for the effort required.",
"title": ""
},
{
"docid": "1ebb46b4c9e32423417287ab26cae14b",
"text": "Two field studies explored the relationship between self-awareness and transgressive behavior. In the first study, 363 Halloween trick-or-treaters were instructed to only take one candy. Self-awareness induced by the presence of a mirror placed behind the candy bowl decreased transgression rates for children who had been individuated by asking them their name and address, but did not affect the behavior of children left anonymous. Self-awareness influenced older but not younger children. Naturally occurring standards instituted by the behavior of the first child to approach the candy bowl in each group were shown to interact with the experimenter's verbally stated standard. The behavior of 349 subjects in the second study replicated the findings in the first study. Additionally, when no standard was stated by the experimenter, children took more candy when not self-aware than when self-aware.",
"title": ""
}
] | scidocsrr |
80b19612fbeafc0b6aa6df7c466c8d11 | Relative Camera Pose Estimation Using Convolutional Neural Networks | [
{
"docid": "4d7cbe7f5e854028277f0120085b8977",
"text": "In this paper we formulate structure from motion as a learning problem. We train a convolutional network end-to-end to compute depth and camera motion from successive, unconstrained image pairs. The architecture is composed of multiple stacked encoder-decoder networks, the core part being an iterative network that is able to improve its own predictions. The network estimates not only depth and motion, but additionally surface normals, optical flow between the images and confidence of the matching. A crucial component of the approach is a training loss based on spatial relative differences. Compared to traditional two-frame structure from motion methods, results are more accurate and more robust. In contrast to the popular depth-from-single-image networks, DeMoN learns the concept of matching and, thus, better generalizes to structures not seen during training.",
"title": ""
}
] | [
{
"docid": "a4d7596cfcd4a9133c5677a481c88cf0",
"text": "The understanding of where humans look in a scene is a problem of great interest in visual perception and computer vision. When eye-tracking devices are not a viable option, models of human attention can be used to predict fixations. In this paper we give two contribution. First, we show a model of visual attention that is simply based on deep convolutional neural networks trained for object classification tasks. A method for visualizing saliency maps is defined which is evaluated in a saliency prediction task. Second, we integrate the information of these maps with a bottom-up differential model of eye-movements to simulate visual attention scanpaths. Results on saliency prediction and scores of similarity with human scanpaths demonstrate the effectiveness of this model.",
"title": ""
},
{
"docid": "ce63aad5288d118eb6ca9d99b96e9cac",
"text": "Unknown malware has increased dramatically, but the existing security software cannot identify them effectively. In this paper, we propose a new malware detection and classification method based on n-grams attribute similarity. We extract all n-grams of byte codes from training samples and select the most relevant as attributes. After calculating the average value of attributes in malware and benign separately, we determine a test sample is malware or benign by attribute similarity between attributes of the test sample and the two average attributes of malware and benign. We compare our method with a variety of machine learning methods, including Naïve Bayes, Bayesian Networks, Support Vector Machine and C4.5 Decision Tree. Experimental results on public (Open Malware Benchmark) and private (self-collected) datasets both reveal that our method outperforms the other four methods.",
"title": ""
},
{
"docid": "33b8012ae66f07c9de158f4c514c4e99",
"text": "Many mathematicians have a dismissive attitude towards paradoxes. This is unfortunate, because many paradoxes are rich in content, having connections with serious mathematical ideas as well as having pedagogical value in teaching elementary logical reasoning. An excellent example is the so-called “surprise examination paradox” (described below), which is an argument that seems at first to be too silly to deserve much attention. However, it has inspired an amazing variety of philosophical and mathematical investigations that have in turn uncovered links to Gödel’s incompleteness theorems, game theory, and several other logical paradoxes (e.g., the liar paradox and the sorites paradox). Unfortunately, most mathematicians are unaware of this because most of the literature has been published in philosophy journals.",
"title": ""
},
{
"docid": "91f20c48f5a4329260aadb87a0d8024c",
"text": "In this paper, we survey key design for manufacturing issues for extreme scaling with emerging nanolithography technologies, including double/multiple patterning lithography, extreme ultraviolet lithography, and electron-beam lithography. These nanolithography and nanopatterning technologies have different manufacturing processes and their unique challenges to very large scale integration (VLSI) physical design, mask synthesis, and so on. It is essential to have close VLSI design and underlying process technology co-optimization to achieve high product quality (power/performance, etc.) and yield while making future scaling cost-effective and worthwhile. Recent results and examples will be discussed to show the enablement and effectiveness of such design and process integration, including lithography model/analysis, mask synthesis, and lithography friendly physical design.",
"title": ""
},
{
"docid": "c3dd3dd59afe491fcc6b4cd1e32c88a3",
"text": "The Semantic Web drives towards the use of the Web for interacting with logically interconnected data. Through knowledge models such as Resource Description Framework (RDF), the Semantic Web provides a unifying representation of richly structured data. Adding logic to the Web implies the use of rules to make inferences, choose courses of action, and answer questions. This logic must be powerful enough to describe complex properties of objects but not so powerful that agents can be tricked by being asked to consider a paradox. The Web has several characteristics that can lead to problems when existing logics are used, in particular, the inconsistencies that inevitably arise due to the openness of the Web, where anyone can assert anything. N3Logic is a logic that allows rules to be expressed in a Web environment. It extends RDF with syntax for nested graphs and quantified variables and with predicates for implication and accessing resources on the Web, and functions including cryptographic, string, math. The main goal of N3Logic is to be a minimal extension to the RDF data model such that the same language can be used for logic and data. In this paper, we describe N3Logic and illustrate through examples why it is an appropriate logic for the Web.",
"title": ""
},
{
"docid": "46ab85859bd3966b243db79696a236f0",
"text": "The general purpose optimization method known as Particle Swarm Optimization (PSO) has received much attention in past years, with many attempts to find the variant that performs best on a wide variety of optimization problems. The focus of past research has been with making the PSO method more complex, as this is frequently believed to increase its adaptability to other optimization problems. This study takes the opposite approach and simplifies the PSO method. To compare the efficacy of the original PSO and the simplified variant here, an easy technique is presented for efficiently tuning their behavioural parameters. The technique works by employing an overlaid meta-optimizer, which is capable of simultaneously tuning parameters with regard to multiple optimization problems, whereas previous approaches to meta-optimization have tuned behavioural parameters to work well on just a single optimization problem. It is then found that the PSO method and its simplified variant not only have comparable performance for optimizing a number of Artificial Neural Network problems, but the simplified variant appears to offer a small improvement in some cases.",
"title": ""
},
{
"docid": "466bb7b70fc1c5973fbea3ade7ebd845",
"text": "High-speed and heavy-load stacking robot technology is a common key technique in nonferrous metallurgy areas. Specific layer stacking robot of aluminum ingot continuous casting production line, which has four-DOF, is designed in this paper. The kinematics model is built and studied in detail by D-H method. The transformation matrix method is utilized to solve the kinematics equation of robot. Mutual motion relations between each joint variables and the executive device of robot is got. The kinematics simulation of the robot is carried out via the ADAMS-software. The results of simulation verify the theoretical analysis and lay the foundation for following static and dynamic characteristics analysis of the robot.",
"title": ""
},
{
"docid": "ac0b562db18fac38663b210f599c2deb",
"text": "This paper proposes a fast and stable image-based modeling method which generates 3D models with high-quality face textures in a semi-automatic way. The modeler guides untrained users to quickly obtain 3D model data via several steps of simple user interface operations using predefined 3D primitives. The proposed method contains an iterative non-linear error minimization technique in the model estimation step with an error function based on finite line segments instead of infinite lines. The error corresponds to the difference between the observed structure and the predicted structure from current model parameters. Experimental results on real images validate the robustness and the accuracy of the algorithm. 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "77cea98467305b9b3b11de8d3cec6ec2",
"text": "NoSQL and especially graph databases are constantly gaining popularity among developers of Web 2.0 applications as they promise to deliver superior performance when handling highly interconnected data compared to traditional relational databases. Apache Shindig is the reference implementation for OpenSocial with its highly interconnected data model. However, the default back-end is based on a relational database. In this paper we describe our experiences with a different back-end based on the graph database Neo4j and compare the alternatives for querying data with each other and the JPA-based sample back-end running on MySQL. Moreover, we analyze why the different approaches often may yield such diverging results concerning throughput. The results show that the graph-based back-end can match and even outperform the traditional JPA implementation and that Cypher is a promising candidate for a standard graph query language, but still leaves room for improvements.",
"title": ""
},
{
"docid": "e78d53a2790ac3b6011910f82cefaff9",
"text": "A two-dimensional crystal of molybdenum disulfide (MoS2) monolayer is a photoluminescent direct gap semiconductor in striking contrast to its bulk counterpart. Exfoliation of bulk MoS2 via Li intercalation is an attractive route to large-scale synthesis of monolayer crystals. However, this method results in loss of pristine semiconducting properties of MoS2 due to structural changes that occur during Li intercalation. Here, we report structural and electronic properties of chemically exfoliated MoS2. The metastable metallic phase that emerges from Li intercalation was found to dominate the properties of as-exfoliated material, but mild annealing leads to gradual restoration of the semiconducting phase. Above an annealing temperature of 300 °C, chemically exfoliated MoS2 exhibit prominent band gap photoluminescence, similar to mechanically exfoliated monolayers, indicating that their semiconducting properties are largely restored.",
"title": ""
},
{
"docid": "7e6b6f8bab3172457473d158960688a7",
"text": "BACKGROUND\nCancer is a leading cause of death worldwide. Given the complexity of caring work, recent studies have focused on the professional quality of life of oncology nurses. China, the world's largest developing country, faces heavy burdens of care for cancer patients. Chinese oncology nurses may be encountering the negative side of their professional life. However, studies in this field are scarce, and little is known about the prevalence and predictors of oncology nurses' professional quality of life.\n\n\nOBJECTIVES\nTo describe and explore the prevalence of predictors of professional quality of life (compassion fatigue, burnout and compassion satisfaction) among Chinese oncology nurses under the guidance of two theoretical models.\n\n\nDESIGN\nA cross-sectional design with a survey.\n\n\nSETTINGS\nTen tertiary hospitals and five secondary hospitals in Shanghai, China.\n\n\nPARTICIPANTS\nA convenience and cluster sample of 669 oncology nurses was used. All of the nurses worked in oncology departments and had over 1 year of oncology nursing experience. Of the selected nurses, 650 returned valid questionnaires that were used for statistical analyses.\n\n\nMETHODS\nThe participants completed the demographic and work-related questionnaire, the Chinese version of the Professional Quality of Life Scale for Nurses, the Chinese version of the Jefferson Scales of Empathy, the Simplified Coping Style Questionnaire, the Perceived Social Support Scale, and the Chinese Big Five Personality Inventory brief version. Descriptive statistics, t-tests, one-way analysis of variance, simple and multiple linear regressions were used to determine the predictors of the main research variables.\n\n\nRESULTS\nHigher compassion fatigue and burnout were found among oncology nurses who had more years of nursing experience, worked in secondary hospitals and adopted passive coping styles. Cognitive empathy, training and support from organizations were identified as significant protectors, and 'perspective taking' was the strongest predictor of compassion satisfaction, explaining 23.0% of the variance. Personality traits of openness and conscientiousness were positively associated with compassion satisfaction, while neuroticism was a negative predictor, accounting for 24.2% and 19.8% of the variance in compassion fatigue and burnout, respectively.\n\n\nCONCLUSIONS\nOncology care has unique features, and oncology nurses may suffer from more work-related stressors compared with other types of nurses. Various predictors can influence the professional quality of life, and some of these should be considered in the Chinese nursing context. The results may provide clues to help nurse administrators identify oncology nurses' vulnerability to compassion fatigue and burnout and develop comprehensive strategies to improve their professional quality of life.",
"title": ""
},
{
"docid": "a2fa1d74fcaa6891e1a43dca706015b0",
"text": "Smart meters have been deployed worldwide in recent years that enable real-time communications and networking capabilities in power distribution systems. Problematically, recent reports have revealed incidents of energy theft in which dishonest customers would lower their electricity bills (aka stealing electricity) by tampering with their meters. The physical attack can be extended to a network attack by means of false data injection (FDI). This paper is thus motivated to investigate the currently-studied FDI attack by introducing the combination sum of energy profiles (CONSUMER) attack in a coordinated manner on a number of customers' smart meters, which results in a lower energy consumption reading for the attacker and a higher reading for the others in a neighborhood. We propose a CONSUMER attack model that is formulated into one type of coin change problems, which minimizes the number of compromised meters subject to the equality of an aggregated load to evade detection. A hybrid detection framework is developed to detect anomalous and malicious activities by incorporating our proposed grid sensor placement algorithm with observability analysis to increase the detection rate. Our simulations have shown that the network observability and detection accuracy can be improved by means of grid-placed sensor deployment.",
"title": ""
},
{
"docid": "3e805d6724dc400d681b3b42393d5ebe",
"text": "This paper introduces a framework for conducting and writing an effective literature review. The target audience for the framework includes information systems (IS) doctoral students, novice IS researchers, and other IS researchers who are constantly struggling with the development of an effective literature-based foundation for a proposed research. The proposed framework follows the systematic data processing approach comprised of three major stages: 1) inputs (literature gathering and screening), 2) processing (following Bloom’s Taxonomy), and 3) outputs (writing the literature review). This paper provides the rationale for developing a solid literature review including detailed instructions on how to conduct each stage of the process proposed. The paper concludes by providing arguments for the value of an effective literature review to IS research.",
"title": ""
},
{
"docid": "1d9361cffd8240f3b691c887def8e2f5",
"text": "Twenty seven essential oils, isolated from plants representing 11 families of Portuguese flora, were screened for their nematicidal activity against the pinewood nematode (PWN), Bursaphelenchus xylophilus. The essential oils were isolated by hydrodistillation and the volatiles by distillation-extraction, and both were analysed by GC and GC-MS. High nematicidal activity was achieved with essential oils from Chamaespartium tridentatum, Origanum vulgare, Satureja montana, Thymbra capitata, and Thymus caespititius. All of these essential oils had an estimated minimum inhibitory concentration ranging between 0.097 and 0.374 mg/ml and a lethal concentration necessary to kill 100% of the population (LC(100)) between 0.858 and 1.984 mg/ml. Good nematicidal activity was also obtained with the essential oil from Cymbopogon citratus. The dominant components of the effective oils were 1-octen-3-ol (9%), n-nonanal, and linalool (both 7%) in C. tridentatum, geranial (43%), neral (29%), and β-myrcene (25%) in C. citratus, carvacrol (36% and 39%), γ-terpinene (24% and 40%), and p-cymene (14% and 7%) in O. vulgare and S. montana, respectively, and carvacrol (75% and 65%, respectively) in T. capitata and T. caespititius. The other essential oils obtained from Portuguese flora yielded weak or no activity. Five essential oils with nematicidal activity against PWN are reported for the first time.",
"title": ""
},
{
"docid": "5f0157139bff33057625686b7081a0c8",
"text": "A novel MIC/MMIC compatible microstrip to waveguide transition for X band is presented. The transition has realized on novel low cost substrate and its main features are: wideband operation, low insertion loss and feeding without a balun directly by the microstrip line.",
"title": ""
},
{
"docid": "c85a26f1bccf3b28ca6a46c5312040e7",
"text": "This paper describes a novel compact design of a planar circularly polarized (CP) tag antenna for use in a ultrahigh frequency (UHF) radio frequency identification (RFID) system. Introducing the meander strip into the right-arm of the square-ring structure enables the measured half-power bandwidth of the proposed CP tag antenna to exceed 100 MHz (860–960 MHz), which includes the entire operating bandwidth of the global UHF RFID system. A 3-dB axial-ratio bandwidth of approximately 36 MHz (902–938 MHz) can be obtained, which is suitable for American (902–928 MHz), European (918–926 MHz), and Taiwanese UHF RFID (922–928 MHz) applications. Since the overall antenna dimensions are only <inline-formula> <tex-math notation=\"LaTeX\">$54\\times54$ </tex-math></inline-formula> mm<sup>2</sup>, the proposed tag antenna can be operated with a size that is 64% smaller than that of the tag antennas attached on the safety glass. With a bidirectional reading pattern, the measured reading distance is about 8.3 m. Favorable tag sensitivity is obtained across the desired frequency band.",
"title": ""
},
{
"docid": "efc341c0a3deb6604708b6db361bfba5",
"text": "In recent years, data analysis has become important with increasing data volume. Clustering, which groups objects according to their similarity, has an important role in data analysis. DBSCAN is one of the most effective and popular density-based clustering algorithm and has been successfully implemented in many areas. However, it is a challenging task to determine the input parameter values of DBSCAN algorithm which are neighborhood radius Eps and minimum number of points MinPts. The values of these parameters significantly affect clustering performance of the algorithm. In this study, we propose AE-DBSCAN algorithm which includes a new method to determine the value of neighborhood radius Eps automatically. The experimental evaluations showed that the proposed method outperformed the classical method.",
"title": ""
},
{
"docid": "ceb66016a57a936d33675756ee2e7eed",
"text": "Detecting small vehicles in aerial images is a difficult job that can be challenging even for humans. Rotating objects, low resolution, small inter-class variability and very large images comprising complicated backgrounds render the work of photo-interpreters tedious and wearisome. Unfortunately even the best classical detection pipelines like Ren et al. [2015] cannot be used off-the-shelf with good results because they were built to process object centric images from day-to-day life with multi-scale vertical objects. In this work we build on the Faster R-CNN approach to turn it into a detection framework that deals appropriately with the rotation equivariance inherent to any aerial image task. This new pipeline (Faster Rotation Equivariant Regions CNN) gives, without any bells and whistles, state-of-the-art results on one of the most challenging aerial imagery datasets: VeDAI Razakarivony and Jurie [2015] and give good results w.r.t. the baseline Faster R-CNN on two others: Munich Leitloff et al. [2014] and GoogleEarth Heitz and Koller [2008].",
"title": ""
},
{
"docid": "b1b6e670f21479956d2bbe281c6ff556",
"text": "Near real-time data from the MODIS satellite sensor was used to detect and trace a harmful algal bloom (HAB), or red tide, in SW Florida coastal waters from October to December 2004. MODIS fluorescence line height (FLH in W m 2 Am 1 sr ) data showed the highest correlation with near-concurrent in situ chlorophyll-a concentration (Chl in mg m ). For Chl ranging between 0.4 to 4 mg m 3 the ratio between MODIS FLH and in situ Chl is about 0.1 W m 2 Am 1 sr 1 per mg m 3 chlorophyll (Chl=1.255 (FLH 10), r =0.92, n =77). In contrast, the band-ratio chlorophyll product of either MODIS or SeaWiFS in this complex coastal environment provided false information. Errors in the satellite Chl data can be both negative and positive (3–15 times higher than in situ Chl) and these data are often inconsistent either spatially or temporally, due to interferences of other water constituents. The red tide that formed from November to December 2004 off SW Florida was revealed by MODIS FLH imagery, and was confirmed by field sampling to contain medium (10 to 10 cells L ) to high (>10 cells L ) concentrations of the toxic dinoflagellate Karenia brevis. The FLH imagery also showed that the bloom started in midOctober south of Charlotte Harbor, and that it developed and moved to the south and southwest in the subsequent weeks. Despite some artifacts in the data and uncertainty caused by factors such as unknown fluorescence efficiency, our results show that the MODIS FLH data provide an unprecedented tool for research and managers to study and monitor algal blooms in coastal environments. D 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "182c83e136dcc7f41c2d7a7a30321440",
"text": "Behavioral logs are traces of human behavior seen through the lenses of sensors that capture and record user activity. They include behavior ranging from low-level keystrokes to rich audio and video recordings. Traces of behavior have been gathered in psychology studies since the 1930s (Skinner, 1938 ), and with the advent of computerbased applications it became common practice to capture a variety of interaction behaviors and save them to log fi les for later analysis. In recent years, the rise of centralized, web-based computing has made it possible to capture human interactions with web services on a scale previously unimaginable. Largescale log data has enabled HCI researchers to observe how information diffuses through social networks in near real-time during crisis situations (Starbird & Palen, 2010 ), characterize how people revisit web pages over time (Adar, Teevan, & Dumais, 2008 ), and compare how different interfaces for supporting email organization infl uence initial uptake and sustained use (Dumais, Cutrell, Cadiz, Jancke, Sarin, & Robbins, 2003 ; Rodden & Leggett, 2010 ). In this chapter we provide an overview of behavioral log use in HCI. We highlight what can be learned from logs that capture people’s interactions with existing computer systems and from experiments that compare new, alternative systems. We describe how to design and analyze web experiments, and how to collect, clean and use log data responsibly. The goal of this chapter is to enable the reader to design log studies and to understand results from log studies that they read about. Understanding User Behavior Through Log Data and Analysis",
"title": ""
}
] | scidocsrr |
ce302b49c125828cb906ffec23da62d1 | The critical hitch angle for jackknife avoidance during slow backing up of vehicle – trailer systems | [
{
"docid": "0a793374864ce2a8a723423a4f74759b",
"text": "Trailer reversing is a problem frequently considered in the literature, usually with fairly complex non-linear control theory based approaches. In this paper, we present a simple method for stabilizing a tractor-trailer system to a trajectory based on the notion of controlling the hitch-angle of the trailer rather than the steering angle of the tractor. The method is intuitive, provably stable, and shown to be viable through various experimental results conducted on our test platform, the CSIRO autonomous tractor.",
"title": ""
}
] | [
{
"docid": "80ac2373b3a01ab0f1f2665f0e070aa4",
"text": "This paper presents an overview of the state of the art control strategies specifically designed to coordinate distributed energy storage (ES) systems in microgrids. Power networks are undergoing a transition from the traditional model of centralised generation towards a smart decentralised network of renewable sources and ES systems, organised into autonomous microgrids. ES systems can provide a range of services, particularly when distributed throughout the power network. The introduction of distributed ES represents a fundamental change for power networks, increasing the network control problem dimensionality and adding long time-scale dynamics associated with the storage systems’ state of charge levels. Managing microgrids with many small distributed ES systems requires new scalable control strategies that are robust to power network and communication network disturbances. This paper reviews the range of services distributed ES systems can provide, and the control challenges they introduce. The focus of this paper is a presentation of the latest decentralised, centralised and distributed multi-agent control strategies designed to coordinate distributed microgrid ES systems. Finally, multi-agent control with agents satisfying Wooldridge’s definition of intelligence is proposed as a promising direction for future research.",
"title": ""
},
{
"docid": "37e65ab2fc4d0a9ed5b8802f41a1a2a2",
"text": "This paper is based on a panel discussion held at the Artificial Intelligence in Medicine Europe (AIME) conference in Amsterdam, The Netherlands, in July 2007. It had been more than 15 years since Edward Shortliffe gave a talk at AIME in which he characterized artificial intelligence (AI) in medicine as being in its \"adolescence\" (Shortliffe EH. The adolescence of AI in medicine: will the field come of age in the '90s? Artificial Intelligence in Medicine 1993;5:93-106). In this article, the discussants reflect on medical AI research during the subsequent years and characterize the maturity and influence that has been achieved to date. Participants focus on their personal areas of expertise, ranging from clinical decision-making, reasoning under uncertainty, and knowledge representation to systems integration, translational bioinformatics, and cognitive issues in both the modeling of expertise and the creation of acceptable systems.",
"title": ""
},
{
"docid": "34461f38c51a270e2f3b0d8703474dfc",
"text": "Software vulnerabilities are the root cause of computer security problem. How people can quickly discover vulnerabilities existing in a certain software has always been the focus of information security field. This paper has done research on software vulnerability techniques, including static analysis, Fuzzing, penetration testing. Besides, the authors also take vulnerability discovery models as an example of software vulnerability analysis methods which go hand in hand with vulnerability discovery techniques. The ending part of the paper analyses the advantages and disadvantages of each technique introduced here and talks about the future direction of this field.",
"title": ""
},
{
"docid": "26a599c22c173f061b5d9579f90fd888",
"text": "markov logic an interface layer for artificial markov logic an interface layer for artificial shinichi tsukada in size 22 syyjdjbook.buncivy yumina ooba in size 24 ajfy7sbook.ztoroy okimi in size 15 edemembookkey.16mb markov logic an interface layer for artificial intelligent systems (ai-2) ubc computer science interface layer for artificial intelligence daniel lowd essential principles for autonomous robotics markovlogic: aninterfacelayerfor arti?cialintelligence official encyclopaedia of sheffield united football club hot car hot car firext answers || 2007 acura tsx hitch manual course syllabus university of texas at dallas jump frog jump cafebr 1994 chevy silverado 1500 engine ekpbs readings in earth science alongs johnson owners manual pdf firext thomas rescues the diesels cafebr dead sea scrolls and the jewish origins of christianity install gimp help manual by iitsuka asao vox diccionario abreviado english spanis mdmtv nobutaka in size 26 bc13xqbookog.xxuz mechanisms in b cell neoplasia 1992 workshop at the spocks world diane duane nabbit treasury of saints fiores reasoning with probabilistic university of texas at austin gp1300r yamaha waverunner service manua by takisawa tomohide repair manual haier hpr10xc6 air conditioner birdz mexico icons mexico icons oobags asus z53 manual by hatsutori yoshino industrial level measurement by haruyuki morimoto",
"title": ""
},
{
"docid": "56245b600dd082439d2b1b2a2452a6b7",
"text": "The electric drive systems used in many industrial applications require higher performance, reliability, variable speed due to its ease of controllability. The speed control of DC motor is very crucial in applications where precision and protection are of essence. Purpose of a motor speed controller is to take a signal representing the required speed and to drive a motor at that speed. Microcontrollers can provide easy control of DC motor. Microcontroller based speed control system consist of electronic component, microcontroller and the LCD. In this paper, implementation of the ATmega8L microcontroller for speed control of DC motor fed by a DC chopper has been investigated. The chopper is driven by a high frequency PWM signal. Controlling the PWM duty cycle is equivalent to controlling the motor terminal voltage, which in turn adjusts directly the motor speed. This work is a practical one and high feasibility according to economic point of view and accuracy. In this work, development of hardware and software of the close loop dc motor speed control system have been explained and illustrated. The desired objective is to achieve a system with the constant speed at any load condition. That means motor will run at a fixed speed instead of varying with amount of load. KeywordsDC motor, Speed control, Microcontroller, ATmega8, PWM.",
"title": ""
},
{
"docid": "e08bc715d679ba0442883b4b0e481998",
"text": "Rheology, as a branch of physics, studies the deformation and flow of matter in response to an applied stress or strain. According to the materials’ behaviour, they can be classified as Newtonian or non-Newtonian (Steffe, 1996; Schramm, 2004). The most of the foodstuffs exhibit properties of non-Newtonian viscoelastic systems (Abang Zaidel et al., 2010). Among them, the dough can be considered as the most unique system from the point of material science. It is viscoelastic system which exhibits shear-thinning and thixotropic behaviour (Weipert, 1990). This behaviour is the consequence of dough complex structure in which starch granules (75-80%) are surrounded by three-dimensional protein (20-25%) network (Bloksma, 1990, as cited in Weipert, 2006). Wheat proteins are consisted of gluten proteins (80-85% of total wheat protein) which comprise of prolamins (in wheat gliadins) and glutelins (in wheat glutenins) and non gluten proteins (15-20% of the total wheat proteins) such as albumins and globulins (Veraverbeke & Delcour, 2002). Gluten complex is a viscoelastic protein responsible for dough structure formation. Among the cereal technologists, rheology is widely recognized as a valuable tool in quality assessment of flour. Hence, in the cereal scientific community, rheological measurements are generally employed throughout the whole processing chain in order to monitor the mechanical properties, molecular structure and composition of the material, to imitate materials’ behaviour during processing and to anticipate the quality of the final product (Dobraszczyk & Morgenstern, 2003). Rheology is particularly important technique in revealing the influence of flour constituents and additives on dough behaviour during breadmaking. There are many test methods available to measure rheological properties, which are commonly divided into empirical (descriptive, imitative) and fundamental (basic) (Scott Blair, 1958 as cited in Weipert, 1990). Although being criticized due to their shortcomings concerning inflexibility in defining the level of deforming force, usage of strong deformation forces, interpretation of results in relative non-SI units, large sample requirements and its impossibility to define rheological parameters such as stress, strain, modulus or viscosity (Weipert, 1990; Dobraszczyk & Morgenstern, 2003), empirical rheological measurements are still indispensable in the cereal quality laboratories. According to the empirical rheological parameters it is possible to determine the optimal flour quality for a particular purpose. The empirical techniques used for dough quality",
"title": ""
},
{
"docid": "a937f479b462758a089ed23cfa5a0099",
"text": "The paper outlines the development of a large vocabulary continuous speech recognition (LVCSR) system for the Indonesian language within the Asian speech translation (A-STAR) project. An overview of the A-STAR project and Indonesian language characteristics will be briefly described. We then focus on a discussion of the development of Indonesian LVCSR, including data resources issues, acoustic modeling, language modeling, the lexicon, and accuracy of recognition. There are three types of Indonesian data resources: daily news, telephone application, and BTEC tasks, which are used in this project. They are available in both text and speech forms. The Indonesian speech recognition engine was trained using the clean speech of both daily news and telephone application tasks. The optimum performance achieved on the BTEC task was 92.47% word accuracy. 1 A-STAR Project Overview The A-STAR project is an Asian consortium that is expected to advance the state-of-the-art in multilingual man-machine interfaces in the Asian region. This basic infrastructure will accelerate the development of large-scale spoken language corpora in Asia and also facilitate the development of related fundamental information communication technologies (ICT), such as multi-lingual speech translation, Figure 1: Outline of future speech-technology services connecting each area in the Asian region through network. multi-lingual speech transcription, and multi-lingual information retrieval. These fundamental technologies can be applied to the human-machine interfaces of various telecommunication devices and services connecting Asian countries through the network using standardized communication protocols as outlined in Fig. 1. They are expected to create digital opportunities, improve our digital capabilities, and eliminate the digital divide resulting from the differences in ICT levels in each area. The improvements to borderless communication in the Asian region are expected to result in many benefits in everyday life including tourism, business, education, and social security. The project was coordinated together by the Advanced Telecommunication Research (ATR) and the National Institute of Information and Communications Technology (NICT) Japan in cooperation with several research institutes in Asia, such as the National Laboratory of Pattern Recognition (NLPR) in China, the Electronics and Telecommunication Research Institute (ETRI) in Korea, the Agency for the Assessment and Application Technology (BPPT) in Indonesia, the National Electronics and Computer Technology Center (NECTEC) in Thailand, the Center for Development of Advanced Computing (CDAC) in India, the National Taiwan University (NTU) in Taiwan. Partners are still being sought for other languages in Asia. More details about the A-STAR project can be found in (Nakamura et al., 2007). 2 Indonesian Language Characteristic The Indonesian language, or so-called Bahasa Indonesia, is a unified language formed from hundreds of languages spoken throughout the Indonesian archipelago. Compared to other languages, which have a high density of native speakers, Indonesian is spoken as a mother tongue by only 7% of the population, and more than 195 million people speak it as a second language with varying degrees of proficiency. There are approximately 300 ethnic groups living throughout 17,508 islands, speaking 365 native languages or no less than 669 dialects (Tan, 2004). 
At home, people speak their own language, such as Javanese, Sundanese or Balinese, even though almost everybody has a good understanding of Indonesian as they learn it in school. Although the Indonesian language is infused with highly distinctive accents from different ethnic languages, there are many similarities in patterns across the archipelago. Modern Indonesian is derived from the literary of the Malay dialect. Thus, it is closely related to the Malay spoken in Malaysia, Singapore, Brunei, and some other areas. Unlike the Chinese language, it is not a tonal language. Compared with European languages, Indonesian has a strikingly small use of gendered words. Plurals are often expressed by means of word repetition. It is also a member of the agglutinative language family, meaning that it has a complex range of prefixes and suffixes, which are attached to base words. Consequently, a word can become very long. More details on Indonesian characteristics can be found in (Sakti et al., 2004). 3 Indonesian Phoneme Set The Indonesian phoneme set is defined based on Indonesian grammar described in (Alwi et al., 2003). A full phoneme set contains 33 phoneme symbols in total, which consists of 10 vowels (including diphthongs), 22 consonants, and one silent symbol. The vowel articulation pattern of the Indonesian language, which indicates the first two resonances of the vocal tract, F1 (height) and F2 (backness), is shown in Fig. 2.",
"title": ""
},
{
"docid": "4381ee2e578a640dda05e609ed7f6d53",
"text": "We introduce neural networks for end-to-end differentiable proving of queries to knowledge bases by operating on dense vector representations of symbols. These neural networks are constructed recursively by taking inspiration from the backward chaining algorithm as used in Prolog. Specifically, we replace symbolic unification with a differentiable computation on vector representations of symbols using a radial basis function kernel, thereby combining symbolic reasoning with learning subsymbolic vector representations. By using gradient descent, the resulting neural network can be trained to infer facts from a given incomplete knowledge base. It learns to (i) place representations of similar symbols in close proximity in a vector space, (ii) make use of such similarities to prove queries, (iii) induce logical rules, and (iv) use provided and induced logical rules for multi-hop reasoning. We demonstrate that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.",
"title": ""
},
{
"docid": "e2b8dd31dad42e82509a8df6cf21df11",
"text": "Recent experiments indicate the need for revision of a model of spatial memory consisting of viewpoint-specific representations, egocentric spatial updating and a geometric module for reorientation. Instead, it appears that both egocentric and allocentric representations exist in parallel, and combine to support behavior according to the task. Current research indicates complementary roles for these representations, with increasing dependence on allocentric representations with the amount of movement between presentation and retrieval, the number of objects remembered, and the size, familiarity and intrinsic structure of the environment. Identifying the neuronal mechanisms and functional roles of each type of representation, and of their interactions, promises to provide a framework for investigation of the organization of human memory more generally.",
"title": ""
},
{
"docid": "3ed6df057a32b9dcf243b5ac367b4912",
"text": "This paper presents advancements in induction motor endring design to overcome mechanical limitations and extend the operating speed range and joint reliability of induction machines. A novel endring design met the challenging mechanical requirements of this high speed, high temperature, power dense application, without compromising electrical performance. Analysis is presented of the advanced endring design features including a non uniform cross section, hoop stress relief cuts, and an integrated joint boss, which reduced critical stress concentrations, allowing operation under a broad speed and temperature design range. A generalized treatment of this design approach is presented comparing the concept results to conventional design techniques. Additionally, a low temperature joining process of the bar/end ring connection is discussed that provides the required joint strength without compromising the mechanical strength of the age hardened parent metals. A description of a prototype 2 MW, 15,000 rpm flywheel motor generator embodying this technology is presented",
"title": ""
},
{
"docid": "b3fd58901706f7cb3ed653572e634c78",
"text": "This paper presents visual analysis of eye state and head pose (HP) for continuous monitoring of alertness of a vehicle driver. Most existing approaches to visual detection of nonalert driving patterns rely either on eye closure or head nodding angles to determine the driver drowsiness or distraction level. The proposed scheme uses visual features such as eye index (EI), pupil activity (PA), and HP to extract critical information on nonalertness of a vehicle driver. EI determines if the eye is open, half closed, or closed from the ratio of pupil height and eye height. PA measures the rate of deviation of the pupil center from the eye center over a time period. HP finds the amount of the driver's head movements by counting the number of video segments that involve a large deviation of three Euler angles of HP, i.e., nodding, shaking, and tilting, from its normal driving position. HP provides useful information on the lack of attention, particularly when the driver's eyes are not visible due to occlusion caused by large head movements. A support vector machine (SVM) classifies a sequence of video segments into alert or nonalert driving events. Experimental results show that the proposed scheme offers high classification accuracy with acceptably low errors and false alarms for people of various ethnicity and gender in real road driving conditions.",
"title": ""
},
{
"docid": "d16114259da9edf0022e2a3774c5acf0",
"text": "The multivesicular body (MVB) pathway is responsible for both the biosynthetic delivery of lysosomal hydrolases and the downregulation of numerous activated cell surface receptors which are degraded in the lysosome. We demonstrate that ubiquitination serves as a signal for sorting into the MVB pathway. In addition, we characterize a 350 kDa complex, ESCRT-I (composed of Vps23, Vps28, and Vps37), that recognizes ubiquitinated MVB cargo and whose function is required for sorting into MVB vesicles. This recognition event depends on a conserved UBC-like domain in Vps23. We propose that ESCRT-I represents a conserved component of the endosomal sorting machinery that functions in both yeast and mammalian cells to couple ubiquitin modification to protein sorting and receptor downregulation in the MVB pathway.",
"title": ""
},
{
"docid": "e6cba9e178f568c402be7b25c4f0777f",
"text": "This paper is a tutorial introduction to the Viterbi Algorithm, this is reinforced by an example use of the Viterbi Algorithm in the area of error correction in communications channels. Some extensions to the basic algorithm are also discussed briefly. Some of the many application areas where the Viterbi Algorithm has been used are considered, including it's use in communications, target tracking and pattern recognition problems. A proposal for further research into the use of the Viterbi Algorithm in Signature Verification is then presented, and is the area of present research at the moment.",
"title": ""
},
{
"docid": "0397514e0d4a87bd8b59d9b317f8c660",
"text": "Formula 1 motorsport is a platform for maximum race car driving performance resulting from high-tech developments in the area of lightweight materials and aerodynamic design. In order to ensure the driver’s safety in case of high-speed crashes, special impact structures are designed to absorb the race car’s kinetic energy and limit the decelerations acting on the human body. These energy absorbing structures are made of laminated composite sandwich materials like the whole monocoque chassis and have to meet defined crash test requirements specified by the FIA. This study covers the crash behaviour of the nose cone as the F1 racing car front impact structure. Finite element models for dynamic simulations with the explicit solver LS-DYNA are developed with the emphasis on the composite material modelling. Numerical results are compared to crash test data in terms of deceleration levels, absorbed energy and crushing mechanisms. The validation led to satisfying results and the overall conclusion that dynamic simulations with LS-DYNA can be a helpful tool in the design phase of an F1 racing car front impact structure.",
"title": ""
},
{
"docid": "03e1ede18dcc78409337faf265940a4d",
"text": "Epidermal thickness and its relationship to age, gender, skin type, pigmentation, blood content, smoking habits and body site is important in dermatologic research and was investigated in this study. Biopsies from three different body sites of 71 human volunteers were obtained, and thickness of the stratum corneum and cellular epidermis was measured microscopically using a preparation technique preventing tissue damage. Multiple regressions analysis was used to evaluate the effect of the various factors independently of each other. Mean (SD) thickness of the stratum corneum was 18.3 (4.9) microm at the dorsal aspect of the forearm, 11.0 (2.2) microm at the shoulder and 14.9 (3.4) microm at the buttock. Corresponding values for the cellular epidermis were 56.6 (11.5) microm, 70.3 (13.6) microm and 81.5 (15.7) microm, respectively. Body site largely explains the variation in epidermal thickness, but also a significant individual variation was observed. Thickness of the stratum corneum correlated positively to pigmentation (p = 0.0008) and negatively to the number of years of smoking (p < 0.0001). Thickness of the cellular epidermis correlated positively to blood content (P = 0.028) and was greater in males than in females (P < 0.0001). Epidermal thickness was not correlated to age or skin type.",
"title": ""
},
{
"docid": "910c8ca022db7b806565e1c16c4cfb6a",
"text": "Three di¡erent understandings of causation, each importantly shaped by the work of statisticians, are examined from the point of view of their value to sociologists: causation as robust dependence, causation as consequential manipulation, and causation as generative process. The last is favoured as the basis for causal analysis in sociology. It allows the respective roles of statistics and theory to be clari¢ed and is appropriate to sociology as a largely non-experimental social science in which the concept of action is central.",
"title": ""
},
{
"docid": "97ec7149cbaedc6af3a26030067e2dba",
"text": "Skype is a peer-to-peer VoIP client developed by KaZaa in 2003. Skype claims that it can work almost seamlessly across NATs and firewalls and has better voice quality than the MSN and Yahoo IM applications. It encrypts calls end-to-end, and stores user information in a decentralized fashion. Skype also supports instant messaging and conferencing. This report analyzes key Skype functions such as login, NAT and firewall traversal, call establishment, media transfer, codecs, and conferencing under three different network setups. Analysis is performed by careful study of Skype network traffic.",
"title": ""
},
{
"docid": "2316e37df8796758c86881aaeed51636",
"text": "Physical activity recognition using embedded sensors has enabled many context-aware applications in different areas, such as healthcare. Initially, one or more dedicated wearable sensors were used for such applications. However, recently, many researchers started using mobile phones for this purpose, since these ubiquitous devices are equipped with various sensors, ranging from accelerometers to magnetic field sensors. In most of the current studies, sensor data collected for activity recognition are analyzed offline using machine learning tools. However, there is now a trend towards implementing activity recognition systems on these devices in an online manner, since modern mobile phones have become more powerful in terms of available resources, such as CPU, memory and battery. The research on offline activity recognition has been reviewed in several earlier studies in detail. However, work done on online activity recognition is still in its infancy and is yet to be reviewed. In this paper, we review the studies done so far that implement activity recognition systems on mobile phones and use only their on-board sensors. We discuss various aspects of these studies. Moreover, we discuss their limitations and present various recommendations for future research.",
"title": ""
},
{
"docid": "791314f5cee09fc8e27c236018a0927f",
"text": "© The Author(s) 2018. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creat iveco mmons .org/licen ses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creat iveco mmons .org/ publi cdoma in/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Oral presentations",
"title": ""
},
{
"docid": "7d7ea6239106f614f892701e527122e2",
"text": "The purpose of this study was to investigate the effects of aromatherapy on the anxiety, sleep, and blood pressure (BP) of percutaneous coronary intervention (PCI) patients in an intensive care unit (ICU). Fifty-six patients with PCI in ICU were evenly allocated to either the aromatherapy or conventional nursing care. Aromatherapy essential oils were blended with lavender, roman chamomile, and neroli with a 6 : 2 : 0.5 ratio. Participants received 10 times treatment before PCI, and the same essential oils were inhaled another 10 times after PCI. Outcome measures patients' state anxiety, sleeping quality, and BP. An aromatherapy group showed significantly low anxiety (t = 5.99, P < .001) and improving sleep quality (t = -3.65, P = .001) compared with conventional nursing intervention. The systolic BP of both groups did not show a significant difference by time or in a group-by-time interaction; however, a significant difference was observed between groups (F = 4.63, P = .036). The diastolic BP did not show any significant difference by time or by a group-by-time interaction; however, a significant difference was observed between groups (F = 6.93, P = .011). In conclusion, the aromatherapy effectively reduced the anxiety levels and increased the sleep quality of PCI patients admitted to the ICU. Aromatherapy may be used as an independent nursing intervention for reducing the anxiety levels and improving the sleep quality of PCI patients.",
"title": ""
}
] | scidocsrr |
b3560ff550f50e2f79dae2a24428fcbd | Energy-Efficient Indoor Localization of Smart Hand-Held Devices Using Bluetooth | [
{
"docid": "4c7d66d767c9747fdd167f1be793d344",
"text": "In this paper, we introduce a new approach to location estimation where, instead of locating a single client, we simultaneously locate a set of wireless clients. We present a Bayesian hierarchical model for indoor location estimation in wireless networks. We demonstrate that our model achieves accuracy that is similar to other published models and algorithms. By harnessing prior knowledge, our model eliminates the requirement for training data as compared with existing approaches, thereby introducing the notion of a fully adaptive zero profiling approach to location estimation.",
"title": ""
}
] | [
{
"docid": "58fe53f045228772b3a04dc0de095970",
"text": "Heterogeneous systems, that marry CPUs and GPUs together in a range of configurations, are quickly becoming the design paradigm for today's platforms because of their impressive parallel processing capabilities. However, in many existing heterogeneous systems, the GPU is only treated as an accelerator by the CPU, working as a slave to the CPU master. But recently we are starting to see the introduction of a new class of devices and changes to the system runtime model, which enable accelerators to be treated as first-class computing devices. To support programmability and efficiency of heterogeneous programming, the HSA foundation introduced the Heterogeneous System Architecture (HSA), which defines a platform and runtime architecture that provides rich support for OpenCL 2.0 features including shared virtual memory, dynamic parallelism, and improved atomic operations. In this paper, we provide the first comprehensive study of OpenCL 2.0 and HSA 1.0 execution, considering OpenCL 1.2 as the baseline. For workloads, we develop a suite of OpenCL micro-benchmarks designed to highlight the features of these emerging standards and also utilize real-world applications to better understand their impact at an application level. To fully exercise the new features provided by the HSA model, we experiment with a producer-consumer algorithm and persistent kernels. We find that by using HSA signals, we can remove 92% of the overhead due to synchronous kernel launches. In our real-world applications, the OpenCL 2.0 runtime achieves up to a 1.2X speedup, while the HSA 1.0 runtime achieves a 2.7X speedup over OpenCL 1.2.",
"title": ""
},
{
"docid": "16be435a946f8ff5d8d084f77373a6f3",
"text": "Answer selection is a core component in any question-answering systems. It aims to select correct answer sentences for a given question from a pool of candidate sentences. In recent years, many deep learning methods have been proposed and shown excellent results for this task. However, these methods typically require extensive parameter (and hyper-parameter) tuning, which gives rise to efficiency issues for large-scale datasets, and potentially makes them less portable across new datasets and domains (as re-tuning is usually required). In this paper, we propose an extremely efficient hybrid model (FastHybrid) that tackles the problem from both an accuracy and scalability point of view. FastHybrid is a light-weight model that requires little tuning and adaptation across different domains. It combines a fast deep model (which will be introduced in the method section) with an initial information retrieval model to effectively and efficiently handle answer selection. We introduce a new efficient attention mechanism in the hybrid model and demonstrate its effectiveness on several QA datasets. Experimental results show that although the hybrid uses no training data, its accuracy is often on-par with supervised deep learning techniques, while significantly reducing training and tuning costs across different domains.",
"title": ""
},
{
"docid": "b6ab7ac8029950f85d412b90963e679d",
"text": "Adaptive traffic signal control system is needed to avoid traffic congestion that has many disadvantages. This paper presents an adaptive traffic signal control system using camera as an input sensor that providing real-time traffic data. Principal Component Analysis (PCA) is used to analyze and to classify object on video frame for detecting vehicles. Distributed Constraint Satisfaction Problem (DCSP) method determine the duration of each traffic signal, based on counted number of vehicles at each lane. The system is implemented in embedded systems using BeagleBoard™.",
"title": ""
},
{
"docid": "6c3be94fe73ef79d711ef5f8b9c789df",
"text": "• Belief update based on m last rewards • Gaussian belief model instead of Beta • Limited lookahead to h steps and a myopic function in the horizon. • Noisy rewards Motivation: Correct sequential decision-making is critical for life success, and optimal approaches require signi!cant computational look ahead. However, simple models seem to explain people’s behavior. Questions: (1) Why we seem so simple compared to a rational agent? (2) What is the built-in model that we use to sequentially choose between courses of actions?",
"title": ""
},
{
"docid": "5454fbb1a924f3360a338c11a88bea89",
"text": "PURPOSE OF REVIEW\nThis review describes the most common motor neuron disease, ALS. It discusses the diagnosis and evaluation of ALS and the current understanding of its pathophysiology, including new genetic underpinnings of the disease. This article also covers other motor neuron diseases, reviews how to distinguish them from ALS, and discusses their pathophysiology.\n\n\nRECENT FINDINGS\nIn this article, the spectrum of cognitive involvement in ALS, new concepts about protein synthesis pathology in the etiology of ALS, and new genetic associations will be covered. This concept has changed over the past 3 to 4 years with the discovery of new genes and genetic processes that may trigger the disease. As of 2014, two-thirds of familial ALS and 10% of sporadic ALS can be explained by genetics. TAR DNA binding protein 43 kDa (TDP-43), for instance, has been shown to cause frontotemporal dementia as well as some cases of familial ALS, and is associated with frontotemporal dysfunction in ALS.\n\n\nSUMMARY\nThe anterior horn cells control all voluntary movement: motor activity, respiratory, speech, and swallowing functions are dependent upon signals from the anterior horn cells. Diseases that damage the anterior horn cells, therefore, have a profound impact. Symptoms of anterior horn cell loss (weakness, falling, choking) lead patients to seek medical attention. Neurologists are the most likely practitioners to recognize and diagnose damage or loss of anterior horn cells. ALS, the prototypical motor neuron disease, demonstrates the impact of this class of disorders. ALS and other motor neuron diseases can represent diagnostic challenges. Neurologists are often called upon to serve as a \"medical home\" for these patients: coordinating care, arranging for durable medical equipment, and leading discussions about end-of-life care with patients and caregivers. It is important for neurologists to be able to identify motor neuron diseases and to evaluate and treat patients affected by them.",
"title": ""
},
{
"docid": "d2b27ab3eb0aa572fdf8f8e3de6ae952",
"text": "Both industry and academia have extensively investigated hardware accelerations. To address the demands in increasing computational capability and memory requirement, in this work, we propose the structured weight matrices (SWM)-based compression technique for both Field Programmable Gate Array (FPGA) and application-specific integrated circuit (ASIC) implementations. In the algorithm part, the SWM-based framework adopts block-circulant matrices to achieve a fine-grained tradeoff between accuracy and compression ratio. The SWM-based technique can reduce computational complexity from O(n2) to O(nlog n) and storage complexity from O(n2) to O(n) for each layer and both training and inference phases. For FPGA implementations on deep convolutional neural networks (DCNNs), we achieve at least 152X and 72X improvement in performance and energy efficiency, respectively using the SWM-based framework, compared with the baseline of IBM TrueNorth processor under same accuracy constraints using the data set of MNIST, SVHN, and CIFAR-10. For FPGA implementations on long short term memory (LSTM) networks, the proposed SWM-based LSTM can achieve up to 21X enhancement in performance and 33.5X gains in energy efficiency compared with the ESE accelerator. For ASIC implementations, the proposed SWM-based ASIC design exhibits impressive advantages in terms of power, throughput, and energy efficiency. Experimental results indicate that this method is greatly suitable for applying DNNs onto both FPGAs and mobile/IoT devices.",
"title": ""
},
{
"docid": "2efd26fc1e584aa5f70bdf9d24e5c2cd",
"text": "Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast and questions notions generally held to be “laws of nature” by practitioners of numerical computing: 1. High-level dynamic programs have to be slow. 2. One must prototype in one language and then rewrite in another language for speed or deployment. 3. There are parts of a system appropriate for the programmer, and other parts that are best left untouched as they have been built by the experts. We introduce the Julia programming language and its design—a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, which is what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can achieve machine performance without sacrificing human convenience.",
"title": ""
},
{
"docid": "fef4383a5a06687636ba4001ab0e510c",
"text": "In this paper, a depth camera-based novel approach for human activity recognition is presented using robust depth silhouettes context features and advanced Hidden Markov Models (HMMs). During HAR framework, at first, depth maps are processed to identify human silhouettes from noisy background by considering frame differentiation constraints of human body motion and compute depth silhouette area for each activity to track human movements in a scene. From the depth silhouettes context features, temporal frames information are computed for intensity differentiation measurements, depth history features are used to store gradient orientation change in overall activity sequence and motion difference features are extracted for regional motion identification. Then, these features are processed by Principal component analysis for dimension reduction and kmean clustering for code generation to make better activity representation. Finally, we proposed a new way to model, train and recognize different activities using advanced HMM. Each activity has been chosen with the highest likelihood value. Experimental results show superior recognition rate, resulting up to the mean recognition of 57.69% over the state of the art methods for fifteen daily routine activities using IM-Daily Depth Activity dataset. In addition, MSRAction3D dataset also showed some promising results.",
"title": ""
},
{
"docid": "7a37df81ad70697549e6da33384b4f19",
"text": "Water scarcity is now one of the major global crises, which has affected many aspects of human health, industrial development and ecosystem stability. To overcome this issue, water desalination has been employed. It is a process to remove salt and other minerals from saline water, and it covers a variety of approaches from traditional distillation to the well-established reverse osmosis. Although current water desalination methods can effectively provide fresh water, they are becoming increasingly controversial due to their adverse environmental impacts including high energy intensity and highly concentrated brine waste. For millions of years, microorganisms, the masters of adaptation, have survived on Earth without the excessive use of energy and resources or compromising their ambient environment. This has encouraged scientists to study the possibility of using biological processes for seawater desalination and the field has been exponentially growing ever since. Here, the term biodesalination is offered to cover all of the techniques which have their roots in biology for producing fresh water from saline solution. In addition to reviewing and categorizing biodesalination processes for the first time, this review also reveals unexplored research areas in biodesalination having potential to be used in water treatment.",
"title": ""
},
{
"docid": "7f47a4b5152acf7e38d5c39add680f9d",
"text": "unit of computation and a processor a piece of physical hardware In addition to reading to and writing from local memory a process can send and receive messages by making calls to a library of message passing routines The coordinated exchange of messages has the e ect of synchronizing processes This can be achieved by the synchronous exchange of messages in which the sending operation does not terminate until the receive operation has begun A di erent form of synchronization occurs when a message is sent asynchronously but the receiving process must wait or block until the data arrives Processes can be mapped to physical processors in various ways the mapping employed does not a ect the semantics of a program In particular multiple processes may be mapped to a single processor The message passing model provides a mechanism for talking about locality data contained in the local memory of a process are close and other data are remote We now examine some other properties of the message passing programming model performance mapping independence and modularity",
"title": ""
},
{
"docid": "a33e8a616955971014ceea9da1e8fcbe",
"text": "Highlights Auditory middle and late latency responses can be recorded reliably from ear-EEG.For sources close to the ear, ear-EEG has the same signal-to-noise-ratio as scalp.Ear-EEG is an excellent match for power spectrum-based analysis. A method for measuring electroencephalograms (EEG) from the outer ear, so-called ear-EEG, has recently been proposed. The method could potentially enable robust recording of EEG in natural environments. The objective of this study was to substantiate the ear-EEG method by using a larger population of subjects and several paradigms. For rigor, we considered simultaneous scalp and ear-EEG recordings with common reference. More precisely, 32 conventional scalp electrodes and 12 ear electrodes allowed a thorough comparison between conventional and ear electrodes, testing several different placements of references. The paradigms probed auditory onset response, mismatch negativity, auditory steady-state response and alpha power attenuation. By comparing event related potential (ERP) waveforms from the mismatch response paradigm, the signal measured from the ear electrodes was found to reflect the same cortical activity as that from nearby scalp electrodes. It was also found that referencing the ear-EEG electrodes to another within-ear electrode affects the time-domain recorded waveform (relative to scalp recordings), but not the timing of individual components. It was furthermore found that auditory steady-state responses and alpha-band modulation were measured reliably with the ear-EEG modality. Finally, our findings showed that the auditory mismatch response was difficult to monitor with the ear-EEG. We conclude that ear-EEG yields similar performance as conventional EEG for spectrogram-based analysis, similar timing of ERP components, and equal signal strength for sources close to the ear. Ear-EEG can reliably measure activity from regions of the cortex which are located close to the ears, especially in paradigms employing frequency-domain analyses.",
"title": ""
},
{
"docid": "4f1070b988605290c1588918a716cef2",
"text": "The aim of this paper was to predict the static bending modulus of elasticity (MOES) and modulus of rupture (MOR) of Scots pine (Pinus sylvestris L.) wood using three nondestructive techniques. The mean values of the dynamic modulus of elasticity based on flexural vibration (MOEF), longitudinal vibration (MOELV), and indirect ultrasonic (MOEUS) were 13.8, 22.3, and 30.9 % higher than the static modulus of elasticity (MOES), respectively. The reduction of this difference, taking into account the shear deflection effect in the output values for static bending modulus of elasticity, was also discussed in this study. The three dynamic moduli of elasticity correlated well with the static MOES and MOR; correlation coefficients ranged between 0.68 and 0.96. The correlation coefficients between the dynamic moduli and MOES were higher than those between the dynamic moduli and MOR. The highest correlation between the dynamic moduli and static bending properties was obtained by the flexural vibration technique in comparison with longitudinal vibration and indirect ultrasonic techniques. Results showed that there was no obvious relationship between the density and the acoustic wave velocity that was obtained from the longitudinal vibration and ultrasonic techniques.",
"title": ""
},
{
"docid": "6921cd9c2174ca96ec0061ae2dd881eb",
"text": "Modern Massively Multiplayer Online Role-Playing Games (MMORPGs) provide lifelike virtual environments in which players can conduct a variety of activities including combat, trade, and chat with other players. While the game world and the available actions therein are inspired by their offline counterparts, the games' popularity and dedicated fan base are testaments to the allure of novel social interactions granted to people by allowing them an alternative life as a new character and persona. In this paper we investigate the phenomenon of \"gender swapping,\" which refers to players choosing avatars of genders opposite to their natural ones. We report the behavioral patterns observed in players of Fairyland Online, a globally serviced MMORPG, during social interactions when playing as in-game avatars of their own real gender or gender-swapped. We also discuss the effect of gender role and self-image in virtual social situations and the potential of our study for improving MMORPG quality and detecting online identity frauds.",
"title": ""
},
{
"docid": "44e5c86afbe3814ad718aa27880941c4",
"text": "This paper introduces genetic algorithms (GA) as a complete entity, in which knowledge of this emerging technology can be integrated together to form the framework of a design tool for industrial engineers. An attempt has also been made to explain “why’’ and “when” GA should be used as an optimization tool.",
"title": ""
},
{
"docid": "93a39df6ee080e359f50af46d02cdb71",
"text": "Mobile edge computing (MEC) providing information technology and cloud-computing capabilities within the radio access network is an emerging technique in fifth-generation networks. MEC can extend the computational capacity of smart mobile devices (SMDs) and economize SMDs’ energy consumption by migrating the computation-intensive task to the MEC server. In this paper, we consider a multi-mobile-users MEC system, where multiple SMDs ask for computation offloading to a MEC server. In order to minimize the energy consumption on SMDs, we jointly optimize the offloading selection, radio resource allocation, and computational resource allocation coordinately. We formulate the energy consumption minimization problem as a mixed interger nonlinear programming (MINLP) problem, which is subject to specific application latency constraints. In order to solve the problem, we propose a reformulation-linearization-technique-based Branch-and-Bound (RLTBB) method, which can obtain the optimal result or a suboptimal result by setting the solving accuracy. Considering the complexity of RTLBB cannot be guaranteed, we further design a Gini coefficient-based greedy heuristic (GCGH) to solve the MINLP problem in polynomial complexity by degrading the MINLP problem into the convex problem. Many simulation results demonstrate the energy saving enhancements of RLTBB and GCGH.",
"title": ""
},
{
"docid": "28352c478552728dddf09a2486f6c63c",
"text": "Motion blur due to camera motion can significantly degrade the quality of an image. Since the path of the camera motion can be arbitrary, deblurring of motion blurred images is a hard problem. Previous methods to deal with this problem have included blind restoration of motion blurred images, optical correction using stabilized lenses, and special CMOS sensors that limit the exposure time in the presence of motion. In this paper, we exploit the fundamental trade off between spatial resolution and temporal resolution to construct a hybrid camera that can measure its own motion during image integration. The acquired motion information is used to compute a point spread function (PSF) that represents the path of the camera during integration. This PSF is then used to deblur the image. To verify the feasibility of hybrid imaging for motion deblurring, we have implemented a prototype hybrid camera. This prototype system was evaluated in different indoor and outdoor scenes using long exposures and complex camera motion paths. The results show that, with minimal resources, hybrid imaging outperforms previous approaches to the motion blur problem. We conclude with a brief discussion on how our ideas can be extended beyond the case of global camera motion to the case where individual objects in the scene move with different velocities.",
"title": ""
},
{
"docid": "c784bfbd522bb4c9908c3f90a31199fe",
"text": "Vedolizumab (VDZ) inhibits α4β7 integrins and is used to target intestinal immune responses in patients with inflammatory bowel disease, which is considered to be relatively safe. Here we report on a fatal complication following VDZ administration. A 64-year-old female patient with ulcerative colitis (UC) refractory to tumor necrosis factor inhibitors was treated with VDZ. One week after the second VDZ infusion, she was admitted to hospital with severe diarrhea and systemic inflammatory response syndrome (SIRS). Blood stream infections were ruled out, and endoscopy revealed extensive ulcerations of the small intestine covered with pseudomembranes, reminiscent of invasive candidiasis or mesenteric ischemia. Histology confirmed subtotal destruction of small intestinal epithelia and colonization with Candida. Moreover, small mesenteric vessels were occluded by hyaline thrombi, likely as a result of SIRS, while perfusion of large mesenteric vessels was not compromised. Beta-D-glucan concentrations were highly elevated, and antimycotic therapy was initiated for suspected invasive candidiasis but did not result in any clinical benefit. Given the non-responsiveness to anti-infective therapies, an autoimmune phenomenon was suspected and immunosuppressive therapy was escalated. However, the patient eventually died from multi-organ failure. This case should raise the awareness for rare but severe complications related to immunosuppressive therapy, particularly in high risk patients.",
"title": ""
},
{
"docid": "88e582927c4e4018cb4071eeeb6feff4",
"text": "While previous studies have correlated the Dark Triad traits (i.e., narcissism, psychopathy, and Machiavellianism) with a preference for short-term relationships, little research has addressed possible correlations with short-term relationship sub-types. In this online study using Amazon’s Mechanical Turk system (N = 210) we investigated the manner in which scores on the Dark Triad relate to the selection of different mating environments using a budget-allocation task. Overall, the Dark Triad were positively correlated with preferences for short-term relationships and negatively correlated with preferences for a long-term relationship. Specifically, narcissism was uniquely correlated with preferences for one-night stands and friends-with-benefits and psychopathy was uniquely correlated with preferences for bootycall relationships. Both narcissism and psychopathy were negatively correlated with preferences for serious romantic relationships. In mediation analyses, psychopathy partially mediated the sex difference in preferences for booty-call relationships and narcissism partially mediated the sex difference in preferences for one-night stands. In addition, the sex difference in preference for serious romantic relationships was partially mediated by both narcissism and psychopathy. It appears the Dark Triad traits facilitate the adoption of specific mating environments providing fit with people’s personality traits. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7ce79a08969af50c1712f0e291dd026c",
"text": "Collaborative filtering (CF) is valuable in e-commerce, and for direct recommendations for music, movies, news etc. But today's systems have several disadvantages, including privacy risks. As we move toward ubiquitous computing, there is a great potential for individuals to share all kinds of information about places and things to do, see and buy, but the privacy risks are severe. In this paper we describe a new method for collaborative filtering which protects the privacy of individual data. The method is based on a probabilistic factor analysis model. Privacy protection is provided by a peer-to-peer protocol which is described elsewhere, but outlined in this paper. The factor analysis approach handles missing data without requiring default values for them. We give several experiments that suggest that this is most accurate method for CF to date. The new algorithm has other advantages in speed and storage over previous algorithms. Finally, we suggest applications of the approach to other kinds of statistical analyses of survey or questionaire data.",
"title": ""
},
{
"docid": "9c1e518c80dfbf201291923c9c55f1fd",
"text": "Computation underlies the organization of cells into higher-order structures, for example during development or the spatial association of bacteria in a biofilm. Each cell performs a simple computational operation, but when combined with cell–cell communication, intricate patterns emerge. Here we study this process by combining a simple genetic circuit with quorum sensing to produce more complex computations in space. We construct a simple NOR logic gate in Escherichia coli by arranging two tandem promoters that function as inputs to drive the transcription of a repressor. The repressor inactivates a promoter that serves as the output. Individual colonies of E. coli carry the same NOR gate, but the inputs and outputs are wired to different orthogonal quorum-sensing ‘sender’ and ‘receiver’ devices. The quorum molecules form the wires between gates. By arranging the colonies in different spatial configurations, all possible two-input gates are produced, including the difficult XOR and EQUALS functions. The response is strong and robust, with 5- to >300-fold changes between the ‘on’ and ‘off’ states. This work helps elucidate the design rules by which simple logic can be harnessed to produce diverse and complex calculations by rewiring communication between cells.",
"title": ""
}
] | scidocsrr |
e9186d6222a2baf349f8ae3316689fdb | TWO What Does It Mean to be Biased : Motivated Reasoning and Rationality | [
{
"docid": "6103a365705a6083e40bb0ca27f6ca78",
"text": "Confirmation bias, as the term is typically used in the psychological literature, connotes the seeking or interpreting of evidence in ways that are partial to existing beliefs, expectations, or a hypothesis in hand. The author reviews evidence of such a bias in a variety of guises and gives examples of its operation in several practical contexts. Possible explanations are considered, and the question of its utility or disutility is discussed.",
"title": ""
}
] | [
{
"docid": "5b01c2e7bba6ab1abdda9b1a23568d2a",
"text": "First, we theoretically analyze the MMD-based estimates. Our analysis establishes that, under some mild conditions, the estimate is statistically consistent. More importantly, it provides an upper bound on the error in the estimate in terms of intuitive geometric quantities like class separation and data spread. Next, we use the insights obtained from the theoretical analysis, to propose a novel convex formulation that automatically learns the kernel to be employed in the MMD-based estimation. We design an efficient cutting plane algorithm for solving this formulation. Finally, we empirically compare our estimator with several existing methods, and show significantly improved performance under varying datasets, class ratios, and training sizes.",
"title": ""
},
{
"docid": "e0c52b0fdf2d67bca4687b8060565288",
"text": "Large graph databases are commonly collected and analyzed in numerous domains. For reasons related to either space efficiency or for privacy protection (e.g., in the case of social network graphs), it sometimes makes sense to replace the original graph with a summary, which removes certain details about the original graph topology. However, this summarization process leaves the database owner with the challenge of processing queries that are expressed in terms of the original graph, but are answered using the summary. In this paper, we propose a formal semantics for answering queries on summaries of graph structures. At its core, our formulation is based on a random worlds model. We show that important graph-structure queries (e.g., adjacency, degree, and eigenvector centrality) can be answered efficiently and in closed form using these semantics. Further, based on this approach to query answering, we formulate three novel graph partitioning/compression problems. We develop algorithms for finding a graph summary that least affects the accuracy of query results, and we evaluate our proposed algorithms using both real and synthetic data.",
"title": ""
},
{
"docid": "dff09daea034a765b858bc6a457cb6a7",
"text": "We study the problem of automatically and efficiently generating itineraries for users who are on vacation. We focus on the common case, wherein the trip duration is more than a single day. Previous efficient algorithms based on greedy heuristics suffer from two problems. First, the itineraries are often unbalanced, with excellent days visiting top attractions followed by days of exclusively lower-quality alternatives. Second, the trips often re-visit neighborhoods repeatedly in order to cover increasingly low-tier points of interest. Our primary technical contribution is an algorithm that addresses both these problems by maximizing the quality of the worst day. We give theoretical results showing that this algorithm»s competitive factor is within a factor two of the guarantee of the best available algorithm for a single day, across many variations of the problem. We also give detailed empirical evaluations using two distinct datasets:(a) anonymized Google historical visit data and(b) Foursquare public check-in data. We show first that the overall utility of our itineraries is almost identical to that of algorithms specifically designed to maximize total utility, while the utility of the worst day of our itineraries is roughly twice that obtained from other approaches. We then turn to evaluation based on human raters who score our itineraries only slightly below the itineraries created by human travel experts with deep knowledge of the area.",
"title": ""
},
{
"docid": "911ca70346689d6ba5fd01b1bc964dbe",
"text": "We present a novel texture compression scheme, called iPACKMAN, targeted for hardware implementation. In terms of image quality, it outperforms the previous de facto standard texture compression algorithms in the majority of all cases that we have tested. Our new algorithm is an extension of the PACKMAN texture compression system, and while it is a bit more complex than PACKMAN, it is still very low in terms of hardware complexity.",
"title": ""
},
{
"docid": "f2daa3fd822be73e3663520cc6afe741",
"text": "Low health literacy (LHL) remains a formidable barrier to improving health care quality and outcomes. Given the lack of precision of single demographic characteristics to predict health literacy, and the administrative burden and inability of existing health literacy measures to estimate health literacy at a population level, LHL is largely unaddressed in public health and clinical practice. To help overcome these limitations, we developed two models to estimate health literacy. We analyzed data from the 2003 National Assessment of Adult Literacy (NAAL), using linear regression to predict mean health literacy scores and probit regression to predict the probability of an individual having ‘above basic’ proficiency. Predictors included gender, age, race/ethnicity, educational attainment, poverty status, marital status, language spoken in the home, metropolitan statistical area (MSA) and length of time in U.S. All variables except MSA were statistically significant, with lower educational attainment being the strongest predictor. Our linear regression model and the probit model accounted for about 30% and 21% of the variance in health literacy scores, respectively, nearly twice as much as the variance accounted for by either education or poverty alone. Multivariable models permit a more accurate estimation of health literacy than single predictors. Further, such models can be applied to readily available administrative or census data to produce estimates of average health literacy and identify communities that would benefit most from appropriate, targeted interventions in the clinical setting to address poor quality care and outcomes related to LHL.",
"title": ""
},
{
"docid": "cc9de768281e58749cd073d25a97d39c",
"text": "The Dynamic Adaptive Streaming over HTTP (referred as MPEG DASH) standard is designed to provide high quality of media content over the Internet delivered from conventional HTTP web servers. The visual content, divided into a sequence of segments, is made available at a number of different bitrates so that an MPEG DASH client can automatically select the next segment to download and play back based on current network conditions. The task of transcoding media content to different qualities and bitrates is computationally expensive, especially in the context of large-scale video hosting systems. Therefore, it is preferably executed in a powerful cloud environment, rather than on the source computer (which may be a mobile device with limited memory, CPU speed and battery life). In order to support the live distribution of media events and to provide a satisfactory user experience, the overall processing delay of videos should be kept to a minimum. In this paper, we propose a novel dynamic scheduling methodology on video transcoding for MPEG DASH in a cloud environment, which can be adapted to different applications. The designed scheduler monitors the workload on each processor in the cloud environment and selects the fastest processors to run high-priority jobs. It also adjusts the video transcoding mode (VTM) according to the system load. Experimental results show that the proposed scheduler performs well in terms of the video completion time, system load balance, and video playback smoothness.",
"title": ""
},
{
"docid": "7eba71bb191a31bd87cd9d2678a7b860",
"text": "In winter, rainbow smelt (Osmerus mordax) accumulate glycerol and produce an antifreeze protein (AFP), which both contribute to freeze resistance. The role of differential gene expression in the seasonal pattern of these adaptations was investigated. First, cDNAs encoding smelt and Atlantic salmon (Salmo salar) phosphoenolpyruvate carboxykinase (PEPCK) and smelt glyceraldehyde-3-phosphate dehydrogenase (GAPDH) were cloned so that all sequences required for expression analysis would be available. Using quantitative PCR, expression of beta actin in rainbow smelt liver was compared with that of GAPDH in order to determine its validity as a reference gene. Then, levels of glycerol-3-phosphate dehydrogenase (GPDH), PEPCK, and AFP relative to beta actin were measured in smelt liver over a fall-winter-spring interval. Levels of GPDH mRNA increased in the fall just before plasma glycerol accumulation, implying a driving role in glycerol synthesis. GPDH mRNA levels then declined during winter, well in advance of serum glycerol, suggesting the possibility of GPDH enzyme or glycerol conservation in smelt during the winter months. PEPCK mRNA levels rose in parallel with serum glycerol in the fall, consistent with an increasing requirement for amino acids as metabolic precursors, remained elevated for much of the winter, and then declined in advance of the decline in plasma glycerol. AFP mRNA was elevated at the onset of fall sampling in October and remained elevated until April, implying separate regulation from GPDH and PEPCK. Thus, winter freezing point depression in smelt appears to result from a seasonal cycle of GPDH gene expression, with an ensuing increase in the expression of PEPCK, and a similar but independent cycle of AFP gene expression.",
"title": ""
},
{
"docid": "4cd1eeb516d602390703b66d3201a9dc",
"text": "A thorough understanding of the orbit, structures within it, and complex spatial relationships among these structures bears relevance in a variety of neurosurgical cases. We describe the 3-dimensional surgical anatomy of the orbit and fragile and complex network of neurovascular architectures, flanked by a series of muscular and glandular structures, found within the orbital dura.",
"title": ""
},
{
"docid": "4a837ccd9e392f8c7682446d9a3a3743",
"text": "This paper investigates the applicability of Genetic Programming type systems to dynamic game environments. Grammatical Evolution was used to evolve Behaviour Trees, in order to create controllers for the Mario AI Benchmark. The results obtained reinforce the applicability of evolutionary programming systems to the development of artificial intelligence in games, and in dynamic systems in general, illustrating their viability as an alternative to more standard AI techniques.",
"title": ""
},
{
"docid": "666137f1b598a25269357d6926c0b421",
"text": "representation techniques. T he World Wide Web is possible because a set of widely established standards guarantees interoperability at various levels. Until now, the Web has been designed for direct human processing, but the next-generation Web, which Tim Berners-Lee and others call the “Semantic Web,” aims at machine-processible information.1 The Semantic Web will enable intelligent services—such as information brokers, search agents, and information filters—which offer greater functionality and interoperability than current stand-alone services. The Semantic Web will only be possible once further levels of interoperability have been established. Standards must be defined not only for the syntactic form of documents, but also for their semantic content. Notable among recent W3C standardization efforts are XML/XML schema and RDF/RDF schema, which facilitate semantic interoperability. In this article, we explain the role of ontologies in the architecture of the Semantic Web. We then briefly summarize key elements of XML and RDF, showing why using XML as a tool for semantic interoperability will be ineffective in the long run. We argue that a further representation and inference layer is needed on top of the Web’s current layers, and to establish such a layer, we propose a general method for encoding ontology representation languages into RDF/RDF schema. We illustrate the extension method by applying it to Ontology Interchange Language (OIL), an ontology representation and inference language.2",
"title": ""
},
{
"docid": "bc28f28d21605990854ac9649d244413",
"text": "Mobile devices can provide people with contextual information. This information may benefit a primary activity, assuming it is easily accessible. In this paper, we present DisplaySkin, a pose-aware device with a flexible display circling the wrist. DisplaySkin creates a kinematic model of a user's arm and uses it to place information in view, independent of body pose. In doing so, DisplaySkin aims to minimize the cost of accessing information without being intrusive. We evaluated our pose-aware display with a rotational pointing task, which was interrupted by a notification on DisplaySkin. Results show that a pose-aware display reduces the time required to respond to notifications on the wrist.",
"title": ""
},
{
"docid": "6fcfbe651d6c4f3a47bf07ee7d38eee2",
"text": "\"People-nearby applications\" (PNAs) are a form of ubiquitous computing that connect users based on their physical location data. One example is Grindr, a popular PNA that facilitates connections among gay and bisexual men. Adopting a uses and gratifications approach, we conducted two studies. In study one, 63 users reported motivations for Grindr use through open-ended descriptions. In study two, those descriptions were coded into 26 items that were completed by 525 Grindr users. Factor analysis revealed six uses and gratifications: social inclusion, sex, friendship, entertainment, romantic relationships, and location-based search. Two additional analyses examine (1) the effects of geographic location (e.g., urban vs. suburban/rural) on men's use of Grindr and (2) how Grindr use is related to self-disclosure of information. Results highlight how the mixed-mode nature of PNA technology may change the boundaries of online and offline space, and how gay and bisexual men navigate physical environments.",
"title": ""
},
{
"docid": "ae70b9ef5eeb6316b5b022662191cc4f",
"text": "The total harmonic distortion (THD) is an important performance criterion for almost any communication device. In most cases, the THD of a periodic signal, which has been processed in some way, is either measured directly or roughly estimated numerically, while analytic methods are employed only in a limited number of simple cases. However, the knowledge of the theoretical THD may be quite important for the conception and design of the communication equipment (e.g. transmitters, power amplifiers). The aim of this paper is to present a general theoretic approach, which permits to obtain an analytic closed-form expression for the THD. It is also shown that in some cases, an approximate analytic method, having good precision and being less sophisticated, may be developed. Finally, the mathematical technique, on which the proposed method is based, is described in the appendix.",
"title": ""
},
{
"docid": "96c14e4c9082920edb835e85ce99dc21",
"text": "When filling out privacy-related forms in public places such as hospitals or clinics, people usually are not aware that the sound of their handwriting leaks personal information. In this paper, we explore the possibility of eavesdropping on handwriting via nearby mobile devices based on audio signal processing and machine learning. By presenting a proof-of-concept system, WritingHacker, we show the usage of mobile devices to collect the sound of victims' handwriting, and to extract handwriting-specific features for machine learning based analysis. WritingHacker focuses on the situation where the victim's handwriting follows certain print style. An attacker can keep a mobile device, such as a common smart-phone, touching the desk used by the victim to record the audio signals of handwriting. Then the system can provide a word-level estimate for the content of the handwriting. To reduce the impacts of various writing habits and writing locations, the system utilizes the methods of letter clustering and dictionary filtering. Our prototype system's experimental results show that the accuracy of word recognition reaches around 50% - 60% under certain conditions, which reveals the danger of privacy leakage through the sound of handwriting.",
"title": ""
},
{
"docid": "f93e72b45a185e06d03d15791d312021",
"text": "BACKGROUND\nAbnormal scar development following burn injury can cause substantial physical and psychological distress to children and their families. Common burn scar prevention and management techniques include silicone therapy, pressure garment therapy, or a combination of both. Currently, no definitive, high-quality evidence is available for the effectiveness of topical silicone gel or pressure garment therapy for the prevention and management of burn scars in the paediatric population. Thus, this study aims to determine the effectiveness of these treatments in children.\n\n\nMETHODS\nA randomised controlled trial will be conducted at a large tertiary metropolitan children's hospital in Australia. Participants will be randomised to one of three groups: Strataderm® topical silicone gel only, pressure garment therapy only, or combined Strataderm® topical silicone gel and pressure garment therapy. Participants will include 135 children (45 per group) up to 16 years of age who are referred for scar management for a new burn. Children up to 18 years of age will also be recruited following surgery for burn scar reconstruction. Primary outcomes are scar itch intensity and scar thickness. Secondary outcomes include scar characteristics (e.g. colour, pigmentation, pliability, pain), the patient's, caregiver's and therapist's overall opinion of the scar, health service costs, adherence, health-related quality of life, treatment satisfaction and adverse effects. Measures will be completed on up to two sites per person at baseline and 1 week post scar management commencement, 3 months and 6 months post burn, or post burn scar reconstruction. Data will be analysed using descriptive statistics and univariate and multivariate regression analyses.\n\n\nDISCUSSION\nResults of this study will determine the effectiveness of three noninvasive scar interventions in children at risk of, and with, scarring post burn or post reconstruction.\n\n\nTRIAL REGISTRATION\nAustralian New Zealand Clinical Trials Registry, ACTRN12616001100482 . Registered on 5 August 2016.",
"title": ""
},
{
"docid": "4a2de9235a698a3b5e517446088d2ac6",
"text": "In recent years, there has been a growing interest in designing multi-robot systems (hereafter MRSs) to provide cost effective, fault-tolerant and reliable solutions to a variety of automated applications. Here, we review recent advancements in MRSs specifically designed for cooperative object transport, which requires the members of MRSs to coordinate their actions to transport objects from a starting position to a final destination. To achieve cooperative object transport, a wide range of transport, coordination and control strategies have been proposed. Our goal is to provide a comprehensive summary for this relatively heterogeneous and fast-growing body of scientific literature. While distilling the information, we purposefully avoid using hierarchical dichotomies, which have been traditionally used in the field of MRSs. Instead, we employ a coarse-grain approach by classifying each study based on the transport strategy used; pushing-only, grasping and caging. We identify key design constraints that may be shared among these studies despite considerable differences in their design methods. In the end, we discuss several open challenges and possible directions for future work to improve the performance of the current MRSs. Overall, we hope to increasethe visibility and accessibility of the excellent studies in the field and provide a framework that helps the reader to navigate through them more effectively.",
"title": ""
},
{
"docid": "e7c2134b446c4e0e7343ea8812673597",
"text": "Lexical embeddings can serve as useful representations for words for a variety of NLP tasks, but learning embeddings for phrases can be challenging. While separate embeddings are learned for each word, this is infeasible for every phrase. We construct phrase embeddings by learning how to compose word embeddings using features that capture phrase structure and context. We propose efficient unsupervised and task-specific learning objectives that scale our model to large datasets. We demonstrate improvements on both language modeling and several phrase semantic similarity tasks with various phrase lengths. We make the implementation of our model and the datasets available for general use.",
"title": ""
},
{
"docid": "0a2be958c7323d3421304d1613421251",
"text": "Stock price forecasting has aroused great concern in research of economy, machine learning and other fields. Time series analysis methods are usually utilized to deal with this task. In this paper, we propose to combine news mining and time series analysis to forecast inter-day stock prices. News reports are automatically analyzed with text mining techniques, and then the mining results are used to improve the accuracy of time series analysis algorithms. The experimental result on a half year Chinese stock market data indicates that the proposed algorithm can help to improve the performance of normal time series analysis in stock price forecasting significantly. Moreover, the proposed algorithm also performs well in stock price trend forecasting.",
"title": ""
},
{
"docid": "0bce954374d27d4679eb7562350674fc",
"text": "Humanoid robotics is attracting the interest of many research groups world-wide. In particular, developing humanoids requires the implementation of manipulation capabilities, which is still a most complex problem in robotics. This paper presents an overview of current activities in the development of humanoid robots, with special focus on manipulation. Then we discuss our current approach to the design and development of anthropomorphic sensorized hand and of anthropomorphic control and sensory-motor coordination schemes. Current achievements in the development of a robotic human hand prosthesis are described, together with preliminary experimental results, as well as in the implementation of biologically-inspired schemes for control and sensory-motor co-ordination in manipulation, derived from models of well-identified human brain areas.",
"title": ""
},
{
"docid": "269c1cb7fe42fd6403733fdbd9f109e3",
"text": "Myofibroblasts are the key players in extracellular matrix remodeling, a core phenomenon in numerous devastating fibrotic diseases. Not only in organ fibrosis, but also the pivotal role of myofibroblasts in tumor progression, invasion and metastasis has recently been highlighted. Myofibroblast targeting has gained tremendous attention in order to inhibit the progression of incurable fibrotic diseases, or to limit the myofibroblast-induced tumor progression and metastasis. In this review, we outline the origin of myofibroblasts, their general characteristics and functions during fibrosis progression in three major organs: liver, kidneys and lungs as well as in cancer. We will then discuss the state-of-the art drug targeting technologies to myofibroblasts in context of the above-mentioned organs and tumor microenvironment. The overall objective of this review is therefore to advance our understanding in drug targeting to myofibroblasts, and concurrently identify opportunities and challenges for designing new strategies to develop novel diagnostics and therapeutics against fibrosis and cancer.",
"title": ""
}
] | scidocsrr |
d2948c21194cbc2254fd8603d3702a81 | RaptorX-Property: a web server for protein structure property prediction | [
{
"docid": "44bd234a8999260420bb2a07934887af",
"text": "T e purpose of this review is to assess the nature and magnitudes of the dominant forces in protein folding. Since proteins are only marginally stable at room temperature,’ no type of molecular interaction is unimportant, and even small interactions can contribute significantly (positively or negatively) to stability (Alber, 1989a,b; Matthews, 1987a,b). However, the present review aims to identify only the largest forces that lead to the structural features of globular proteins: their extraordinary compactness, their core of nonpolar residues, and their considerable amounts of internal architecture. This review explores contributions to the free energy of folding arising from electrostatics (classical charge repulsions and ion pairing), hydrogen-bonding and van der Waals interactions, intrinsic propensities, and hydrophobic interactions. An earlier review by Kauzmann (1959) introduced the importance of hydrophobic interactions. His insights were particularly remarkable considering that he did not have the benefit of known protein structures, model studies, high-resolution calorimetry, mutational methods, or force-field or statistical mechanical results. The present review aims to provide a reassessment of the factors important for folding in light of current knowledge. Also considered here are the opposing forces, conformational entropy and electrostatics. The process of protein folding has been known for about 60 years. In 1902, Emil Fischer and Franz Hofmeister independently concluded that proteins were chains of covalently linked amino acids (Haschemeyer & Haschemeyer, 1973) but deeper understanding of protein structure and conformational change was hindered because of the difficulty in finding conditions for solubilization. Chick and Martin (191 1) were the first to discover the process of denaturation and to distinguish it from the process of aggregation. By 1925, the denaturation process was considered to be either hydrolysis of the peptide bond (Wu & Wu, 1925; Anson & Mirsky, 1925) or dehydration of the protein (Robertson, 1918). The view that protein denaturation was an unfolding process was",
"title": ""
},
{
"docid": "5a1f4efc96538c1355a2742f323b7a0e",
"text": "A great challenge in the proteomics and structural genomics era is to predict protein structure and function, including identification of those proteins that are partially or wholly unstructured. Disordered regions in proteins often contain short linear peptide motifs (e.g., SH3 ligands and targeting signals) that are important for protein function. We present here DisEMBL, a computational tool for prediction of disordered/unstructured regions within a protein sequence. As no clear definition of disorder exists, we have developed parameters based on several alternative definitions and introduced a new one based on the concept of \"hot loops,\" i.e., coils with high temperature factors. Avoiding potentially disordered segments in protein expression constructs can increase expression, foldability, and stability of the expressed protein. DisEMBL is thus useful for target selection and the design of constructs as needed for many biochemical studies, particularly structural biology and structural genomics projects. The tool is freely available via a web interface (http://dis.embl.de) and can be downloaded for use in large-scale studies.",
"title": ""
}
] | [
{
"docid": "f1e5e00fe3a0610c47918de526e87dc6",
"text": "The current paper reviews research that has explored the intergenerational effects of the Indian Residential School (IRS) system in Canada, in which Aboriginal children were forced to live at schools where various forms of neglect and abuse were common. Intergenerational IRS trauma continues to undermine the well-being of today's Aboriginal population, and having a familial history of IRS attendance has also been linked with more frequent contemporary stressor experiences and relatively greater effects of stressors on well-being. It is also suggested that familial IRS attendance across several generations within a family appears to have cumulative effects. Together, these findings provide empirical support for the concept of historical trauma, which takes the perspective that the consequences of numerous and sustained attacks against a group may accumulate over generations and interact with proximal stressors to undermine collective well-being. As much as historical trauma might be linked to pathology, it is not possible to go back in time to assess how previous traumas endured by Aboriginal peoples might be related to subsequent responses to IRS trauma. Nonetheless, the currently available research demonstrating the intergenerational effects of IRSs provides support for the enduring negative consequences of these experiences and the role of historical trauma in contributing to present day disparities in well-being.",
"title": ""
},
{
"docid": "c38dc288a59e39785dfa87f46d2371e5",
"text": "Silver molybdate (Ag2MoO4) and silver tungstate (Ag2WO4) nanomaterials were prepared using two complementary methods, microwave assisted hydrothermal synthesis (MAH) (pH 7, 140 °C) and coprecipitation (pH 4, 70 °C), and were then used to prepare two core/shell composites, namely α-Ag2WO4/β-Ag2MoO4 (MAH, pH 4, 140 °C) and β-Ag2MoO4/β-Ag2WO4 (coprecipitation, pH 4, 70 °C). The shape and size of the microcrystals were observed by field emission scanning electron microscopy (FE-SEM), different morphologies such as balls and nanorods. These powders were characterized by X-ray powder diffraction and UV-vis (diffuse reflectance and photoluminescence). X-ray diffraction patterns showed that the Ag2MoO4 samples obtained by the two methods were single-phased and belonged to the β-Ag2MoO4 structure (spinel type). In contrast, the Ag2WO4 obtained in the two syntheses were structurally different: MAH exhibited the well-known tetrameric stable structure α-Ag2WO4, while coprecipitation afforded the metastable β-Ag2WO4 allotrope, coexisting with a weak amount of the α-phase. The optical gap of β-Ag2WO4 (3.3 eV) was evaluated for the first time. In contrast to β-Ag2MoO4/β-Ag2WO4, the αAg2WO4/β-Ag2MoO4 exhibited strongly-enhanced photoluminescence in the low-energy band (650 nm), tentatively explained by the creation of a large density of local defects (distortions) at the core-shell interface, due to the presence of two different types of MOx polyhedra in the two structures.",
"title": ""
},
{
"docid": "d8938884a61e7c353d719dbbb65d00d0",
"text": "Image encryption plays an important role to ensure confidential transmission and storage of image over internet. However, a real–time image encryption faces a greater challenge due to large amount of data involved. This paper presents a review on image encryption techniques of both full encryption and partial encryption schemes in spatial, frequency and hybrid domains.",
"title": ""
},
{
"docid": "ce63aad5288d118eb6ca9d99b96e9cac",
"text": "Unknown malware has increased dramatically, but the existing security software cannot identify them effectively. In this paper, we propose a new malware detection and classification method based on n-grams attribute similarity. We extract all n-grams of byte codes from training samples and select the most relevant as attributes. After calculating the average value of attributes in malware and benign separately, we determine a test sample is malware or benign by attribute similarity between attributes of the test sample and the two average attributes of malware and benign. We compare our method with a variety of machine learning methods, including Naïve Bayes, Bayesian Networks, Support Vector Machine and C4.5 Decision Tree. Experimental results on public (Open Malware Benchmark) and private (self-collected) datasets both reveal that our method outperforms the other four methods.",
"title": ""
},
{
"docid": "c00c6539b78ed195224063bcff16fb12",
"text": "Information Retrieval (IR) systems assist users in finding information from the myriad of information resources available on the Web. A traditional characteristic of IR systems is that if different users submit the same query, the system would yield the same list of results, regardless of the user. Personalised Information Retrieval (PIR) systems take a step further to better satisfy the user’s specific information needs by providing search results that are not only of relevance to the query but are also of particular relevance to the user who submitted the query. PIR has thereby attracted increasing research and commercial attention as information portals aim at achieving user loyalty by improving their performance in terms of effectiveness and user satisfaction. In order to provide a personalised service, a PIR system maintains information about the users and the history of their interactions with the system. This information is then used to adapt the users’ queries or the results so that information that is more relevant to the users is retrieved and presented. This survey paper features a critical review of PIR systems, with a focus on personalised search. The survey provides an insight into the stages involved in building and evaluating PIR systems, namely: information gathering, information representation, personalisation execution, and system evaluation. Moreover, the survey provides an analysis of PIR systems with respect to the scope of personalisation addressed. The survey proposes a classification of PIR systems into three scopes: individualised systems, community-based systems, and aggregate-level systems. Based on the conducted survey, the paper concludes by highlighting challenges and future research directions in the field of PIR.",
"title": ""
},
{
"docid": "d6707c10e68dcbb5cde0920631bdaf8b",
"text": "Game playing has been an important testbed for artificial intelligence. Board games, first-person shooters, and real-time strategy games have well-defined win conditions and rely on strong feedback from a simulated environment. Text adventures require natural language understanding to progress through the game but still have an underlying simulated environment. In this paper, we propose tabletop roleplaying games as a challenge due to an infinite action space, multiple (collaborative) players and models of the world, and no explicit reward signal. We present an approach for reinforcement learning agents that can play tabletop roleplaying games.",
"title": ""
},
{
"docid": "5411326f95abd20a141ad9e9d3ff72bf",
"text": "media files and almost universal use of email, information sharing is almost instantaneous anywhere in the world. Because many of the procedures performed in dentistry represent established protocols that should be read, learned and then practiced, it becomes clear that photography aids us in teaching or explaining to our patients what we think are common, but to them are complex and mysterious procedures. Clinical digital photography. Part 1: Equipment and basic documentation",
"title": ""
},
{
"docid": "ce174b6dce6e2dee62abca03b4a95112",
"text": "This article proposes a novel framework for representing and measuring local coherence. Central to this approach is the entity-grid representation of discourse, which captures patterns of entity distribution in a text. The algorithm introduced in the article automatically abstracts a text into a set of entity transition sequences and records distributional, syntactic, and referential information about discourse entities. We re-conceptualize coherence assessment as a learning task and show that our entity-based representation is well-suited for ranking-based generation and text classification tasks. Using the proposed representation, we achieve good performance on text ordering, summary coherence evaluation, and readability assessment.",
"title": ""
},
{
"docid": "3f33882e4bece06e7a553eb9133f8aa9",
"text": "Research on the relationship between affect and cognition in Artificial Intelligence in Education (AIEd) brings an important dimension to our understanding of how learning occurs and how it can be facilitated. Emotions are crucial to learning, but their nature, the conditions under which they occur, and their exact impact on learning for different learners in diverse contexts still needs to be mapped out. The study of affect during learning can be challenging, because emotions are subjective, fleeting phenomena that are often difficult for learners to report accurately and for observers to perceive reliably. Context forms an integral part of learners’ affect and the study thereof. This review provides a synthesis of the current knowledge elicitation methods that are used to aid the study of learners’ affect and to inform the design of intelligent technologies for learning. Advantages and disadvantages of the specific methods are discussed along with their respective potential for enhancing research in this area, and issues related to the interpretation of data that emerges as the result of their use. References to related research are also provided together with illustrative examples of where the individual methods have been used in the past. Therefore, this review is intended as a resource for methodological decision making for those who want to study emotions and their antecedents in AIEd contexts, i.e. where the aim is to inform the design and implementation of an intelligent learning environment or to evaluate its use and educational efficacy.",
"title": ""
},
{
"docid": "cd877197b06304b379d5caf9b5b89d30",
"text": "Research is now required on factors influencing adults' sedentary behaviors, and effective approaches to behavioral-change intervention must be identified. The strategies for influencing sedentary behavior will need to be informed by evidence on the most important modifiable behavioral determinants. However, much of the available evidence relevant to understanding the determinants of sedentary behaviors is from cross-sectional studies, which are limited in that they identify only behavioral \"correlates.\" As is the case for physical activity, a behavior- and context-specific approach is needed to understand the multiple determinants operating in the different settings within which these behaviors are most prevalent. To this end, an ecologic model of sedentary behaviors is described, highlighting the behavior settings construct. The behaviors and contexts of primary concern are TV viewing and other screen-focused behaviors in domestic environments, prolonged sitting in the workplace, and time spent sitting in automobiles. Research is needed to clarify the multiple levels of determinants of prolonged sitting time, which are likely to operate in distinct ways in these different contexts. Controlled trials on the feasibility and efficacy of interventions to reduce and break up sedentary behaviors among adults in domestic, workplace, and transportation environments are particularly required. It would be informative for the field to have evidence on the outcomes of \"natural experiments,\" such as the introduction of nonseated working options in occupational environments or new transportation infrastructure in communities.",
"title": ""
},
{
"docid": "0e521af53f9faf4fee38843a22ec2185",
"text": "Steering of main beam of radiation at fixed millimeter wave frequency in a Substrate Integrated Waveguide (SIW) Leaky Wave Antenna (LWA) has not been investigated so far in literature. In this paper a Half-Mode Substrate Integrated Waveguide (HMSIW) LWA is proposed which has the capability to steer its main beam at fixed millimeter wave frequency of 24GHz. Beam steering is made feasible by changing the capacitance of the capacitors, connected at the dielectric side of HMSIW. The full wave EM simulations show that the main beam scans from 36° to 57° in the first quadrant.",
"title": ""
},
{
"docid": "fb4630a6b558ac9b8d8444275e1978e3",
"text": "Relational graphs are widely used in modeling large scale networks such as biological networks and social networks. In this kind of graph, connectivity becomes critical in identifying highly associated groups and clusters. In this paper, we investigate the issues of mining closed frequent graphs with connectivity constraints in massive relational graphs where each graph has around 10K nodes and 1M edges. We adopt the concept of edge connectivity and apply the results from graph theory, to speed up the mining process. Two approaches are developed to handle different mining requests: CloseCut, a pattern-growth approach, and splat, a pattern-reduction approach. We have applied these methods in biological datasets and found the discovered patterns interesting.",
"title": ""
},
{
"docid": "12a8d007ca4dce21675ddead705c7b62",
"text": "This paper presents an ethnographic account of the implementation of Lean service redesign methodologies in one UK NHS hospital operating department. It is suggested that this popular management 'technology', with its emphasis on creating value streams and reducing waste, has the potential to transform the social organisation of healthcare work. The paper locates Lean healthcare within wider debates related to the standardisation of clinical practice, the re-configuration of occupational boundaries and the stratification of clinical communities. Drawing on the 'technologies-in-practice' perspective the study is attentive to the interaction of both the intent to transform work and the response of clinicians to this intent as an ongoing and situated social practice. In developing this analysis this article explores three dimensions of social practice to consider the way Lean is interpreted and articulated (rhetoric), enacted in social practice (ritual), and experienced in the context of prevailing lines of power (resistance). Through these interlinked analytical lenses the paper suggests the interaction of Lean and clinical practice remains contingent and open to negotiation. In particular, Lean follows in a line of service improvements that bring to the fore tensions between clinicians and service leaders around the social organisation of healthcare work. The paper concludes that Lean might not be the easy remedy for making both efficiency and effectiveness improvements in healthcare.",
"title": ""
},
{
"docid": "cb70ab2056242ca739adde4751fbca2c",
"text": "In this paper, we consider the task of learning control policies for text-based games. In these games, all interactions in the virtual world are through text and the underlying state is not observed. The resulting language barrier makes such environments challenging for automatic game players. We employ a deep reinforcement learning framework to jointly learn state representations and action policies using game rewards as feedback. This framework enables us to map text descriptions into vector representations that capture the semantics of the game states. We evaluate our approach on two game worlds, comparing against baselines using bag-ofwords and bag-of-bigrams for state representations. Our algorithm outperforms the baselines on both worlds demonstrating the importance of learning expressive representations. 1",
"title": ""
},
{
"docid": "b81b29c232fb9cb5dcb2dd7e31003d77",
"text": "Attendance and academic success are directly related in educational institutions. The continual absence of students in lecture, practical and tutorial is one of the major problems of decadence in the performance of academic. The authorized person needs to prohibit truancy for solving the problem. In existing system, the attendance is recorded by calling of the students’ name, signing on paper, using smart card and so on. These methods are easy to fake and to give proxy for the absence student. For solving inconvenience, fingerprint based attendance system with notification to guardian is proposed. The attendance is recorded using fingerprint module and stored it to the database via SD card. This system can calculate the percentage of attendance record monthly and store the attendance record in database for one year or more. In this system, attendance is recorded two times for one day and then it will also send alert message using GSM module if the attendance of students don’t have eight times for one week. By sending the alert message to the respective individuals every week, necessary actions can be done early. It can also reduce the cost of SMS charge and also have more attention for guardians. The main components of this system are Fingerprint module, Microcontroller, GSM module and SD card with SD card module. This system has been developed using Arduino IDE, Eclipse and MySQL Server.",
"title": ""
},
{
"docid": "545509f9e3aa65921a7d6faa41247ae6",
"text": "BACKGROUND\nPenicillins inhibit cell wall synthesis; therefore, Helicobacter pylori must be dividing for this class of antibiotics to be effective in eradication therapy. Identifying growth responses to varying medium pH may allow design of more effective treatment regimens.\n\n\nAIM\nTo determine the effects of acidity on bacterial growth and the bactericidal efficacy of ampicillin.\n\n\nMETHODS\nH. pylori were incubated in dialysis chambers suspended in 1.5-L of media at various pHs with 5 mM urea, with or without ampicillin, for 4, 8 or 16 h, thus mimicking unbuffered gastric juice. Changes in gene expression, viability and survival were determined.\n\n\nRESULTS\nAt pH 3.0, but not at pH 4.5 or 7.4, there was decreased expression of ~400 genes, including many cell envelope biosynthesis, cell division and penicillin-binding protein genes. Ampicillin was bactericidal at pH 4.5 and 7.4, but not at pH 3.0.\n\n\nCONCLUSIONS\nAmpicillin is bactericidal at pH 4.5 and 7.4, but not at pH 3.0, due to decreased expression of cell envelope and division genes with loss of cell division at pH 3.0. Therefore, at pH 3.0, the likely pH at the gastric surface, the bacteria are nondividing and persist with ampicillin treatment. A more effective inhibitor of acid secretion that maintains gastric pH near neutrality for 24 h/day should enhance the efficacy of amoxicillin, improving triple therapy and likely even allowing dual amoxicillin-based therapy for H. pylori eradication.",
"title": ""
},
{
"docid": "38f289b085f2c6e2d010005f096d8fd7",
"text": "We present easy-to-use TensorFlow Hub sentence embedding models having good task transfer performance. Model variants allow for trade-offs between accuracy and compute resources. We report the relationship between model complexity, resources, and transfer performance. Comparisons are made with baselines without transfer learning and to baselines that incorporate word-level transfer. Transfer learning using sentence-level embeddings is shown to outperform models without transfer learning and often those that use only word-level transfer. We show good transfer task performance with minimal training data and obtain encouraging results on word embedding association tests (WEAT) of model bias.",
"title": ""
},
{
"docid": "7d14bd767964cba3cfc152ee20c7ffbc",
"text": "Most typical statistical and machine learning approaches to time series modeling optimize a singlestep prediction error. In multiple-step simulation, the learned model is iteratively applied, feeding through the previous output as its new input. Any such predictor however, inevitably introduces errors, and these compounding errors change the input distribution for future prediction steps, breaking the train-test i.i.d assumption common in supervised learning. We present an approach that reuses training data to make a no-regret learner robust to errors made during multi-step prediction. Our insight is to formulate the problem as imitation learning; the training data serves as a “demonstrator” by providing corrections for the errors made during multi-step prediction. By this reduction of multistep time series prediction to imitation learning, we establish theoretically a strong performance guarantee on the relation between training error and the multi-step prediction error. We present experimental results of our method, DAD, and show significant improvement over the traditional approach in two notably different domains, dynamic system modeling and video texture prediction. Determining models for time series data is important in applications ranging from market prediction to the simulation of chemical processes and robotic systems. Many supervised learning approaches have been proposed for this task, such as neural networks (Narendra and Parthasarathy 1990), Expectation-Maximization (Ghahramani and Roweis 1999; Coates, Abbeel, and Ng 2008), Support Vector Regression (Müller, Smola, and Rätsch 1997), Gaussian process regression (Wang, Hertzmann, and Blei 2005; Ko et al. 2007), Nadaraya-Watson kernel regression (Basharat and Shah 2009), Gaussian mixture models (Khansari-Zadeh and Billard 2011), and Kernel PCA (Ralaivola and D’Alche-Buc 2004). Common to most of these methods is that the objective being optimized is the single-step prediction loss. However, this criterion does not guarantee accurate multiple-step simulation accuracy in which the output of a prediction step is used as input for the next inference. The prevalence of single-step modeling approaches is a result of the difficulty in directly optimizing the multipleCopyright c © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. step prediction error. As an example, consider fitting a simple linear dynamical system model for the multi-step error over the time horizon T from an initial condition x0,",
"title": ""
},
{
"docid": "dd3781fe97c7dd935948c55584313931",
"text": "The radiation of RFID antitheft gate system has been simulated in FEKO. The obtained numerical results for the electric field and magnetic field have been compared to the exposure limits proposed by the ICNIRP Guidelines. No significant violation of limits, regarding both occupational and public exposure, has been shown.",
"title": ""
},
{
"docid": "53b32cdb6c3d511180d8cb194c286ef5",
"text": "Silymarin, a C25 containing flavonoid from the plant Silybum marianum, has been the gold standard drug to treat liver disorders associated with alcohol consumption, acute and chronic viral hepatitis, and toxin-induced hepatic failures since its discovery in 1960. Apart from the hepatoprotective nature, which is mainly due to its antioxidant and tissue regenerative properties, Silymarin has recently been reported to be a putative neuroprotective agent against many neurologic diseases including Alzheimer's and Parkinson's diseases, and cerebral ischemia. Although the underlying neuroprotective mechanism of Silymarin is believed to be due to its capacity to inhibit oxidative stress in the brain, it also confers additional advantages by influencing pathways such as β-amyloid aggregation, inflammatory mechanisms, cellular apoptotic machinery, and estrogenic receptor mediation. In this review, we have elucidated the possible neuroprotective effects of Silymarin and the underlying molecular events, and suggested future courses of action for its acceptance as a CNS drug for the treatment of neurodegenerative diseases.",
"title": ""
}
] | scidocsrr |
e1340c9d28265bce016b4422fc1d0ecc | Multiagent Reinforcement Learning for Integrated Network of Adaptive Traffic Signal Controllers (MARLIN-ATSC): Methodology and Large-Scale Application on Downtown Toronto | [
{
"docid": "931e6f034abd1a3004d021492382a47a",
"text": "SARSA (Sutton, 1996) is applied to a simulated, traac-light control problem (Thorpe, 1997) and its performance is compared with several, xed control strategies. The performance of SARSA with four diierent representations of the current state of traac is analyzed using two reinforcement schemes. Training on one intersection is compared to, and is as eeective as training on all intersections in the environment. SARSA is shown to be better than xed-duration light timing and four-way stops for minimizing total traac travel time, individual vehicle travel times, and vehicle wait times. Comparisons of performance using a constant reinforcement function versus a variable reinforcement function dependent on the number of vehicles at an intersection showed that the variable reinforcement resulted in slightly improved performance for some cases.",
"title": ""
}
] | [
{
"docid": "7933e531385d90a6b485abe155f06e3a",
"text": "We propose a localized approach to multiple kernel learning that can be formulated as a convex optimization problem over a given cluster structure. For which we obtain generalization error guarantees and derive an optimization algorithm based on the Fenchel dual representation. Experiments on real-world datasets from the application domains of computational biology and computer vision show that convex localized multiple kernel learning can achieve higher prediction accuracies than its global and non-convex local counterparts.",
"title": ""
},
{
"docid": "7ebff2391401cef25b27d510675e9acd",
"text": "We present a new approach for modeling multi-modal data sets, focusing on the specific case of segmented images with associated text. Learning the joint distribution of image regions and words has many applications. We consider in detail predicting words associated with whole images (auto-annotation) and corresponding to particular image regions (region naming). Auto-annotation might help organize and access large collections of images. Region naming is a model of object recognition as a process of translating image regions to words, much as one might translate from one language to another. Learning the relationships between image regions and semantic correlates (words) is an interesting example of multi-modal data mining, particularly because it is typically hard to apply data mining techniques to collections of images. We develop a number of models for the joint distribution of image regions and words, including several which explicitly learn the correspondence between regions and words. We study multi-modal and correspondence extensions to Hofmann’s hierarchical clustering/aspect model, a translation model adapted from statistical machine translation (Brown et al.), and a multi-modal extension to mixture of latent Dirichlet allocation (MoM-LDA). All models are assessed using a large collection of annotated images of real c ©2003 Kobus Barnard, Pinar Duygulu, David Forsyth, Nando de Freitas, David Blei and Michael Jordan. BARNARD, DUYGULU, FORSYTH, DE FREITAS, BLEI AND JORDAN scenes. We study in depth the difficult problem of measuring performance. For the annotation task, we look at prediction performance on held out data. We present three alternative measures, oriented toward different types of task. Measuring the performance of correspondence methods is harder, because one must determine whether a word has been placed on the right region of an image. We can use annotation performance as a proxy measure, but accurate measurement requires hand labeled data, and thus must occur on a smaller scale. We show results using both an annotation proxy, and manually labeled data.",
"title": ""
},
{
"docid": "7210c2e82441b142f722bcc01bfe9aca",
"text": "In the beginning of the last decade, agile methodologies emerged as a response to software development processes that were based on rigid approaches. In fact, the flexible characteristics of agile methods are expected to be suitable to the less-defined and uncertain nature of software development. However, many studies in this area lack empirical evaluation in order to provide more confident evidences about which contexts the claims are true. This paper reports an empirical study performed to analyze the impact of Scrum adoption on customer satisfaction as an external success perspective for software development projects in a software intensive organization. The study uses data from real-life projects executed in a major software intensive organization located in a nation wide software ecosystem. The empirical method applied was a cross-sectional survey using a sample of 19 real-life software development projects involving 156 developers. The survey aimed to determine whether there is any impact on customer satisfaction caused by the Scrum adoption. However, considering that sample, our results indicate that it was not possible to establish any evidence that using Scrum may help to achieve customer satisfaction and, consequently, increase the success rates in software projects, in contrary to general claims made by Scrum's advocates.",
"title": ""
},
{
"docid": "7c5d0139d729ad6f90332a9d1cd28f70",
"text": "Cloud based ERP system architecture provides solutions to all the difficulties encountered by conventional ERP systems. It provides flexibility to the existing ERP systems and improves overall efficiency. This paper aimed at comparing the performance traditional ERP systems with cloud base ERP architectures. The challenges before the conventional ERP implementations are analyzed. All the main aspects of an ERP systems are compared with cloud based approach. The distinct advantages of cloud ERP are explained. The difficulties in cloud architecture are also mentioned.",
"title": ""
},
{
"docid": "cec6b4d1e547575a91bdd7e852ecbc3c",
"text": "The apps installed on a smartphone can reveal much information about a user, such as their medical conditions, sexual orientation, or religious beliefs. In addition, the presence or absence of particular apps on a smartphone can inform an adversary, who is intent on attacking the device. In this paper, we show that a passive eavesdropper can feasibly identify smartphone apps by fingerprinting the network traffic that they send. Although SSL/TLS hides the payload of packets, side-channel data, such as packet size and direction is still leaked from encrypted connections. We use machine learning techniques to identify smartphone apps from this side-channel data. In addition to merely fingerprinting and identifying smartphone apps, we investigate how app fingerprints change over time, across devices, and across different versions of apps. In addition, we introduce strategies that enable our app classification system to identify and mitigate the effect of ambiguous traffic, i.e., traffic in common among apps, such as advertisement traffic. We fully implemented a framework to fingerprint apps and ran a thorough set of experiments to assess its performance. We fingerprinted 110 of the most popular apps in the Google Play Store and were able to identify them six months later with up to 96% accuracy. Additionally, we show that app fingerprints persist to varying extents across devices and app versions.",
"title": ""
},
{
"docid": "382eec3778d98cb0c8445633c16f59ef",
"text": "In the face of acute global competition, supplier management is rapidly emerging as a crucial issue to any companies striving for business success and sustainable development. To optimise competitive advantages, a company should incorporate ‘suppliers’ as an essential part of its core competencies. Supplier evaluation, the first step in supplier management, is a complex multiple criteria decision making (MCDM) problem, and its complexity is further aggravated if the highly important interdependence among the selection criteria is taken into consideration. The objective of this paper is to suggest a comprehensive decision method for identifying top suppliers by considering the effects of interdependence among the selection criteria. Proposed in this study is a hybrid model, which incorporates the technique of analytic network process (ANP) in which criteria weights are determined using fuzzy extent analysis, Technique for order performance by similarity to ideal solution (TOPSIS) under fuzzy environment is adopted to rank competing suppliers in terms of their overall performances. An example is solved to illustrate the effectiveness and feasibility of the suggested model.",
"title": ""
},
{
"docid": "e444dcc97882005658aca256991e816e",
"text": "The terms superordinate, hyponym, and subordinate designate the hierarchical taxonomic relationship of words. They also represent categories and concepts. This relationship is a subject of interest for anthropology, cognitive psychology, psycholinguistics, linguistic semantics, and cognitive linguistics. Taxonomic hierarchies are essentially classificatory systems, and they are supposed to reflect the way that speakers of a language categorize the world of experience. A well-formed taxonomy offers an orderly and efficient set of categories at different levels of specificity (Cruse 2000:180). However, the terms and levels of taxonomic hierarchy used in each discipline vary. This makes it difficult to carry out cross-disciplinary readings on the hierarchical taxonomy of words or categories, which act as an interface in these cognitive-based cross-disciplinary ventures. Not only words— terms and concepts differ but often the nature of the problem is compounded as some terms refer to differing word classes, categories and concepts at the same time. Moreover, the lexical relationship of terms among these lexical hierarchies is far from clear. As a result two lines of thinking can be drawn from the literature: (1) technical terms coined for the hierarchical relationship of words are conflicting and do not reflect reality or environment, and (2) the relationship among these hierarchies of word levels and the underlying principles followed to explain them are uncertain except that of inclusion.",
"title": ""
},
{
"docid": "b6fdde5d6baeb546fd55c749af14eec1",
"text": "Action recognition is an important research problem of human motion analysis (HMA). In recent years, 3D observation-based action recognition has been receiving increasing interest in the multimedia and computer vision communities, due to the recent advent of cost-effective sensors, such as depth camera Kinect. This work takes this one step further, focusing on early recognition of ongoing 3D human actions, which is beneficial for a large variety of time-critical applications, e.g., gesture-based human machine interaction, somatosensory games, and so forth. Our goal is to infer the class label information of 3D human actions with partial observation of temporally incomplete action executions. By considering 3D action data as multivariate time series (m.t.s.) synchronized to a shared common clock (frames), we propose a stochastic process called dynamic marked point process (DMP) to model the 3D action as temporal dynamic patterns, where both timing and strength information are captured. To achieve even more early and better accuracy of recognition, we also explore the temporal dependency patterns between feature dimensions. A probabilistic suffix tree is constructed to represent sequential patterns among features in terms of the variable-order Markov model (VMM). Our approach and several baselines are evaluated on five 3D human action datasets. Extensive results show that our approach achieves superior performance for early recognition of 3D human actions.",
"title": ""
},
{
"docid": "4e23da50d4f1f0c4ecdbbf5952290c98",
"text": "[Context and motivation] User stories are an increasingly popular textual notation to capture requirements in agile software development. [Question/Problem] To date there is no scientific evidence on the effectiveness of user stories. The goal of this paper is to explore how practicioners perceive this artifact in the context of requirements engineering. [Principal ideas/results] We explore perceived effectiveness of user stories by reporting on a survey with 182 responses from practitioners and 21 follow-up semi-structured interviews. The data shows that practitioners agree that using user stories, a user story template and quality guidelines such as the INVEST mnemonic improve their productivity and the quality of their work deliverables. [Contribution] By combining the survey data with 21 semi-structured follow-up interviews, we present 12 findings on the usage and perception of user stories by practitioners that employ user stories in their everyday work environment.",
"title": ""
},
{
"docid": "d9eed063ea6399a8f33c6cbda3a55a62",
"text": "Current and future (conventional) notations used in Conceptual Modeling Techniques should have a precise (formal) semantics to provide a well-defined software development process, in order to go from specification to implementation in an automated way. To achieve this objective, the OO-Method approach to Information Systems Modeling presented in this paper attempts to overcome the conventional (informal)/formal dichotomy by selecting the best ideas from both approaches. The OO-Method makes a clear distinction between the problem space (centered on what the system is) and the solution space (centered on how it is implemented as a software product). It provides a precise, conventional graphical notation to obtain a system description at the problem space level, however this notation is strictly based on a formal OO specification language that determines the conceptual modeling constructs needed to obtain the system specification. An abstract execution model determines how to obtain the software representations corresponding to these conceptual modeling constructs. In this way, the final software product can be obtained in an automated way. r 2001 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "35f74f11a60ad58171b74e755cd0476b",
"text": "Recent studies show that the performances of face recognition systems degrade in presence of makeup on face. In this paper, a facial makeup detector is proposed to further reduce the impact of makeup in face recognition. The performance of the proposed technique is tested using three publicly available facial makeup databases. The proposed technique extracts a feature vector that captures the shape and texture characteristics of the input face. After feature extraction, two types of classifiers (i.e. SVM and Alligator) are applied for comparison purposes. In this study, we observed that both classifiers provide significant makeup detection accuracy. There are only few studies regarding facial makeup detection in the state-of-the art. The proposed technique is novel and outperforms the state-of-the art significantly.",
"title": ""
},
{
"docid": "1301030c091eeb23d43dd3bfa6763e77",
"text": "A new system for web attack detection is presented. It follows the anomaly-based approach, therefore known and unknown attacks can be detected. The system relies on a XML file to classify the incoming requests as normal or anomalous. The XML file, which is built from only normal traffic, contains a description of the normal behavior of the target web application statistically characterized. Any request which deviates from the normal behavior is considered an attack. The system has been applied to protect a real web application. An increasing number of training requests have been used to train the system. Experiments show that when the XML file has enough information to closely characterize the normal behavior of the target web application, a very high detection rate is reached while the false alarm rate remains very low.",
"title": ""
},
{
"docid": "bb49674d0a1f36e318d27525b693e51d",
"text": "prevent attackers from gaining control of the system using well established techniques such as; perimeter-based fire walls, redundancy and replications, and encryption. However, given sufficient time and resources, all these methods can be defeated. Moving Target Defense (MTD), is a defensive strategy that aims to reduce the need to continuously fight against attacks by disrupting attackers gain-loss balance. We present Mayflies, a bio-inspired generic MTD framework for distributed systems on virtualized cloud platforms. The framework enables systems designed to defend against attacks for their entire runtime to systems that avoid attacks in time intervals. We discuss the design, algorithms and the implementation of the framework prototype. We illustrate the prototype with a quorum-based Byzantime Fault Tolerant system and report the preliminary results.",
"title": ""
},
{
"docid": "41e3ec35f9ca27eef6e70c963628281e",
"text": "An emerging problem in computer vision is the reconstruction of 3D shape and pose of an object from a single image. Hitherto, the problem has been addressed through the application of canonical deep learning methods to regress from the image directly to the 3D shape and pose labels. These approaches, however, are problematic from two perspectives. First, they are minimizing the error between 3D shapes and pose labels - with little thought about the nature of this “label error” when reprojecting the shape back onto the image. Second, they rely on the onerous and ill-posed task of hand labeling natural images with respect to 3D shape and pose. In this paper we define the new task of pose-aware shape reconstruction from a single image, and we advocate that cheaper 2D annotations of objects silhouettes in natural images can be utilized. We design architectures of pose-aware shape reconstruction which reproject the predicted shape back on to the image using the predicted pose. Our evaluation on several object categories demonstrates the superiority of our method for predicting pose-aware 3D shapes from natural images.",
"title": ""
},
{
"docid": "464f7d25cb2a845293a3eb8c427f872f",
"text": "Autism spectrum disorder is the fastest growing developmental disability in the United States. As such, there is an unprecedented need for research examining factors contributing to the health disparities in this population. This research suggests a relationship between the levels of physical activity and health outcomes. In fact, excessive sedentary behavior during early childhood is associated with a number of negative health outcomes. A total of 53 children participated in this study, including typically developing children (mean age = 42.5 ± 10.78 months, n = 19) and children with autism spectrum disorder (mean age = 47.42 ± 12.81 months, n = 34). The t-test results reveal that children with autism spectrum disorder spent significantly less time per day in sedentary behavior when compared to the typically developing group ( t(52) = 4.57, p < 0.001). Furthermore, the results from the general linear model reveal that there is no relationship between motor skills and the levels of physical activity. The ongoing need for objective measurement of physical activity in young children with autism spectrum disorder is of critical importance as it may shed light on an often overlooked need for early community-based interventions to increase physical activity early on in development.",
"title": ""
},
{
"docid": "139adbef378fa0b195477e75d4d71e12",
"text": "Alu elements are primate-specific repeats and comprise 11% of the human genome. They have wide-ranging influences on gene expression. Their contribution to genome evolution, gene regulation and disease is reviewed.",
"title": ""
},
{
"docid": "9573c50b4cd5dfdcabd09676a757d06f",
"text": "Fall detection is a major challenge in the public healthcare domain, especially for the elderly as the decline of their physical fitness, and timely and reliable surveillance is necessary to mitigate the negative effects of falls. This paper develops a novel fall detection system based on a wearable device. The system monitors the movements of human body, recognizes a fall from normal daily activities by an effective quaternion algorithm, and automatically sends request for help to the caregivers with the patient's location.",
"title": ""
},
{
"docid": "4075eb657e87ad13e0f47ab36d33df54",
"text": "MOTIVATION\nControlled vocabularies such as the Medical Subject Headings (MeSH) thesaurus and the Gene Ontology (GO) provide an efficient way of accessing and organizing biomedical information by reducing the ambiguity inherent to free-text data. Different methods of automating the assignment of MeSH concepts have been proposed to replace manual annotation, but they are either limited to a small subset of MeSH or have only been compared with a limited number of other systems.\n\n\nRESULTS\nWe compare the performance of six MeSH classification systems [MetaMap, EAGL, a language and a vector space model-based approach, a K-Nearest Neighbor (KNN) approach and MTI] in terms of reproducing and complementing manual MeSH annotations. A KNN system clearly outperforms the other published approaches and scales well with large amounts of text using the full MeSH thesaurus. Our measurements demonstrate to what extent manual MeSH annotations can be reproduced and how they can be complemented by automatic annotations. We also show that a statistically significant improvement can be obtained in information retrieval (IR) when the text of a user's query is automatically annotated with MeSH concepts, compared to using the original textual query alone.\n\n\nCONCLUSIONS\nThe annotation of biomedical texts using controlled vocabularies such as MeSH can be automated to improve text-only IR. Furthermore, the automatic MeSH annotation system we propose is highly scalable and it generates improvements in IR comparable with those observed for manual annotations.",
"title": ""
},
{
"docid": "6e4dfb4c6974543246003350b5e3e07f",
"text": "Zero-shot object detection is an emerging research topic that aims to recognize and localize previously ‘unseen’ objects. This setting gives rise to several unique challenges, e.g., highly imbalanced positive vs. negative instance ratio, ambiguity between background and unseen classes and the proper alignment between visual and semantic concepts. Here, we propose an end-to-end deep learning framework underpinned by a novel loss function that puts more emphasis on difficult examples to avoid class imbalance. We call our objective the ‘Polarity loss’ because it explicitly maximizes the gap between positive and negative predictions. Such a margin maximizing formulation is important as it improves the visual-semantic alignment while resolving the ambiguity between background and unseen. Our approach is inspired by the embodiment theories in cognitive science, that claim human semantic understanding to be grounded in past experiences (seen objects), related linguistic concepts (word dictionary) and the perception of the physical world (visual imagery). To this end, we learn to attend to a dictionary of related semantic concepts that eventually refines the noisy semantic embeddings and helps establish a better synergy between visual and semantic domains. Our extensive results on MS-COCO and Pascal VOC datasets show as high as 14× mAP improvement over state of the art.1",
"title": ""
},
{
"docid": "e33e3e46a4bcaaae32a5743672476cd9",
"text": "This paper is based on the notion of data quality. It includes correctness, completeness and minimality for which a notational framework is shown. In long living databases the maintenance of data quality is a rst order issue. This paper shows that even well designed and implemented information systems cannot guarantee correct data in any circumstances. It is shown that in any such system data quality tends to decrease and therefore some data correction procedure should be applied from time to time. One aspect of increasing data quality is the correction of data values. Characteristics of a software tool which supports this data value correction process are presented and discussed.",
"title": ""
}
] | scidocsrr |
b35e238b5c76fec76d33eb3e0dae3c06 | Using trust for collaborative filtering in eCommerce | [
{
"docid": "6c3f320eda59626bedb2aad4e527c196",
"text": "Though research on the Semantic Web has progressed at a steady pace, its promise has yet to be realized. One major difficulty is that, by its very nature, the Semantic Web is a large, uncensored system to which anyone may contribute. This raises the question of how much credence to give each source. We cannot expect each user to know the trustworthiness of each source, nor would we want to assign top-down or global credibility values due to the subjective nature of trust. We tackle this problem by employing a web of trust, in which each user provides personal trust values for a small number of other users. We compose these trusts to compute the trust a user should place in any other user in the network. A user is not assigned a single trust rank. Instead, different users may have different trust values for the same user. We define properties for combination functions which merge such trusts, and define a class of functions for which merging may be done locally while maintaining these properties. We give examples of specific functions and apply them to data from Epinions and our BibServ bibliography server. Experiments confirm that the methods are robust to noise, and do not put unreasonable expectations on users. We hope that these methods will help move the Semantic Web closer to fulfilling its promise.",
"title": ""
},
{
"docid": "da63c4d9cc2f3278126490de54c34ce5",
"text": "The growth of Web-based social networking and the properties of those networks have created great potential for producing intelligent software that integrates a user's social network and preferences. Our research looks particularly at assigning trust in Web-based social networks and investigates how trust information can be mined and integrated into applications. This article introduces a definition of trust suitable for use in Web-based social networks with a discussion of the properties that will influence its use in computation. We then present two algorithms for inferring trust relationships between individuals that are not directly connected in the network. Both algorithms are shown theoretically and through simulation to produce calculated trust values that are highly accurate.. We then present TrustMail, a prototype email client that uses variations on these algorithms to score email messages in the user's inbox based on the user's participation and ratings in a trust network.",
"title": ""
}
] | [
{
"docid": "c077231164a8a58f339f80b83e5b4025",
"text": "It is widely believed that refactoring improves software quality and developer productivity. However, few empirical studies quantitatively assess refactoring benefits or investigate developers' perception towards these benefits. This paper presents a field study of refactoring benefits and challenges at Microsoft through three complementary study methods: a survey, semi-structured interviews with professional software engineers, and quantitative analysis of version history data. Our survey finds that the refactoring definition in practice is not confined to a rigorous definition of semantics-preserving code transformations and that developers perceive that refactoring involves substantial cost and risks. We also report on interviews with a designated refactoring team that has led a multi-year, centralized effort on refactoring Windows. The quantitative analysis of Windows 7 version history finds that the binary modules refactored by this team experienced significant reduction in the number of inter-module dependencies and post-release defects, indicating a visible benefit of refactoring.",
"title": ""
},
{
"docid": "6a5abcabca3d4bb0696a9f19dd5e358f",
"text": "Distributional models of meaning (see Turney and Pantel (2010) for an overview) are based on the pragmatic hypothesis that meanings of words are deducible from the contexts in which they are often used. This hypothesis is formalized using vector spaces, wherein a word is represented as a vector of cooccurrence statistics with a set of context dimensions. With the increasing availability of large corpora of text, these models constitute a well-established NLP technique for evaluating semantic similarities. Their methods however do not scale up to larger text constituents (i.e. phrases and sentences), since the uniqueness of multi-word expressions would inevitably lead to data sparsity problems, hence to unreliable vectorial representations. The problem is usually addressed by the provision of a compositional function, the purpose of which is to prepare a vector for a phrase or sentence by combining the vectors of the words therein. This line of research has led to the field of compositional distributional models of meaning (CDMs), where reliable semantic representations are provided for phrases, sentences, and discourse units such as dialogue utterances and even paragraphs or documents. As a result, these models have found applications in various NLP tasks, for example paraphrase detection; sentiment analysis; dialogue act tagging; machine translation; textual entailment; and so on, in many cases presenting stateof-the-art performance. Being the natural evolution of the traditional and well-studied distributional models at the word level, CDMs are steadily evolving to a popular and active area of NLP. The topic has inspired a number of workshops and tutorials in top CL conferences such as ACL and EMNLP, special issues at high-profile journals, and it attracts a substantial amount of submissions in annual NLP conferences. The approaches employed by CDMs are as much as diverse as statistical machine leaning (Baroni and Zamparelli, 2010), linear algebra (Mitchell and Lapata, 2010), simple category theory (Coecke et al., 2010), or complex deep learning architectures based on neural networks and borrowing ideas from image processing (Socher et al., 2012; Kalchbrenner et al., 2014; Cheng and Kartsaklis, 2015). Furthermore, they create opportunities for interesting novel research, related for example to efficient methods for creating tensors for relational words such as verbs and adjectives (Grefenstette and Sadrzadeh, 2011), the treatment of logical and functional words in a distributional setting (Sadrzadeh et al., 2013; Sadrzadeh et al., 2014), or the role of polysemy and the way it affects composition (Kartsaklis and Sadrzadeh, 2013; Cheng and Kartsaklis, 2015). The purpose of this tutorial is to provide a concise introduction to this emerging field, presenting the different classes of CDMs and the various issues related to them in sufficient detail. The goal is to allow the student to understand the general philosophy of each approach, as well as its advantages and limitations with regard to the other alternatives.",
"title": ""
},
{
"docid": "6ae4be7a85f7702ae76649d052d7c37d",
"text": "information technologies as “the ability to reformulate knowledge, to express oneself creatively and appropriately, and to produce and generate information (rather than simply to comprehend it).” Fluency, according to the report, “goes beyond traditional notions of computer literacy...[It] requires a deeper, more essential understanding and mastery of information technology for information processing, communication, and problem solving than does computer literacy as traditionally defined.” Scratch is a networked, media-rich programming environment designed to enhance the development of technological fluency at after-school centers in economically-disadvantaged communities. Just as the LEGO MindStorms robotics kit added programmability to an activity deeply rooted in youth culture (building with LEGO bricks), Scratch adds programmability to the media-rich and network-based activities that are most popular among youth at afterschool computer centers. Taking advantage of the extraordinary processing power of current computers, Scratch supports new programming paradigms and activities that were previously infeasible, making it better positioned to succeed than previous attempts to introduce programming to youth. In the past, most initiatives to improve technological fluency have focused on school classrooms. But there is a growing recognition that after-school centers and other informal learning settings can play an important role, especially in economicallydisadvantaged communities, where schools typically have few technological resources and many young people are alienated from the formal education system. Our working hypothesis is that, as kids work on personally meaningful Scratch projects such as animated stories, games, and interactive art, they will develop technological fluency, mathematical and problem solving skills, and a justifiable selfconfidence that will serve them well in the wider spheres of their lives. During the past decade, more than 2000 community technology centers (CTCs) opened in the United States, specifically to provide better access to technology in economically-disadvantaged communities. But most CTCs support only the most basic computer activities such as word processing, email, and Web browsing, so participants do not gain the type of fluency described in the NRC report. Similarly, many after-school centers (which, unlike CTCs, focus exclusively on youth) have begun to introduce computers, but they too tend to offer only introductory computer activities, sometimes augmented by educational games.",
"title": ""
},
{
"docid": "6c018b35bf2172f239b2620abab8fd2f",
"text": "Cloud computing is quickly becoming the platform of choice for many web services. Virtualization is the key underlying technology enabling cloud providers to host services for a large number of customers. Unfortunately, virtualization software is large, complex, and has a considerable attack surface. As such, it is prone to bugs and vulnerabilities that a malicious virtual machine (VM) can exploit to attack or obstruct other VMs -- a major concern for organizations wishing to move to the cloud. In contrast to previous work on hardening or minimizing the virtualization software, we eliminate the hypervisor attack surface by enabling the guest VMs to run natively on the underlying hardware while maintaining the ability to run multiple VMs concurrently. Our NoHype system embodies four key ideas: (i) pre-allocation of processor cores and memory resources, (ii) use of virtualized I/O devices, (iii) minor modifications to the guest OS to perform all system discovery during bootup, and (iv) avoiding indirection by bringing the guest virtual machine in more direct contact with the underlying hardware. Hence, no hypervisor is needed to allocate resources dynamically, emulate I/O devices, support system discovery after bootup, or map interrupts and other identifiers. NoHype capitalizes on the unique use model in cloud computing, where customers specify resource requirements ahead of time and providers offer a suite of guest OS kernels. Our system supports multiple tenants and capabilities commonly found in hosted cloud infrastructures. Our prototype utilizes Xen 4.0 to prepare the environment for guest VMs, and a slightly modified version of Linux 2.6 for the guest OS. Our evaluation with both SPEC and Apache benchmarks shows a roughly 1% performance gain when running applications on NoHype compared to running them on top of Xen 4.0. Our security analysis shows that, while there are some minor limitations with cur- rent commodity hardware, NoHype is a significant advance in the security of cloud computing.",
"title": ""
},
{
"docid": "1ebb333d5a72c649cd7d7986f5bf6975",
"text": "\"Of what a strange nature is knowledge! It clings to the mind, when it has once seized on it, like a lichen on the rock,\" Abstract We describe a theoretical system intended to facilitate the use of knowledge In an understand ing system. The notion of script is introduced to account for knowledge about mundane situations. A program, SAM, is capable of using scripts to under stand. The notion of plans is introduced to ac count for general knowledge about novel situa tions. I. Preface In an attempt to provide theory where there have been mostly unrelated systems, Minsky (1974) recently described the as fitting into the notion of \"frames.\" Minsky at tempted to relate this work, in what is essentially language processing, to areas of vision research that conform to the same notion. Mlnsky's frames paper has created quite a stir in AI and some immediate spinoff research along the lines of developing frames manipulators (e.g. Bobrow, 1975; Winograd, 1975). We find that we agree with much of what Minsky said about frames and with his characterization of our own work. The frames idea is so general, however, that It does not lend itself to applications without further specialization. This paper is an attempt to devel op further the lines of thought set out in Schank (1975a) and Abelson (1973; 1975a). The ideas pre sented here can be viewed as a specialization of the frame idea. We shall refer to our central constructs as \"scripts.\" II. The Problem Researchers in natural language understanding have felt for some time that the eventual limit on the solution of our problem will be our ability to characterize world knowledge. Various researchers have approached world knowledge in various ways. Winograd (1972) dealt with the problem by severely restricting the world. This approach had the po sitive effect of producing a working system and the negative effect of producing one that was only minimally extendable. Charniak (1972) approached the problem from the other end entirely and has made some interesting first steps, but because his work is not grounded in any representational sys tem or any working computational system the res triction of world knowledge need not critically concern him. Our feeling is that an effective characteri zation of knowledge can result in a real under standing system in the not too distant future. We expect that programs based on the theory we out …",
"title": ""
},
{
"docid": "8a5bbfcb8084c0b331e18dcf64cdf915",
"text": "This paper describes wildcards, a new language construct designed to increase the flexibility of object-oriented type systems with parameterized classes. Based on the notion of use-site variance, wildcards provide a type safe abstraction over different instantiations of parameterized classes, by using '?' to denote unspecified type arguments. Thus they essentially unify the distinct families of classes often introduced by parametric polymorphism. Wildcards are implemented as part of the upcoming addition of generics to the Java™ programming language, and will thus be deployed world-wide as part of the reference implementation of the Java compiler javac available from Sun Microsystems, Inc. By providing a richer type system, wildcards allow for an improved type inference scheme for polymorphic method calls. Moreover, by means of a novel notion of wildcard capture, polymorphic methods can be used to give symbolic names to unspecified types, in a manner similar to the \"open\" construct known from existential types. Wildcards show up in numerous places in the Java Platform APIs of the upcoming release, and some of the examples in this paper are taken from these APIs.",
"title": ""
},
{
"docid": "1912f9ad509e446d3e34e3c6dccd4c78",
"text": "Lumbar disc herniation is a common male disease. In the past, More academic attention was directed to its relationship with lumbago and leg pain than to its association with andrological diseases. Studies show that central lumber intervertebral disc herniation may cause cauda equina injury and result in premature ejaculation, erectile dysfunction, chronic pelvic pain syndrome, priapism, and emission. This article presents an overview on the correlation between central lumbar intervertebral disc herniation and andrological diseases, focusing on the aspects of etiology, pathology, and clinical progress, hoping to invite more attention from andrological and osteological clinicians.",
"title": ""
},
{
"docid": "55b88b38dbde4d57fddb18d487099fc6",
"text": "The evaluation of algorithms and techniques to implement intrusion detection systems heavily rely on the existence of well designed datasets. In the last years, a lot of efforts have been done toward building these datasets. Yet, there is still room to improve. In this paper, a comprehensive review of existing datasets is first done, making emphasis on their main shortcomings. Then, we present a new dataset that is built with real traffic and up-to-date attacks. The main advantage of this dataset over previous ones is its usefulness for evaluating IDSs that consider long-term evolution and traffic periodicity. Models that consider differences in daytime/nighttime or weekdays/weekends can also be trained and evaluated with it. We discuss all the requirements for a modern IDS evaluation dataset and analyze how the one presented here meets the different needs. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f82a57baca9a0381c9b2af0368a5531e",
"text": "We tested the hypothesis derived from eye blink literature that when liars experience cognitive demand, their lies would be associated with a decrease in eye blinks, directly followed by an increase in eye blinks when the demand has ceased after the lie is told. A total of 13 liars and 13 truth tellers lied or told the truth in a target period; liars and truth tellers both told the truth in two baseline periods. Their eye blinks during the target and baseline periods and directly after the target period (target offset period) were recorded. The predicted pattern (compared to the baseline periods, a decrease in eye blinks during the target period and an increase in eye blinks during the target offset period) was found in liars and was strikingly different from the pattern obtained in truth tellers. They showed an increase in eye blinks during the target period compared to the baseline periods, whereas their pattern of eye blinks in the target offset period did not differ from baseline periods. The implications for lie detection are discussed.",
"title": ""
},
{
"docid": "e4a74019c34413f8ace000512ab26da0",
"text": "Scaling the transaction throughput of decentralized blockchain ledgers such as Bitcoin and Ethereum has been an ongoing challenge. Two-party duplex payment channels have been designed and used as building blocks to construct linked payment networks, which allow atomic and trust-free payments between parties without exhausting the resources of the blockchain.\n Once a payment channel, however, is depleted (e.g., because transactions were mostly unidirectional) the channel would need to be closed and re-funded to allow for new transactions. Users are envisioned to entertain multiple payment channels with different entities, and as such, instead of refunding a channel (which incurs costly on-chain transactions), a user should be able to leverage his existing channels to rebalance a poorly funded channel.\n To the best of our knowledge, we present the first solution that allows an arbitrary set of users in a payment channel network to securely rebalance their channels, according to the preferences of the channel owners. Except in the case of disputes (similar to conventional payment channels), our solution does not require on-chain transactions and therefore increases the scalability of existing blockchains. In our security analysis, we show that an honest participant cannot lose any of its funds while rebalancing. We finally provide a proof of concept implementation and evaluation for the Ethereum network.",
"title": ""
},
{
"docid": "fc3283b1d81de45772ec730c1f5185f1",
"text": "In this paper, three different techniques which can be used for control of three phase PWM Rectifier are discussed. Those three control techniques are Direct Power Control, Indirect Power Control or Voltage Oriented Control and Hysteresis Control. The main aim of this paper is to compare and establish the merits and demerits of each technique in various aspects mainly regarding switching frequency hence switching loss, computation and transient state behavior. Each control method is studied in detail and simulated using Matlab/Simulink in order to make the comparison.",
"title": ""
},
{
"docid": "ee045772d55000b6f2d3f7469a4161b1",
"text": "Although prior research has addressed the influence of corporate social responsibility (CSR) on perceived customer responses, it is not clear whether CSR affects market value of the firm. This study develops and tests a conceptual framework, which predicts that (1) customer satisfaction partially mediates the relationship between CSR and firm market value (i.e., Tobin’s q and stock return), (2) corporate abilities (innovativeness capability and product quality) moderate the financial returns to CSR, and (3) these moderated relationships are mediated by customer satisfaction. Based on a large-scale secondary dataset, the results show support for this framework. Interestingly, it is found that in firms with low innovativeness capability, CSR actually reduces customer satisfaction levels and, through the lowered satisfaction, harms market value. The uncovered mediated and asymmetrically moderated results offer important implications for marketing theory and practice. In today’s competitive market environment, corporate social responsibility (CSR) represents a high-profile notion that has strategic importance to many companies. As many as 90% of the Fortune 500 companies now have explicit CSR initiatives (Kotler and Lee 2004; Lichtenstein et al. 2004). According to a recent special report by BusinessWeek (2005a, p.72), large companies disclosed substantial investments in CSR initiatives (i.e., Target’s donation of $107.8 million in CSR represents 3.6% of its pretax profits, with GM $51.2 million at 2.7%, General Mills $60.3 million at 3.2%, Merck $921million at 11.3%, HCA $926 million at 43.3%). By dedicating everincreasing amounts to cash donations, in-kind contributions, cause marketing, and employee volunteerism programs, companies are acting on the premise that CSR is not merely the “right thing to do,” but also “the smart thing to do” (Smith 2003). Importantly, along with increasing media coverage of CSR issues, companies themselves are also taking direct and visible steps to communicate their CSR initiatives to various stakeholders including consumers. A decade ago, Drumwright (1996) observed that advertising with a social dimension was on the rise. The trend seems to continue. Many companies, including the likes of Target and Walmart, have funded large national ad campaigns promoting their good works. The October 2005 issue of In Style magazine alone carried more than 25 “cause” ads. Indeed, consumers seem to be taking notice: whereas in 1993 only 26% of individuals surveyed by Cone Communications could name a company as a strong corporate citizen, by 2004, the percentage surged to as high as 80% (BusinessWeek 2005a). Motivated, in part, by this mounting importance of CSR in practice, several marketing studies have found that social responsibility programs have a significant influence on a number of customer-related outcomes (Bhattacharya and Sen 2004). More specifically, based on lab experiments, CSR is reported to directly or indirectly impact consumer product responses",
"title": ""
},
{
"docid": "f9c938a98621f901c404d69a402647c7",
"text": "The growing popularity of virtual machines is pushing the demand for high performance communication between them. Past solutions have seen the use of hardware assistance, in the form of \"PCI passthrough\" (dedicating parts of physical NICs to each virtual machine) and even bouncing traffic through physical switches to handle data forwarding and replication.\n In this paper we show that, with a proper design, very high speed communication between virtual machines can be achieved completely in software. Our architecture, called VALE, implements a Virtual Local Ethernet that can be used by virtual machines, such as QEMU, KVM and others, as well as by regular processes. VALE achieves a throughput of over 17 million packets per second (Mpps) between host processes, and over 2 Mpps between QEMU instances, without any hardware assistance.\n VALE is available for both FreeBSD and Linux hosts, and is implemented as a kernel module that extends our recently proposed netmap framework, and uses similar techniques to achieve high packet rates.",
"title": ""
},
{
"docid": "16d2e0605d45c69302c71b8434b7a23a",
"text": "Emotions play an important role in human cognition, perception, decision making, and interaction. This paper presents a six-layer biologically inspired feedforward neural network to discriminate human emotions from EEG. The neural network comprises a shift register memory after spectral filtering for the input layer, and the estimation of coherence between each pair of input signals for the hidden layer. EEG data are collected from 57 healthy participants from eight locations while subjected to audio-visual stimuli. Discrimination of emotions from EEG is investigated based on valence and arousal levels. The accuracy of the proposed neural network is compared with various feature extraction methods and feedforward learning algorithms. The results showed that the highest accuracy is achieved when using the proposed neural network with a type of radial basis function.",
"title": ""
},
{
"docid": "a18da0c7d655fee44eebdf61c7371022",
"text": "This paper describes and compares a set of no-reference quality assessment algorithms for H.264/AVC encoded video sequences. These algorithms have in common a module that estimates the error due to lossy encoding of the video signals, using only information available on the compressed bitstream. In order to obtain perceived quality scores from the estimated error, three methods are presented: i) to weight the error estimates according to a perceptual model; ii) to linearly combine the mean squared error (MSE) estimates with additional video features; iii) to use MSE estimates as the input of a logistic function. The performances of the algorithms are evaluated using cross-validation procedures and the one showing the best performance is also in a preliminary study of quality assessment in the presence of transmission losses.",
"title": ""
},
{
"docid": "550e19033cb00938aed89eb3cce50a76",
"text": "This paper presents a high gain wide band 2×2 microstrip array antenna. The microstrip array antenna (MSA) is fabricated on inexpensive FR4 substrate and placed 1mm above ground plane to improve the bandwidth and efficiency of the antenna. A reactive impedance surface (RIS) consisting of 13×13 array of 4 mm square patches with inter-element spacing of 1 mm is fabricated on the bottom side of FR4 substrate. RIS reduces the coupling between the ground plane and MSA array and therefore increases the efficiency of antenna. It enhances the bandwidth and gain of the antenna. RIS also helps in reduction of SLL and cross polarization. This MSA array with RIS is place in a Fabry Perot cavity (FPC) resonator to enhance the gain of the antenna. 2×2 and 4×4 array of square parasitic patches are fed by MSA array fabricated on a FR4 superstrate which forms the partially reflecting surface of FPC. The FR4 superstrate layer is supported with help of dielectric rods at the edges with air at about λ0/2 from ground plane. A microstrip feed line network is designed and the printed MSA array is fed by a 50 Ω coaxial probe. The VSWR is <; 2 is obtained over 5.725-6.4 GHz, which covers 5.725-5.875 GHz ISM WLAN frequency band and 5.9-6.4 GHz satellite uplink C band. The antenna gain increases from 12 dB to 15.8 dB as 4×4 square parasitic patches are fabricated on superstrate layer. The gain variation is less than 2 dB over the entire band. The antenna structure provides SLL and cross polarization less than -2ο dB, front to back lobe ratio higher than 20 dB and more than 70 % antenna efficiency. A prototype structure is realized and tested. The measured results satisfy with the simulation results. The antenna can be a suitable candidate for access point, satellite communication, mobile base station antenna and terrestrial communication system.",
"title": ""
},
{
"docid": "1615e93f027c6f6f400ce1cc7a1bb8aa",
"text": "In the recent years, we have witnessed the rapid adoption of social media platforms, such as Twitter, Facebook and YouTube, and their use as part of the everyday life of billions of people worldwide. Given the habit of people to use these platforms to share thoughts, daily activities and experiences it is not surprising that the amount of user generated content has reached unprecedented levels, with a substantial part of that content being related to real-world events, i.e. actions or occurrences taking place at a certain time and location. Figure 1 illustrates three main categories of events along with characteristic photos from Flickr for each of them: a) news-related events, e.g. demonstrations, riots, public speeches, natural disasters, terrorist attacks, b) entertainment events, e.g. sports, music, live shows, exhibitions, festivals, and c) personal events, e.g. wedding, birthday, graduation ceremonies, vacations, and going out. Depending on the event, different types of multimedia and social media platform are more popular. For instance, news-related events are extensively published in the form of text updates, images and videos on Twitter and YouTube, entertainment and social events are often captured in the form of images and videos and shared on Flickr and YouTube, while personal events are mostly represented by images that are shared on Facebook and Instagram. Given the key role of events in our life, the task of annotating and organizing social media content around them is of crucial importance for ensuring real-time and future access to multimedia content about an event of interest. However, the vast amount of noisy and non-informative social media posts, in conjunction with their large scale, makes that task very challenging. For instance, in the case of popular events that are covered live on Twitter, there are often millions of posts referring to a single event, as in the case of the World Cup Final 2014 between Brazil and Germany, which produced approximately 32.1 million tweets with a rate of 618,725 tweets per minute. Processing, aggregating and selecting the most informative, entertaining and representative tweets among such a large dataset is a very challenging multimedia retrieval problem. In other",
"title": ""
},
{
"docid": "82fdd14f7766e8afe9b11a255073b3ce",
"text": "We develop a stochastic model of a simple protocol for the self-configuration of IP network interfaces. We describe the mean cost that incurs during a selfconfiguration phase and describe a trade-off between reliability and speed. We derive a cost function which we use to derive optimal parameters. We show that optimal cost and optimal reliability are qualities that cannot be achieved at the same time. Keywords—Embedded control software; IP; zeroconf protocol; cost optimisation",
"title": ""
},
{
"docid": "7a62e5a78eabbcbc567d5538a2f35434",
"text": "This paper presents a system for a design and implementation of Optical Arabic Braille Recognition(OBR) with voice and text conversion. The implemented algorithm based on a comparison of Braille dot position extraction in each cell with the database generated for each Braille cell. Many digital image processing have been performed on the Braille scanned document like binary conversion, edge detection, holes filling and finally image filtering before dot extraction. The work in this paper also involved a unique decimal code generation for each Braille cell used as a base for word reconstruction with the corresponding voice and text conversion database. The implemented algorithm achieve expected result through letter and words recognition and transcription accuracy over 99% and average processing time around 32.6 sec per page. using matlab environmemt",
"title": ""
}
] | scidocsrr |
9a1282ed6142beb775735c0ab8d54c2b | Anomalies in Intertemporal Choice : Evidence and an Interpretation | [
{
"docid": "e50d156bde3479c27119231073705f70",
"text": "The economic theory of the consumer is a combination of positive and normative theories. Since it is based on a rational maximizing model it describes how consumers should choose, but it is alleged to also describe how they do choose. This paper argues that in certain well-defined situations many consumers act in a manner that is inconsistent with economic theory. In these situations economic theory will make systematic errors in predicting behavior. Kahneman and Tversky's prospect theory is proposed as the basis for an alternative descriptive theory. Topics discussed are: underweighting of opportunity costs, failure to ignore sunk costs, search behavior, choosing not to choose and regret, and precommitment and self-control.",
"title": ""
}
] | [
{
"docid": "d0a2c8cf31e1d361a7c2b306dffddc25",
"text": "During the first years of the so called fourth industrial revolution, main attempts that tried to define the main ideas and tools behind this new era of manufacturing, always end up referring to the concept of smart machines that would be able to communicate with each and with the environment. In fact, the defined cyber physical systems, connected by the internet of things, take all the attention when referring to the new industry 4.0. But, nevertheless, the new industrial environment will benefit from several tools and applications that complement the real formation of a smart, embedded system that is able to perform autonomous tasks. And most of these revolutionary concepts rest in the same background theory as artificial intelligence does, where the analysis and filtration of huge amounts of incoming information from different types of sensors, assist to the interpretation and suggestion of the most recommended course of action. For that reason, artificial intelligence science suit perfectly with the challenges that arise in the consolidation of the fourth industrial revolution.",
"title": ""
},
{
"docid": "fac86557cbb42457ccec058699f47ff8",
"text": "As mobile apps become more closely integrated into our everyday lives, mobile app interactions ought to be rapid and responsive. Unfortunately, even the basic primitive of launching a mobile app is sorrowfully sluggish: 20 seconds of delay is not uncommon even for very popular apps.\n We have designed and built FALCON to remedy slow app launch. FALCON uses contexts such as user location and temporal access patterns to predict app launches before they occur. FALCON then provides systems support for effective app-specific prelaunching, which can dramatically reduce perceived delay.\n FALCON uses novel features derived through extensive data analysis, and a novel cost-benefit learning algorithm that has strong predictive performance and low runtime overhead. Trace-based analysis shows that an average user saves around 6 seconds per app startup time with daily energy cost of no more than 2% battery life, and on average gets content that is only 3 minutes old at launch without needing to wait for content to update. FALCON is implemented as an OS modification to the Windows Phone OS.",
"title": ""
},
{
"docid": "f74dd570fd04512dc82aac9d62930992",
"text": "A compact microstrip-line ultra-wideband (UWB) bandpass filter (BPF) using the proposed stub-loaded multiple-mode resonator (MMR) is presented. This MMR is formed by loading three open-ended stubs in shunt to a simple stepped-impedance resonator in center and two symmetrical locations, respectively. By properly adjusting the lengths of these stubs, the first four resonant modes of this MMR can be evenly allocated within the 3.1-to-10.6 GHz UWB band while the fifth resonant frequency is raised above 15.0GHz. It results in the formulation of a novel UWB BPF with compact-size and widened upper-stopband by incorporating this MMR with two interdigital parallel-coupled feed lines. Simulated and measured results are found in good agreement with each other, showing improved UWB bandpass behaviors with the insertion loss lower than 0.8dB, return loss higher than 14.3dB, and maximum group delay variation less than 0.64ns in the realized UWB passband",
"title": ""
},
{
"docid": "2d9921e49e58725c9c85da02249c8d27",
"text": "Recently, the performance of Si power devices gradually approaches the physical limit, and the latest SiC device seemingly has the ability to substitute the Si insulated gate bipolar transistor (IGBT) in 1200 V class. In this paper, we demonstrate the feasibility of further improving the Si IGBT based on the new concept of CSTBTtrade. In point of view of low turn-off loss and high uniformity in device characteristics, we employ the techniques of fine-pattern and retro grade doping in the design of new device structures, resulting in significant reduction on the turn-off loss and the VGE(th) distribution, respectively.",
"title": ""
},
{
"docid": "dcee2282ea923cc0e32ae3ddd602964d",
"text": "We describe an architecture that provides a programmable display layer in order to allow the execution of custom programs on consecutive display frames. This replaces the default display behavior of repeating application frames until an update is available. The architecture is implemented using a multi-GPU system. We will show three applications of the architecture typical to VR. First, smooth motion is provided by generating intermediate display frames by per-pixel depth-image warping using 3D motion fields. Smooth motion can be beneficial for walk-throughs of large scenes. Second, we implement fine-grained latency reduction at the display frame level using a synchronized prediction of simulation objects and the viewpoint. This improves the average quality and consistency of latency reduction. Third, a crosstalk reduction algorithm for consecutive display frames is implemented, which improves the quality of stereoscopic images.",
"title": ""
},
{
"docid": "6a0ac77c7471484e3829b7a561c78723",
"text": "While the growth of business-to-consumer electronic commerce seems phenomenal in recent years, several studies suggest that a large number of individuals using the Internet have serious privacy concerns, and that winning public trust is the primary hurdle to continued growth in e-commerce. This research investigated the relative importance, when purchasing goods and services over the Web, of four common trust indices (i.e. (1) third party privacy seals, (2) privacy statements, (3) third party security seals, and (4) security features). The results indicate consumers valued security features significantly more than the three other trust indices. We also investigated the relationship between these trust indices and the consumer’s perceptions of a marketer’s trustworthiness. The findings indicate that consumers’ ratings of trustworthiness of Web merchants did not parallel experts’ evaluation of sites’ use of the trust indices. This study also examined the extent to which consumers are willing to provide private information to electronic and land merchants. The results revealed that when making the decision to provide private information, consumers rely on their perceptions of trustworthiness irrespective of whether the merchant is electronic only or land and electronic. Finally, we investigated the relative importance of three types of Web attributes: security, privacy and pleasure features (convenience, ease of use, cosmetics). Privacy and security features were of lesser importance than pleasure features when considering consumers’ intention to purchase. A discussion of the implications of these results and an agenda for future research are provided. q 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "1de568efbb57cc4e5d5ffbbfaf8d39ae",
"text": "The Insider Threat Study, conducted by the U.S. Secret Service and Carnegie Mellon University’s Software Engineering Institute CERT Program, analyzed insider cyber crimes across U.S. critical infrastructure sectors. The study indicates that management decisions related to organizational and employee performance sometimes yield unintended consequences magnifying risk of insider attack. Lack of tools for understanding insider threat, analyzing risk mitigation alternatives, and communicating results exacerbates the problem. The goal of Carnegie Mellon University’s MERIT (Management and Education of the Risk of Insider Threat) project is to develop such tools. MERIT uses system dynamics to model and analyze insider threats and produce interactive learning environments. These tools can be used by policy makers, security officers, information technology, human resources, and management to understand the problem and assess risk from insiders based on simulations of policies, cultural, technical, and procedural factors. This paper describes the MERIT insider threat model and simulation results.",
"title": ""
},
{
"docid": "f042dd6b78c65541e657c48452a1e0e4",
"text": "We present a general framework for semantic role labeling. The framework combines a machine-learning technique with an integer linear programming-based inference procedure, which incorporates linguistic and structural constraints into a global decision process. Within this framework, we study the role of syntactic parsing information in semantic role labeling. We show that full syntactic parsing information is, by far, most relevant in identifying the argument, especially, in the very first stagethe pruning stage. Surprisingly, the quality of the pruning stage cannot be solely determined based on its recall and precision. Instead, it depends on the characteristics of the output candidates that determine the difficulty of the downstream problems. Motivated by this observation, we propose an effective and simple approach of combining different semantic role labeling systems through joint inference, which significantly improves its performance. Our system has been evaluated in the CoNLL-2005 shared task on semantic role labeling, and achieves the highest F1 score among 19 participants.",
"title": ""
},
{
"docid": "bb72e4d6f967fb88473756cdcbb04252",
"text": "GF (Grammatical Framework) is a grammar formalism based on the distinction between abstract and concrete syntax. An abstract syntax is a free algebra of trees, and a concrete syntax is a mapping from trees to nested records of strings and features. These mappings are naturally defined as functions in a functional programming language; the GF language provides the customary functional programming constructs such as algebraic data types, pattern matching, and higher-order functions, which enable productive grammar writing and linguistic generalizations. Given the seemingly transformational power of the GF language, its computational properties are not obvious. However, all grammars written in GF can be compiled into a simple and austere core language, Canonical GF (CGF). CGF is well suited for implementing parsing and generation with grammars, as well as for proving properties of GF. This paper gives a concise description of both the core and the source language, the algorithm used in compiling GF to CGF, and some back-end optimizations on CGF.",
"title": ""
},
{
"docid": "e27b61e4683f2474839e75fe1caf7b49",
"text": "A novel multi-purpose integrated planar six-port front-end circuit combining both substrate integrated waveguide (SIW) technology and integrated loads is presented and demonstrated. The use of SIW technology allows a very compact circuit and very low radiation loss at millimeter frequencies. An integrated load is used to simplify the fabrication process and also reduce dimensions and cost. To validate the proposed concept, an integrated broadband six-port front-end circuit prototype was fabricated and measured. Simulation and measurement results show that the proposed six-port circuit can easily operate at 24 GHz for radar systems and also over 23–29 GHz for broadband millimetre-wave radio services.",
"title": ""
},
{
"docid": "b74ee9d63787d93411a4b37e4ed6882d",
"text": "We introduce Visual Sedimentation, a novel design metaphor for visualizing data streams directly inspired by the physical process of sedimentation. Visualizing data streams (e. g., Tweets, RSS, Emails) is challenging as incoming data arrive at unpredictable rates and have to remain readable. For data streams, clearly expressing chronological order while avoiding clutter, and keeping aging data visible, are important. The metaphor is drawn from the real-world sedimentation processes: objects fall due to gravity, and aggregate into strata over time. Inspired by this metaphor, data is visually depicted as falling objects using a force model to land on a surface, aggregating into strata over time. In this paper, we discuss how this metaphor addresses the specific challenge of smoothing the transition between incoming and aging data. We describe the metaphor's design space, a toolkit developed to facilitate its implementation, and example applications to a range of case studies. We then explore the generative capabilities of the design space through our toolkit. We finally illustrate creative extensions of the metaphor when applied to real streams of data.",
"title": ""
},
{
"docid": "68b8dd0fd648b9ad862554795935de45",
"text": "Feedforward neural networks (FFNN) have been utilised for various research in machine learning and they have gained a significantly wide acceptance. However, it was recently noted that the feedforward neural network has been functioning slower than needed. As a result, it has created critical bottlenecks among its applications. Extreme Learning Machines (ELM) were suggested as alternative learning algorithms instead of FFNN. The former is characterised by single-hidden layer feedforward neural networks (SLFN). It selects hidden nodes randomly and analytically determines their output weight. This review aims to, first, present a short mathematical explanation to explain the basic ELM. Second, because of its notable simplicity, efficiency, and remarkable generalisation performance, ELM has had wide uses in various domains, such as computer vision, biomedical engineering, control and robotics, system identification, etc. Thus, in this review, we will aim to present a complete view of these ELM advances for different applications. Finally, ELM’s strengths and weakness will be presented, along with its future perspectives.",
"title": ""
},
{
"docid": "80336a3bba9c0d7fd692b1321c0739f6",
"text": "Fine-grained image classification is to recognize hundreds of subcategories in each basic-level category. Existing methods employ discriminative localization to find the key distinctions among similar subcategories. However, existing methods generally have two limitations: (1) Discriminative localization relies on region proposal methods to hypothesize the locations of discriminative regions, which are time-consuming and the bottleneck of classification speed. (2) The training of discriminative localization depends on object or part annotations, which are heavily labor-consuming and the obstacle of marching towards practical application. It is highly challenging to address the two key limitations simultaneously, and existing methods only focus on one of them. Therefore, we propose a weakly supervised discriminative localization approach (WSDL) for fast fine-grained image classification to address the two limitations at the same time, and its main advantages are: (1) n-pathway end-to-end discriminative localization network is designed to improve classification speed, which simultaneously localizes multiple different discriminative regions for one image to boost classification accuracy, and shares full-image convolutional features generated by region proposal network to accelerate the process of generating region proposals as well as reduce the computation of convolutional operation. (2) Multi-level attention guided localization learning is proposed to localize discriminative regions with different focuses automatically, without using object and part annotations, avoiding the labor consumption. Different level attentions focus on different characteristics of the image, which are complementary and boost the classification accuracy. Both are jointly employed to simultaneously improve classification speed and eliminate dependence on object and part annotations. Compared with state-of-theart methods on 2 widely-used fine-grained image classification datasets, our WSDL approach achieves both the best accuracy and efficiency of classification.",
"title": ""
},
{
"docid": "3c014205609a8bbc2f5e216d7af30b32",
"text": "This paper proposes a novel design for variable-flux machines with Alnico magnets. The proposed design uses tangentially magnetized magnets to achieve high air-gap flux density and to avoid demagnetization by the armature field. Barriers are also inserted in the rotor to limit the armature flux and to allow the machine to utilize both reluctance and magnet torque components. An analytical procedure is first applied to obtain the initial machine design parameters. Then, several modifications are applied to the stator and rotor designs through finite-element analysis (FEA) simulations to improve machine efficiency and torque density. A prototype of the proposed design is built, and the experimental results are in good correlation with the FEA simulations, confirming the validity of the proposed machine design concept.",
"title": ""
},
{
"docid": "8d49e37ab80dae285dbf694ba1849f68",
"text": "In this paper we present a reference architecture for ETL stages of EDM and LA that works with different data formats and different extraction sites, ensuring privacy and making easier for new participants to enter into the process without demanding more computing power. Considering scenarios with a multitude of virtual environments hosting educational activities, accessible through a common infrastructure, we devised a reference model where data generated from interaction between users and among users and the environment itself, are selected, organized and stored in local “baskets”. Local baskets are then collected and grouped in a global basket. Organization resources like item modeling are used in both levels of basket construction. Using this reference upon a client-server architectural style, a reference architecture was developed and has been used to carry out a project for an official foundation linked to Brazilian Ministry of Education, involving educational data mining and sharing of 100+ higher education institutions and their respective virtual environments. In this architecture, a client-collector inside each virtual environment collects information from database and event logs. This information along with definitions obtained from item models are used to build local baskets. A synchronization protocol keeps all item models synced with client-collectors and server-collectors generating global baskets. This approach has shown improvements on ETL like: parallel processing of items, economy on storage space and bandwidth, privacy assurance, better tenacity, and good scalability.",
"title": ""
},
{
"docid": "f4e98796feefcceb86a94f978a21e5ab",
"text": "This tutorial provides a brief overview of space-time adaptive processing (STAP) for radar applications. We discuss space-time signal diversity and various forms of the adaptive processor, including reduced-dimension and reduced-rank STAP approaches. Additionally, we describe the space-time properties of ground clutter and noise-jamming, as well as essential STAP performance metrics. We conclude this tutorial with an overview of some current STAP topics: space-based radar, bistatic STAP, knowledge-aided STAP, multi-channel synthetic aperture radar and non-sidelooking array configurations.",
"title": ""
},
{
"docid": "cb4cc56b013ca35250c4d966da843d58",
"text": "Cyber-Physical System (CPS) is a system of system which integrates physical system with cyber capability in order to improve the physical performance. It is being widely used in areas closely related to national economy and people's livelihood, therefore CPS security problems have drawn a global attention and an appropriate risk assessment for CPS is in urgent need. Existing risk assessment for CPS always focuses on the reliability assessment, using Probability Risk Assessment (PRA). In this way, the assessment of physical part and cyber part is isolated as PRA is difficult to quantify the risks from the cyber world. Methodologies should be developed to assess the both parts as a whole system, considering this integrated system has a high coupling between the physical layer and cyber layer. In this paper, a risk assessment idea for CPS with the use of attack tree is proposed. Firstly, it presents a detailed description about the threat and vulnerability attributes of each leaf in an attack tree and tells how to assign value to its threat and vulnerability vector. Then this paper focuses on calculating the threat and vulnerability vector of an attack path with the use of the leaf vector values. Finally, damage is taken into account and an idea to calculate the risk value of the whole attack path is given.",
"title": ""
},
{
"docid": "3b1a7539000a8ddabdaa4888b8bb1adc",
"text": "This paper presents evaluations among the most usual maximum power point tracking (MPPT) techniques, doing meaningful comparisons with respect to the amount of energy extracted from the photovoltaic (PV) panel [tracking factor (TF)] in relation to the available power, PV voltage ripple, dynamic response, and use of sensors. Using MatLab/Simulink and dSPACE platforms, a digitally controlled boost dc-dc converter was implemented and connected to an Agilent Solar Array E4350B simulator in order to verify the analytical procedures. The main experimental results are presented for conventional MPPT algorithms and improved MPPT algorithms named IC based on proportional-integral (PI) and perturb and observe based on PI. Moreover, the dynamic response and the TF are also evaluated using a user-friendly interface, which is capable of online program power profiles and computes the TF. Finally, a typical daily insulation is used in order to verify the experimental results for the main PV MPPT methods.",
"title": ""
},
{
"docid": "d449a4d183c2a3e1905935f624d684d3",
"text": "This paper introduces the approach CBRDIA (Case-based Reasoning for Document Invoice Analysis) which uses the principles of case-based reasoning to analyze, recognize and interpret invoices. Two CBR cycles are performed sequentially in CBRDIA. The first one consists in checking whether a similar document has already been processed, which makes the interpretation of the current one easy. The second cycle works if the first one fails. It processes the document by analyzing and interpreting its structuring elements (adresses, amounts, tables, etc) one by one. The CBR cycles allow processing documents from both knonwn or unknown classes. Applied on 923 invoices, CBRDIA reaches a recognition rate of 85,22% for documents of known classes and 74,90% for documents of unknown classes.",
"title": ""
},
{
"docid": "dd2819d0413a1d41c602aef4830888a4",
"text": "Presented here is a fast method that combines curve matching techniques with a surface matching algorithm to estimate the positioning and respective matching error for the joining of three-dimensional fragmented objects. Furthermore, this paper describes how multiple joints are evaluated and how the broken artefacts are clustered and transformed to form potential solutions of the assemblage problem. q 2003 Elsevier Science B.V. All rights reserved.",
"title": ""
}
] | scidocsrr |
ebba225894ba7ed1352745abc47dd099 | A SLIM WIDEBAND AND CONFORMAL UHF RFID TAG ANTENNA BASED ON U-SHAPED SLOTS FOR METALLIC OBJECTS | [
{
"docid": "48ea1d793f0ae2b79f406c87fe5980b5",
"text": "In this paper, we describe a UHF radio-frequency-identification tag test and measurement system based on National Instruments LabVIEW-controlled PXI RF hardware. The system operates in 800-1000-MHz frequency band with a variable output power up to 30 dBm and is capable of testing tags using Gen2 and other protocols. We explain testing methods and metrics, describe in detail the construction of our system, show its operation with real tag measurement examples, and draw general conclusions.",
"title": ""
}
] | [
{
"docid": "44bb8c5202edadc2f14fa27c0fbb9705",
"text": "In this paper, a new Near Field Communication (NFC) antenna solution that can be used for portable devices with metal back cover is proposed. In particular, there are two holes on metal back cover, a slit between the two holes, and antenna coil located behind the metal cover. With such an arrangement, the shielding effect of the metal cover can be totally eliminated. Simulated and measured results of the proposed antenna are presented.",
"title": ""
},
{
"docid": "abc1be23f803390c2aadd58059eb177e",
"text": "In the atomic force microscope (AFM) scanning system, the piezoscanner is significant in realizing high-performance tasks. To cater to this demand, a novel compliant two-degrees-of-freedom (2-DOF) micro-/nanopositioning stage with modified lever displacement amplifiers is proposed in this paper, which can be selected to work in dual modes. Moreover, the modified double four-bar P (P denotes prismatic) joints are adopted in designing the flexible limbs. The established models for the mechanical performance evaluation in terms of kinetostatics, dynamics, and workspace are validated by finite-element analysis. After a series of dimension optimizations carried out via particle swarm optimization algorithm, a novel active disturbance rejection controller, including the components of nonlinearity tracking differentiator, extended state observer, and nonlinear state error feedback, is designed for automatically estimating and suppressing the plant uncertainties arising from the hysteresis nonlinearity, creep effect, sensor noises, and other unknown disturbances. The closed-loop control results based on simulation and prototype indicate that the two working natural frequencies of the proposed stage are approximated to be 805.19 and 811.31 Hz, the amplification ratio in two axes is about 4.2, and the workspace is around 120 ×120 μm2, while the cross-coupling between the two axes is kept within 2%. All of the results indicate that the developed micro-/nanopositioning system has a good property for high-performance AFM scanning.",
"title": ""
},
{
"docid": "d1041afcb50a490034740add2cce3f0d",
"text": "Inverse synthetic aperture radar imaging of moving targets with a stepped frequency waveform presents unique challenges. Intra-step target motion introduces phase discontinuities between frequency bands, which in turn produce degraded range side lobes. Frequency stitching of the stepped-frequency waveform to emulate a contiguous bandwidth can dramatically reduce the effective pulse repetition frequency, which then may impact the maximize target size that can be unambiguously measured and imaged via ISAR. This paper analyzes these effects and validates results via simulated data.",
"title": ""
},
{
"docid": "be7d32aeffecc53c5d844a8f90cd5ce0",
"text": "Wordnets play a central role in many natural language processing tasks. This paper introduces a multilingual editing system for the Open Multilingual Wordnet (OMW: Bond and Foster, 2013). Wordnet development, like most lexicographic tasks, is slow and expensive. Moving away from the original Princeton Wordnet (Fellbaum, 1998) development workflow, wordnet creation and expansion has increasingly been shifting towards an automated and/or interactive system facilitated task. In the particular case of human edition/expansion of wordnets, a few systems have been developed to aid the lexicographers’ work. Unfortunately, most of these tools have either restricted licenses, or have been designed with a particular language in mind. We present a webbased system that is capable of multilingual browsing and editing for any of the hundreds of languages made available by the OMW. All tools and guidelines are freely available under an open license.",
"title": ""
},
{
"docid": "0e002aae88332f8143e6f3a19c4c578b",
"text": "While attachment research has demonstrated that parents' internal working models of attachment relationships tend to be transmitted to their children, affecting children's developmental trajectories, this study specifically examines associations between adult attachment status and observable parent, child, and dyadic behaviors among children with autism and associated neurodevelopmental disorders of relating and communicating. The Adult Attachment Interview (AAI) was employed to derive parental working models of attachment relationships. The Functional Emotional Assessment Scale (FEAS) was used to determine the quality of relational and functional behaviors in parents and their children. The sample included parents and their 4- to 16-year-old children with autism and associated neurodevelopmental disorders. Hypothesized relationships between AAI classifications and FEAS scores were supported. Significant correlations were found between AAI classification and FEAS scores, indicating that children with autism spectrum disorders whose parents demonstrated secure attachment representations were better able to initiate and respond in two-way pre-symbolic gestural communication; organize two-way social problem-solving communication; and engage in imaginative thinking, symbolic play, and verbal communication. These findings lend support to the relevance of the parent's state of mind pertaining to attachment status to child and parent relational behavior in cases wherein the child has been diagnosed with autism or an associated neurodevelopmental disorder of relating and communicating. A model emerges from these findings of conceptualizing relationships between parental internal models of attachment relationships and parent-child relational and functional levels that may aid in differentiating interventions.",
"title": ""
},
{
"docid": "37adbe33e4d83794fa85e7155a3e51d4",
"text": "Information technology matters to business success because it directly affects the mechanisms through which they create and capture value to earn a profit: IT is thus integral to a firm’s business-level strategy. Much of the extant research on the IT/strategy relationship, however, inaccurately frames IT as only a functionallevel strategy. This widespread under-appreciation of the business-level role of IT indicates a need for substantial retheorizing of its role in strategy and its complex and interdependent relationship with the mechanisms through which firms generate profit. Using a comprehensive framework of potential profit mechanisms, we argue that while IT activities remain integral to the functional-level strategies of the firm, they also play several significant roles in business strategy, with substantial performance implications. IT affects industry structure and the set of business-level strategic alternatives and value-creation opportunities that a firm may pursue. Along with complementary organizational changes, IT both enhances the firm’s current (ordinary) capabilities and enables new (dynamic) capabilities, including the flexibility to focus on rapidly changing opportunities or to abandon losing initiatives while salvaging substantial asset value. Such digitally attributable capabilities also determine how much of this value, once created, can be captured by the firm—and how much will be dissipated through competition or through the power of value chain partners, the governance of which itself depends on IT. We explore these business-level strategic roles of IT and discuss several provocative implications and future research directions in the converging information systems and strategy domains.",
"title": ""
},
{
"docid": "14fac04f802367a56a03fcdce88044f8",
"text": "Humidity measurement is one of the most significant issues in various areas of applications such as instrumentation, automated systems, agriculture, climatology and GIS. Numerous sorts of humidity sensors fabricated and developed for industrial and laboratory applications are reviewed and presented in this article. The survey frequently concentrates on the RH sensors based upon their organic and inorganic functional materials, e.g., porous ceramics (semiconductors), polymers, ceramic/polymer and electrolytes, as well as conduction mechanism and fabrication technologies. A significant aim of this review is to provide a distinct categorization pursuant to state of the art humidity sensor types, principles of work, sensing substances, transduction mechanisms, and production technologies. Furthermore, performance characteristics of the different humidity sensors such as electrical and statistical data will be detailed and gives an added value to the report. By comparison of overall prospects of the sensors it was revealed that there are still drawbacks as to efficiency of sensing elements and conduction values. The flexibility offered by thick film and thin film processes either in the preparation of materials or in the choice of shape and size of the sensor structure provides advantages over other technologies. These ceramic sensors show faster response than other types.",
"title": ""
},
{
"docid": "f4271386b02994f33a5eae3c6c67a879",
"text": "Joint FAO/WHO expert's consultation report defines probiotics as: Live microorganisms which when administered in adequate amounts confer a health benefit on the host. Most commonly used probiotics are Lactic acid bacteria (LAB) and bifidobacteria. There are other examples of species used as probiotics (certain yeasts and bacilli). Probiotic supplements are popular now a days. From the beginning of 2000, research on probiotics has increased remarkably. Probiotics are now day's widely studied for their beneficial effects in treatment of many prevailing diseases. Here we reviewed the beneficiary effects of probiotics in some diseases.",
"title": ""
},
{
"docid": "d03a86459dd461dcfac842ae55ae4ebb",
"text": "Convolutional networks are the de-facto standard for analyzing spatio-temporal data such as images, videos, and 3D shapes. Whilst some of this data is naturally dense (e.g., photos), many other data sources are inherently sparse. Examples include 3D point clouds that were obtained using a LiDAR scanner or RGB-D camera. Standard \"dense\" implementations of convolutional networks are very inefficient when applied on such sparse data. We introduce new sparse convolutional operations that are designed to process spatially-sparse data more efficiently, and use them to develop spatially-sparse convolutional networks. We demonstrate the strong performance of the resulting models, called submanifold sparse convolutional networks (SS-CNs), on two tasks involving semantic segmentation of 3D point clouds. In particular, our models outperform all prior state-of-the-art on the test set of a recent semantic segmentation competition.",
"title": ""
},
{
"docid": "e65c5458a27fc5367be4fd6024e8eb43",
"text": "The aims of this article are to review low-voltage vs high-voltage electrical burn complications in adults and to identify novel areas that are not recognized to improve outcomes. An extensive literature search on electrical burn injuries was performed using OVID MEDLINE, PubMed, and EMBASE databases from 1946 to 2015. Studies relating to outcomes of electrical injury in the adult population (≥18 years of age) were included in the study. Forty-one single-institution publications with a total of 5485 electrical injury patients were identified and included in the present study. Fourty-four percent of these patients were low-voltage injuries (LVIs), 38.3% high-voltage injuries (HVIs), and 43.7% with voltage not otherwise specified. Forty-four percentage of studies did not characterize outcomes according to LHIs vs HVIs. Reported outcomes include surgical, medical, posttraumatic, and others (long-term/psychological/rehabilitative), all of which report greater incidence rates in HVI than in LVI. Only two studies report on psychological outcomes such as posttraumatic stress disorder. Mortality rates from electrical injuries are 2.6% in LVI, 5.2% in HVI, and 3.7% in not otherwise specified. Coroner's reports revealed a ratio of 2.4:1 for deaths caused by LVI compared with HVI. HVIs lead to greater morbidity and mortality than LVIs. However, the results of the coroner's reports suggest that immediate mortality from LVI may be underestimated. Furthermore, on the basis of this analysis, we conclude that the majority of studies report electrical injury outcomes; however, the majority of them do not analyze complications by low vs high voltage and often lack long-term psychological and rehabilitation outcomes after electrical injury indicating that a variety of central aspects are not being evaluated or assessed.",
"title": ""
},
{
"docid": "5ee490a307a0b6108701225170690386",
"text": "An ink dating method based on solvent analysis was recently developed using thermal desorption followed by gas chromatography/mass spectrometry (GC/MS) and is currently implemented in several forensic laboratories. The main aims of this work were to implement this method in a new laboratory to evaluate whether results were comparable at three levels: (i) validation criteria, (ii) aging curves, and (iii) results interpretation. While the results were indeed comparable in terms of validation, the method proved to be very sensitive to maintenances. Moreover, the aging curves were influenced by ink composition, as well as storage conditions (particularly when the samples were not stored in \"normal\" room conditions). Finally, as current interpretation models showed limitations, an alternative model based on slope calculation was proposed. However, in the future, a probabilistic approach may represent a better solution to deal with ink sample inhomogeneity.",
"title": ""
},
{
"docid": "e325351fd8eda7ebebd46df0d0a80c19",
"text": "This paper proposes a CLL resonant dc-dc converter as an option for offline applications. This topology can achieve zero-voltage switching from zero load to a full load and zero-current switching for output rectifiers and makes the implementation of a secondary rectifier easy. This paper also presents a novel methodology for designing CLL resonant converters based on efficiency and holdup time requirements. An optimal transformer structure is proposed, which uses a current-type synchronous rectifier (SR) drive scheme. An 800-kHz 250-W CLL resonant converter prototype is built to verify the proposed circuit, design method, transformer structure, and SR drive scheme.",
"title": ""
},
{
"docid": "1d0dbfe15768703f7d5a1a56bbee3cac",
"text": "This paper investigates the effect of non-audit services on audit quality. Following the announcement of the requirement to disclose non-audit fees, approximately one-third of UK quoted companies disclosed before the requirement became effective. Whilst distressed companies were more likely to disclose early, auditor size, directors’ shareholdings and non-audit fees were not signi cantly correlated with early disclosure. These results cast doubt on the view that voluntary disclosure of non-audit fees was used to signal audit quality. The evidence also indicates a positive weakly signi cant relationship between disclosed non-audit fees and audit quali cations. This suggests that when non-audit fees are disclosed, the provision of non-audit services does not reduce audit quality.",
"title": ""
},
{
"docid": "4ecf150613d45ae0f92485b8faa0deef",
"text": "Query optimizers in current database systems are designed to pick a single efficient plan for a given query based on current statistical properties of the data. However, different subsets of the data can sometimes have very different statistical properties. In such scenarios it can be more efficient to process different subsets of the data for a query using different plans. We propose a new query processing technique called content-based routing (CBR) that eliminates the single-plan restriction in current systems. We present low-overhead adaptive algorithms that partition input data based on statistical properties relevant to query execution strategies, and efficiently route individual tuples through customized plans based on their partition. We have implemented CBR as an extension to the Eddies query processor in the TelegraphCQ system, and we present an extensive experimental evaluation showing the significant performance benefits of CBR.",
"title": ""
},
{
"docid": "63339fb80c01c38911994cd326e483a3",
"text": "Older adults are becoming a significant percentage of the world's population. A multitude of factors, from the normal aging process to the progression of chronic disease, influence the nutrition needs of this very diverse group of people. Appropriate micronutrient intake is of particular importance but is often suboptimal. Here we review the available data regarding micronutrient needs and the consequences of deficiencies in the ever growing aged population.",
"title": ""
},
{
"docid": "9794653cc79a0835851fdc890e908823",
"text": "In 1988, Hickerson proved the celebrated “mock theta conjectures”, a collection of ten identities from Ramanujan’s “lost notebook” which express certain modular forms as linear combinations of mock theta functions. In the context of Maass forms, these identities arise from the peculiar phenomenon that two different harmonic Maass forms may have the same non-holomorphic parts. Using this perspective, we construct several infinite families of modular forms which are differences of mock theta functions.",
"title": ""
},
{
"docid": "c4a74726ac56b0127e5920098e6f0258",
"text": "BACKGROUND\nAttention Deficit Hyperactivity disorder (ADHD) is one of the most common and challenging childhood neurobehavioral disorders. ADHD is known to negatively impact children, their families, and their community. About one-third to one-half of patients with ADHD will have persistent symptoms into adulthood. The prevalence in the United States is estimated at 5-11%, representing 6.4 million children nationwide. The variability in the prevalence of ADHD worldwide and within the US may be due to the wide range of factors that affect accurate assessment of children and youth. Because of these obstacles to assessment, ADHD is under-diagnosed, misdiagnosed, and undertreated.\n\n\nOBJECTIVES\nWe examined factors associated with making and receiving the diagnosis of ADHD. We sought to review the consequences of a lack of diagnosis and treatment for ADHD on children's and adolescent's lives and how their families and the community may be involved in these consequences.\n\n\nMETHODS\nWe reviewed scientific articles looking for factors that impact the identification and diagnosis of ADHD and articles that demonstrate naturalistic outcomes of diagnosis and treatment. The data bases PubMed and Google scholar were searched from the year 1995 to 2015 using the search terms \"ADHD, diagnosis, outcomes.\" We then reviewed abstracts and reference lists within those articles to rule out or rule in these or other articles.\n\n\nRESULTS\nMultiple factors have significant impact in the identification and diagnosis of ADHD including parents, healthcare providers, teachers, and aspects of the environment. Only a few studies detailed the impact of not diagnosing ADHD, with unclear consequences independent of treatment. A more significant number of studies have examined the impact of untreated ADHD. The experience around receiving a diagnosis described by individuals with ADHD provides some additional insights.\n\n\nCONCLUSION\nADHD diagnosis is influenced by perceptions of many different members of a child's community. A lack of clear understanding of ADHD and the importance of its diagnosis and treatment still exists among many members of the community including parents, teachers, and healthcare providers. More basic and clinical research will improve methods of diagnosis and information dissemination. Even before further advancements in science, strong partnerships between clinicians and patients with ADHD may be the best way to reduce the negative impacts of this disorder.",
"title": ""
},
{
"docid": "87ac799402c785e68db14636b0725523",
"text": "One of the challenges of creating applications from confederations of Internet-enabled things is the complexity of having to deal with spontaneously interacting and partially available heterogeneous devices. In this paper we describe the features of the MAGIC Broker 2 (MB2) a platform designed to offer a simple and consistent programming interface for collections of things. We report on the key abstractions offered by the platform and report on its use for developing two IoT applications involving spontaneous device interaction: 1) mobile phones and public displays, and 2) a web-based sensor actuator network portal called Sense Tecnic (STS). We discuss how the MB2 abstractions and implementation have evolved over time to the current design. Finally we present a preliminary performance evaluation and report qualitatively on the developers' experience of using our platform.",
"title": ""
},
{
"docid": "33cab0ec47af5e40d64e34f8ffc7dd6f",
"text": "This inaugural article has a twofold purpose: (i) to present a simpler and more general justification of the fundamental scaling laws of quasibrittle fracture, bridging the asymptotic behaviors of plasticity, linear elastic fracture mechanics, and Weibull statistical theory of brittle failure, and (ii) to give a broad but succinct overview of various applications and ramifications covering many fields, many kinds of quasibrittle materials, and many scales (from 10(-8) to 10(6) m). The justification rests on developing a method to combine dimensional analysis of cohesive fracture with second-order accurate asymptotic matching. This method exploits the recently established general asymptotic properties of the cohesive crack model and nonlocal Weibull statistical model. The key idea is to select the dimensionless variables in such a way that, in each asymptotic case, all of them vanish except one. The minimal nature of the hypotheses made explains the surprisingly broad applicability of the scaling laws.",
"title": ""
},
{
"docid": "377e9bfebd979c25728fdede2af74335",
"text": "Youth Gangs: An Overview, the initial Bulletin in this series, brings together available knowledge on youth gangs by reviewing data and research. The author begins with a look at the history of youth gangs and their demographic characteristics. He then assesses the scope of the youth gang problem, including gang problems in juvenile detention and correctional facilities. A review of gang studies provides a clearer understanding of several issues. An extensive list of references is also included for further review.",
"title": ""
}
] | scidocsrr |
d44ed5c436ff5cec861c3e49d122fab2 | Design space exploration of FPGA accelerators for convolutional neural networks | [
{
"docid": "5c8c391a10f32069849d743abc5e8210",
"text": "We present a massively parallel coprocessor for accelerating Convolutional Neural Networks (CNNs), a class of important machine learning algorithms. The coprocessor functional units, consisting of parallel 2D convolution primitives and programmable units performing sub-sampling and non-linear functions specific to CNNs, implement a “meta-operator” to which a CNN may be compiled to. The coprocessor is serviced by distributed off-chip memory banks with large data bandwidth. As a key feature, we use low precision data and further increase the effective memory bandwidth by packing multiple words in every memory operation, and leverage the algorithm’s simple data access patterns to use off-chip memory as a scratchpad for intermediate data, critical for CNNs. A CNN is mapped to the coprocessor hardware primitives with instructions to transfer data between the memory and coprocessor. We have implemented a prototype of the CNN coprocessor on an off-the-shelf PCI FPGA card with a single Xilinx Virtex5 LX330T FPGA and 4 DDR2 memory banks totaling 1GB. The coprocessor prototype can process at the rate of 3.4 billion multiply accumulates per second (GMACs) for CNN forward propagation, a speed that is 31x faster than a software implementation on a 2.2 GHz AMD Opteron processor. For a complete face recognition application with the CNN on the coprocessor and the rest of the image processing tasks on the host, the prototype is 6-10x faster, depending on the host-coprocessor bandwidth.",
"title": ""
}
] | [
{
"docid": "0939a703cb2eeb9396c4e681f95e1e4d",
"text": "Learning-based methods for visual segmentation have made progress on particular types of segmentation tasks, but are limited by the necessary supervision, the narrow definitions of fixed tasks, and the lack of control during inference for correcting errors. To remedy the rigidity and annotation burden of standard approaches, we address the problem of few-shot segmentation: given few image and few pixel supervision, segment any images accordingly. We propose guided networks, which extract a latent task representation from any amount of supervision, and optimize our architecture end-to-end for fast, accurate few-shot segmentation. Our method can switch tasks without further optimization and quickly update when given more guidance. We report the first results for segmentation from one pixel per concept and show real-time interactive video segmentation. Our unified approach propagates pixel annotations across space for interactive segmentation, across time for video segmentation, and across scenes for semantic segmentation. Our guided segmentor is state-of-the-art in accuracy for the amount of annotation and time. See http://github.com/shelhamer/revolver for code, models, and more details.",
"title": ""
},
{
"docid": "8f29a231b801a018a6d18befc0d06d0b",
"text": "The paper introduces a deep learningbased Twitter hate-speech text classification system. The classifier assigns each tweet to one of four predefined categories: racism, sexism, both (racism and sexism) and non-hate-speech. Four Convolutional Neural Network models were trained on resp. character 4-grams, word vectors based on semantic information built using word2vec, randomly generated word vectors, and word vectors combined with character n-grams. The feature set was down-sized in the networks by maxpooling, and a softmax function used to classify tweets. Tested by 10-fold crossvalidation, the model based on word2vec embeddings performed best, with higher precision than recall, and a 78.3% F-score.",
"title": ""
},
{
"docid": "9b60816097ccdff7b1eec177aac0b9b8",
"text": "We introduce a neural network that represents sentences by composing their words according to induced binary parse trees. We use Tree-LSTM as our composition function, applied along a tree structure found by a fully differentiable natural language chart parser. Our model simultaneously optimises both the composition function and the parser, thus eliminating the need for externally-provided parse trees which are normally required for Tree-LSTM. It can therefore be seen as a tree-based RNN that is unsupervised with respect to the parse trees. As it is fully differentiable, our model is easily trained with an off-the-shelf gradient descent method and backpropagation. We demonstrate that it achieves better performance compared to various supervised Tree-LSTM architectures on a textual entailment task and a reverse dictionary task.",
"title": ""
},
{
"docid": "2e812c0a44832721fcbd7272f9f6a465",
"text": "Previous research has shown that people differ in their implicit theories about the essential characteristics of intelligence and emotions. Some people believe these characteristics to be predetermined and immutable (entity theorists), whereas others believe that these characteristics can be changed through learning and behavior training (incremental theorists). The present study provides evidence that in healthy adults (N = 688), implicit beliefs about emotions and emotional intelligence (EI) may influence performance on the ability-based Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT). Adults in our sample with incremental theories about emotions and EI scored higher on the MSCEIT than entity theorists, with implicit theories about EI showing a stronger relationship to scores than theories about emotions. Although our participants perceived both emotion and EI as malleable, they viewed emotions as more malleable than EI. Women and young adults in general were more likely to be incremental theorists than men and older adults. Furthermore, we found that emotion and EI theories mediated the relationship of gender and age with ability EI. Our findings suggest that people's implicit theories about EI may influence their emotional abilities, which may have important consequences for personal and professional EI training.",
"title": ""
},
{
"docid": "5ea42460dc2bdd2ebc2037e35e01dca9",
"text": "Mobile edge clouds (MECs) are small cloud-like infrastructures deployed in close proximity to users, allowing users to have seamless and low-latency access to cloud services. When users move across different locations, their service applications often need to be migrated to follow the user so that the benefit of MEC is maintained. In this paper, we propose a layered framework for migrating running applications that are encapsulated either in virtual machines (VMs) or containers. We evaluate the migration performance of various real applications under the proposed framework.",
"title": ""
},
{
"docid": "a9052b10f9750d58eb33b9e5d564ee6e",
"text": "Cyber Physical Systems (CPS) play significant role in shaping smart manufacturing systems. CPS integrate computation with physical processes where behaviors are represented in both cyber and physical parts of the system. In order to understand CPS in the context of smart manufacturing, an overview of CPS technologies, components, and relevant standards is presented. A detailed technical review of the existing engineering tools and practices from major control vendors has been conducted. Furthermore, potential research areas have been identified in order to enhance the tools functionalities and capabilities in supporting CPS development process.",
"title": ""
},
{
"docid": "a8f27679e13572d00d5eae3496cec014",
"text": "Today, we are forward to meeting an older people society in the world. The elderly people have become a high risk of dementia or depression. In recent years, with the rapid development of internet of things (IoT) techniques, it has become a feasible solution to build a system that combines IoT and cloud techniques for detecting and preventing the elderly dementia or depression. This paper proposes an IoT-based elderly behavioral difference warning system for early depression and dementia warning. The proposed system is composed of wearable smart glasses, a BLE-based indoor trilateration position, and a cloud-based service platform. As a result, the proposed system can not only reduce human and medical costs, but also improve the cure rate of depression or delay the deterioration of dementia.",
"title": ""
},
{
"docid": "2e4ac47cdc063d76089c17f30a379765",
"text": "Determination of the type and origin of the body fluids found at a crime scene can give important insights into crime scene reconstruction by supporting a link between sample donors and actual criminal acts. For more than a century, numerous types of body fluid identification methods have been developed, such as chemical tests, immunological tests, protein catalytic activity tests, spectroscopic methods and microscopy. However, these conventional body fluid identification methods are mostly presumptive, and are carried out for only one body fluid at a time. Therefore, the use of a molecular genetics-based approach using RNA profiling or DNA methylation detection has been recently proposed to supplant conventional body fluid identification methods. Several RNA markers and tDMRs (tissue-specific differentially methylated regions) which are specific to forensically relevant body fluids have been identified, and their specificities and sensitivities have been tested using various samples. In this review, we provide an overview of the present knowledge and the most recent developments in forensic body fluid identification and discuss its possible practical application to forensic casework.",
"title": ""
},
{
"docid": "05b4df16c35a89ee2a5b9ac482e0a297",
"text": "Intensity-based classification of MR images has proven problematic, even when advanced techniques are used. Intrascan and interscan intensity inhomogeneities are a common source of difficulty. While reported methods have had some success in correcting intrascan inhomogeneities, such methods require supervision for the individual scan. This paper describes a new method called adaptive segmentation that uses knowledge of tissue intensity properties and intensity inhomogeneities to correct and segment MR images. Use of the expectation-maximization (EM) algorithm leads to a method that allows for more accurate segmentation of tissue types as well as better visualization of magnetic resonance imaging (MRI) data, that has proven to be effective in a study that includes more than 1000 brain scans. Implementation and results are described for segmenting the brain in the following types of images: axial (dual-echo spin-echo), coronal [three dimensional Fourier transform (3-DFT) gradient-echo T1-weighted] all using a conventional head coil, and a sagittal section acquired using a surface coil. The accuracy of adaptive segmentation was found to be comparable with manual segmentation, and closer to manual segmentation than supervised multivariant classification while segmenting gray and white matter.",
"title": ""
},
{
"docid": "e2c9c7c26436f0f7ef0067660b5f10b8",
"text": "The naive Bayesian classifier (NBC) is a simple yet very efficient classification technique in machine learning. But the unpractical condition independence assumption of NBC greatly degrades its performance. There are two primary ways to improve NBC's performance. One is to relax the condition independence assumption in NBC. This method improves NBC's accuracy by searching additional condition dependencies among attributes of the samples in a scope. It usually involves in very complex search algorithms. Another is to change the representation of the samples by creating new attributes from the original attributes, and construct NBC from these new attributes while keeping the condition independence assumption. Key problem of this method is to guarantee strong condition independencies among the new attributes. In the paper, a new means of making attribute set, which maps the original attributes to new attributes according to the information geometry and Fisher score, is presented, and then the FS-NBC on the new attributes is constructed. The condition dependence relation among the new attributes theoretically is discussed. We prove that these new attributes are condition independent of each other under certain conditions. The experimental results show that our method improves performance of NBC excellently",
"title": ""
},
{
"docid": "4816f221d67922009a308058139aa56b",
"text": "In this paper we study quantum computation from a complexity theoretic viewpoint. Our first result is the existence of an efficient universal quantum Turing machine in Deutsch’s model of a quantum Turing machine (QTM) [Proc. Roy. Soc. London Ser. A, 400 (1985), pp. 97–117]. This construction is substantially more complicated than the corresponding construction for classical Turing machines (TMs); in fact, even simple primitives such as looping, branching, and composition are not straightforward in the context of quantum Turing machines. We establish how these familiar primitives can be implemented and introduce some new, purely quantum mechanical primitives, such as changing the computational basis and carrying out an arbitrary unitary transformation of polynomially bounded dimension. We also consider the precision to which the transition amplitudes of a quantum Turing machine need to be specified. We prove that O(log T ) bits of precision suffice to support a T step computation. This justifies the claim that the quantum Turing machine model should be regarded as a discrete model of computation and not an analog one. We give the first formal evidence that quantum Turing machines violate the modern (complexity theoretic) formulation of the Church–Turing thesis. We show the existence of a problem, relative to an oracle, that can be solved in polynomial time on a quantum Turing machine, but requires superpolynomial time on a bounded-error probabilistic Turing machine, and thus not in the class BPP. The class BQP of languages that are efficiently decidable (with small error-probability) on a quantum Turing machine satisfies BPP ⊆ BQP ⊆ P. Therefore, there is no possibility of giving a mathematical proof that quantum Turing machines are more powerful than classical probabilistic Turing machines (in the unrelativized setting) unless there is a major breakthrough in complexity theory.",
"title": ""
},
{
"docid": "a0d34b1c003b7e88c2871deaaba761ed",
"text": "Sentence simplification aims to make sentences easier to read and understand. Most recent approaches draw on insights from machine translation to learn simplification rewrites from monolingual corpora of complex and simple sentences. We address the simplification problem with an encoder-decoder model coupled with a deep reinforcement learning framework. Our model, which we call DRESS (as shorthand for Deep REinforcement Sentence Simplification), explores the space of possible simplifications while learning to optimize a reward function that encourages outputs which are simple, fluent, and preserve the meaning of the input. Experiments on three datasets demonstrate that our model outperforms competitive simplification systems.1",
"title": ""
},
{
"docid": "df1ea45a4b20042abd99418ff6d1f44e",
"text": "This paper combines wavelet transforms with basic detection theory to develop a new unsupervised method for robustly detecting and localizing spikes in noisy neural recordings. The method does not require the construction of templates, or the supervised setting of thresholds. We present extensive Monte Carlo simulations, based on actual extracellular recordings, to show that this technique surpasses other commonly used methods in a wide variety of recording conditions. We further demonstrate that falsely detected spikes corresponding to our method resemble actual spikes more than the false positives of other techniques such as amplitude thresholding. Moreover, the simplicity of the method allows for nearly real-time execution.",
"title": ""
},
{
"docid": "da816b4a0aea96feceefe22a67c45be4",
"text": "Representation and learning of commonsense knowledge is one of the foundational problems in the quest to enable deep language understanding. This issue is particularly challenging for understanding casual and correlational relationships between events. While this topic has received a lot of interest in the NLP community, research has been hindered by the lack of a proper evaluation framework. This paper attempts to address this problem with a new framework for evaluating story understanding and script learning: the ‘Story Cloze Test’. This test requires a system to choose the correct ending to a four-sentence story. We created a new corpus of 50k five-sentence commonsense stories, ROCStories, to enable this evaluation. This corpus is unique in two ways: (1) it captures a rich set of causal and temporal commonsense relations between daily events, and (2) it is a high quality collection of everyday life stories that can also be used for story generation. Experimental evaluation shows that a host of baselines and state-of-the-art models based on shallow language understanding struggle to achieve a high score on the Story Cloze Test. We discuss these implications for script and story learning, and offer suggestions for deeper language understanding.",
"title": ""
},
{
"docid": "3e727d70f141f52fb9c432afa3747ceb",
"text": "In this paper, we propose an improvement of Adversarial Transformation Networks(ATN) [1]to generate adversarial examples, which can fool white-box models and blackbox models with a state of the art performance and won the SECOND place in the non-target task in CAAD 2018. In this section, we first introduce the whole architecture about our method, then we present our improvement on loss functions to generate adversarial examples satisfying the L∞ norm restriction in the non-targeted attack problem. Then we illustrate how to use a robust-enhance module to make our adversarial examples more robust and have better transfer-ability. At last we will show our method on how to attack an ensemble of models.",
"title": ""
},
{
"docid": "a0d1d59fc987d90e500b3963ac11b2ad",
"text": "The purpose of this paper is to present the applicability of THOMAS, an architecture specially designed to model agent-based virtual organizations, in the development of a multiagent system for managing and planning routes for clients in a mall. In order to build virtual organizations, THOMAS offers mechanisms to take into account their structure, behaviour, dynamic, norms and environment. Moreover, one of the primary characteristics of the THOMAS architecture is the use of agents with reasoning and planning capabilities. These agents can perform a dynamic reorganization when they detect changes in the environment. The proposed architecture is composed of a set of related modules that are appropriate for developing systems in highly volatile environments similar to the one presented in this study. This paper presents THOMAS as well as the results obtained after having applied the system to a case study. & 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fd171b73ea88d9b862149e1c1d72aea8",
"text": "Localization of people and devices is one of the main building blocks of context aware systems since the user position represents the core information for detecting user's activities, devices activations, proximity to points of interest, etc. While for outdoor scenarios Global Positioning System (GPS) constitutes a reliable and easily available technology, for indoor scenarios GPS is largely unavailable. In this paper we present a range-based indoor localization system that exploits the Received Signal Strength (RSS) of Bluetooth Low Energy (BLE) beacon packets broadcast by anchor nodes and received by a BLE-enabled device. The method used to infer the user's position is based on stigmergy. We exploit the stigmergic marking process to create an on-line probability map identifying the user's position in the indoor environment.",
"title": ""
},
{
"docid": "b959bce5ea9db71d677586eb1b6f023e",
"text": "We consider autonomous racing of two cars and present an approach to formulate the decision making as a non-cooperative non-zero-sum game. The game is formulated by restricting both players to fulfill static track constraints as well as collision constraints which depend on the combined actions of the two players. At the same time the players try to maximize their own progress. In the case where the action space of the players is finite, the racing game can be reformulated as a bimatrix game. For this bimatrix game, we show that the actions obtained by a sequential maximization approach where only the follower considers the action of the leader are identical to a Stackelberg and a Nash equilibrium in pure strategies. Furthermore, we propose a game promoting blocking, by additionally rewarding the leading car for staying ahead at the end of the horizon. We show that this changes the Stackelberg equilibrium, but has a minor influence on the Nash equilibria. For an online implementation, we propose to play the games in a moving horizon fashion, and we present two methods for guaranteeing feasibility of the resulting coupled repeated games. Finally, we study the performance of the proposed approaches in simulation for a set-up that replicates the miniature race car tested at the Automatic Control Laboratory of ETH Zürich. The simulation study shows that the presented games can successfully model different racing behaviors and generate interesting racing situations.",
"title": ""
},
{
"docid": "516ef94fad7f7e5801bf1ef637ffb136",
"text": "With parallelizable attention networks, the neural Transformer is very fast to train. However, due to the auto-regressive architecture and self-attention in the decoder, the decoding procedure becomes slow. To alleviate this issue, we propose an average attention network as an alternative to the self-attention network in the decoder of the neural Transformer. The average attention network consists of two layers, with an average layer that models dependencies on previous positions and a gating layer that is stacked over the average layer to enhance the expressiveness of the proposed attention network. We apply this network on the decoder part of the neural Transformer to replace the original target-side self-attention model. With masking tricks and dynamic programming, our model enables the neural Transformer to decode sentences over four times faster than its original version with almost no loss in training time and translation performance. We conduct a series of experiments on WMT17 translation tasks, where on 6 different language pairs, we obtain robust and consistent speed-ups in decoding.1",
"title": ""
},
{
"docid": "bed29a89354c1dfcebbdde38d1addd1d",
"text": "Eosinophilic skin diseases, commonly termed as eosinophilic dermatoses, refer to a broad spectrum of skin diseases characterized by eosinophil infiltration and/or degranulation in skin lesions, with or without blood eosinophilia. The majority of eosinophilic dermatoses lie in the allergy-related group, including allergic drug eruption, urticaria, allergic contact dermatitis, atopic dermatitis, and eczema. Parasitic infestations, arthropod bites, and autoimmune blistering skin diseases such as bullous pemphigoid, are also common. Besides these, there are several rare types of eosinophilic dermatoses with unknown origin, in which eosinophil infiltration is a central component and affects specific tissue layers or adnexal structures of the skin, such as the dermis, subcutaneous fat, fascia, follicles, and cutaneous vessels. Some typical examples are eosinophilic cellulitis, granuloma faciale, eosinophilic pustular folliculitis, recurrent cutaneous eosinophilic vasculitis, and eosinophilic fasciitis. Although tissue eosinophilia is a common feature shared by these disorders, their clinical and pathological properties differ dramatically. Among these rare entities, eosinophilic pustular folliculitis may be associated with human immunodeficiency virus (HIV) infection or malignancies, and some other diseases, like eosinophilic fasciitis and eosinophilic cellulitis, may be associated with an underlying hematological disorder, while others are considered idiopathic. However, for most of these rare eosinophilic dermatoses, the causes and the pathogenic mechanisms remain largely unknown, and systemic, high-quality clinical investigations are needed for advances in better strategies for clinical diagnosis and treatment. Here, we present a comprehensive review on the etiology, pathogenesis, clinical features, and management of these rare entities, with an emphasis on recent advances and current consensus.",
"title": ""
}
] | scidocsrr |
aefff8b42a9a99977c326fb52e70fbaf | A Novel Association Rule Mining Method of Big Data for Power Transformers State Parameters Based on Probabilistic Graph Model | [
{
"docid": "55b405991dc250cd56be709d53166dca",
"text": "In Data Mining, the usefulness of association rules is strongly limited by the huge amount of delivered rules. To overcome this drawback, several methods were proposed in the literature such as item set concise representations, redundancy reduction, and post processing. However, being generally based on statistical information, most of these methods do not guarantee that the extracted rules are interesting for the user. Thus, it is crucial to help the decision-maker with an efficient post processing step in order to reduce the number of rules. This paper proposes a new interactive approach to prune and filter discovered rules. First, we propose to use ontologies in order to improve the integration of user knowledge in the post processing task. Second, we propose the Rule Schema formalism extending the specification language proposed by Liu et al. for user expectations. Furthermore, an interactive framework is designed to assist the user throughout the analyzing task. Applying our new approach over voluminous sets of rules, we were able, by integrating domain expert knowledge in the post processing step, to reduce the number of rules to several dozens or less. Moreover, the quality of the filtered rules was validated by the domain expert at various points in the interactive process. KeywordsClustering, classification, and association rules, interactive data exploration and discovery, knowledge management applications.",
"title": ""
}
] | [
{
"docid": "49f4fd5bcb184e64a9874b864979eb79",
"text": "A major research goal for compilers and environments is the automatic derivation of tools from formal specifications. However, the formal model of the language is often inadequate; in particular, LR(k) grammars are unable to describe the natural syntax of many languages, such as C++ and Fortran, which are inherently non-deterministic. Designers of batch compilers work around such limitations by combining generated components with ad hoc techniques (for instance, performing partial type and scope analysis in tandem with parsing). Unfortunately, the complexity of incremental systems precludes the use of batch solutions. The inability to generate incremental tools for important languages inhibits the widespread use of language-rich interactive environments.We address this problem by extending the language model itself, introducing a program representation based on parse dags that is suitable for both batch and incremental analysis. Ambiguities unresolved by one stage are retained in this representation until further stages can complete the analysis, even if the reaolution depends on further actions by the user. Representing ambiguity explicitly increases the number and variety of languages that can be analyzed incrementally using existing methods.To create this representation, we have developed an efficient incremental parser for general context-free grammars. Our algorithm combines Tomita's generalized LR parser with reuse of entire subtrees via state-matching. Disambiguation can occur statically, during or after parsing, or during semantic analysis (using existing incremental techniques); program errors that preclude disambiguation retsin multiple interpretations indefinitely. Our representation and analyses gain efficiency by exploiting the local nature of ambiguities: for the SPEC95 C programs, the explicit representation of ambiguity requires only 0.5% additional space and less than 1% additional time during reconstruction.",
"title": ""
},
{
"docid": "ad59ca3f7c945142baf9353eeb68e504",
"text": "This essay considers dynamic security design and corporate financing, with particular emphasis on informational micro-foundations. The central idea is that firm insiders must retain an appropriate share of firm risk, either to align their incentives with those of outside investors (moral hazard) or to signal favorable information about the quality of the firm’s assets. Informational problems lead to inevitable inefficiencies imperfect risk sharing, the possibility of bankruptcy, investment distortions, etc. The design of contracts that minimize these inefficiencies is a central question. This essay explores the implications of dynamic security design on firm operations and asset prices.",
"title": ""
},
{
"docid": "63e58ac7e6f3b4a463e8f8182fee9be5",
"text": "In this work, we propose “global style tokens” (GSTs), a bank of embeddings that are jointly trained within Tacotron, a state-of-the-art end-toend speech synthesis system. The embeddings are trained with no explicit labels, yet learn to model a large range of acoustic expressiveness. GSTs lead to a rich set of significant results. The soft interpretable “labels” they generate can be used to control synthesis in novel ways, such as varying speed and speaking style – independently of the text content. They can also be used for style transfer, replicating the speaking style of a single audio clip across an entire long-form text corpus. When trained on noisy, unlabeled found data, GSTs learn to factorize noise and speaker identity, providing a path towards highly scalable but robust speech synthesis.",
"title": ""
},
{
"docid": "3ea5607d04419aae36592b6dcce25304",
"text": "Optimization problems with rank constraints arise in many applications, including matrix regression, structured PCA, matrix completion and matrix decomposition problems. An attractive heuristic for solving such problems is to factorize the low-rank matrix, and to run projected gradient descent on the nonconvex factorized optimization problem. The goal of this problem is to provide a general theoretical framework for understanding when such methods work well, and to characterize the nature of the resulting fixed point. We provide a simple set of conditions under which projected gradient descent, when given a suitable initialization, converges geometrically to a statistically useful solution. Our results are applicable even when the initial solution is outside any region of local convexity, and even when the problem is globally concave. Working in a non-asymptotic framework, we show that our conditions are satisfied for a wide range of concrete models, including matrix regression, structured PCA, matrix completion with real and quantized observations, matrix decomposition, and graph clustering problems. Simulation results show excellent agreement with the theoretical predictions.",
"title": ""
},
{
"docid": "298df39e9b415bc1eed95ed56d3f32df",
"text": "In this work, we present a true 3D 128 Gb 2 bit/cell vertical-NAND (V-NAND) Flash product for the first time. The use of barrier-engineered materials and gate all-around structure in the 3D V-NAND cell exhibits advantages over 1 × nm planar NAND, such as small Vth shift due to small cell coupling and narrow natural Vth distribution. Also, a negative counter-pulse scheme realizes a tightly programmed cell distribution. In order to reduce the effect of a large WL coupling, a glitch-canceling discharge scheme and a pre-offset control scheme is implemented. Furthermore, an external high-voltage supply scheme along with the proper protection scheme for a high-voltage failure is used to achieve low power consumption. The chip accomplishes 50 MB/s write throughput with 3 K endurance for typical embedded applications. Also, extended endurance of 35 K is achieved with 36 MB/s of write throughput for data center and enterprise SSD applications.",
"title": ""
},
{
"docid": "2e1cb87045b5356a965aa52e9e745392",
"text": "Community detection is a common problem in graph data analytics that consists of finding groups of densely connected nodes with few connections to nodes outside of the group. In particular, identifying communities in large-scale networks is an important task in many scientific domains. In this review, we evaluated eight state-of-the-art and five traditional algorithms for overlapping and disjoint community detection on large-scale real-world networks with known ground-truth communities. These 13 algorithms were empirically compared using goodness metrics that measure the structural properties of the identified communities, as well as performance metrics that evaluate these communities against the ground-truth. Our results show that these two types of metrics are not equivalent. That is, an algorithm may perform well in terms of goodness metrics, but poorly in terms of performance metrics, or vice versa. © 2014 The Authors. WIREs Computational Statistics published by Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "4ef36c602963036f928b9dcb75592f78",
"text": "Health care-associated infections constitute one of the greatest challenges of modern medicine. Despite compelling evidence that proper hand washing can reduce the transmission of pathogens to patients and the spread of antimicrobial resistance, the adherence of health care workers to recommended hand-hygiene practices has remained unacceptably low. One of the key elements in improving hand-hygiene practice is the use of an alcohol-based hand rub instead of washing with soap and water. An alcohol-based hand rub requires less time, is microbiologically more effective, and is less irritating to skin than traditional hand washing with soap and water. Therefore, alcohol-based hand rubs should replace hand washing as the standard for hand hygiene in health care settings in all situations in which the hands are not visibly soiled. It is also important to change gloves between each patient contact and to use hand-hygiene procedures after glove removal. Reducing health care-associated infections requires that health care workers take responsibility for ensuring that hand hygiene becomes an everyday part of patient care.",
"title": ""
},
{
"docid": "ef3b9dd6b463940bc57cdf7605c24b1e",
"text": "With the rapid development of cloud storage, data security in storage receives great attention and becomes the top concern to block the spread development of cloud service. In this paper, we systematically study the security researches in the storage systems. We first present the design criteria that are used to evaluate a secure storage system and summarize the widely adopted key technologies. Then, we further investigate the security research in cloud storage and conclude the new challenges in the cloud environment. Finally, we give a detailed comparison among the selected secure storage systems and draw the relationship between the key technologies and the design criteria.",
"title": ""
},
{
"docid": "3a757d129c52b5c07c514d613795afce",
"text": "Camera motion estimation is useful for a range of applications. Usually, feature tracking is performed through the sequence of images to determine correspondences. Furthermore, robust statistical techniques are normally used to handle large number of outliers in correspondences. This paper proposes a new method that avoids both. Motion is calculated between two consecutive stereo images without any pre-knowledge or prediction about feature location or the possibly large camera movement. This permits a lower frame rate and almost arbitrary movements. Euclidean constraints are used to incrementally select inliers from a set of initial correspondences, instead of using robust statistics that has to handle all inliers and outliers together. These constraints are so strong that the set of initial correspondences can contain several times more outliers than inliers. Experiments on a worst-case stereo sequence show that the method is robust, accurate and can be used in real-time.",
"title": ""
},
{
"docid": "d026ebfc24e3e48d0ddb373f71d63162",
"text": "The claustrum has been proposed as a possible neural candidate for the coordination of conscious experience due to its extensive ‘connectome’. Herein we propose that the claustrum contributes to consciousness by supporting the temporal integration of cortical oscillations in response to multisensory input. A close link between conscious awareness and interval timing is suggested by models of consciousness and conjunctive changes in meta-awareness and timing in multiple contexts and conditions. Using the striatal beatfrequency model of interval timing as a framework, we propose that the claustrum integrates varying frequencies of neural oscillations in different sensory cortices into a coherent pattern that binds different and overlapping temporal percepts into a unitary conscious representation. The proposed coordination of the striatum and claustrum allows for time-based dimensions of multisensory integration and decision-making to be incorporated into consciousness.",
"title": ""
},
{
"docid": "e0a8035f9e61c78a482f2e237f7422c6",
"text": "Aims: This paper introduces how substantial decision-making and leadership styles relates with each other. Decision-making styles are connected with leadership practices and institutional arrangements. Study Design: Qualitative research approach was adopted in this study. A semi structure interview was use to elicit data from the participants on both leadership styles and decision-making. Place and Duration of Study: Institute of Education international Islamic University",
"title": ""
},
{
"docid": "4872da79e7d01e8bb2a70ab17c523118",
"text": "In recent years, social media has become a customer touch-point for the business functions of marketing, sales and customer service. We aim to show that intention analysis might be useful to these business functions and that it can be performed effectively on short texts (at the granularity level of a single sentence). We demonstrate a scheme of categorization of intentions that is amenable to automation using simple machine learning techniques that are language-independent. We discuss the grounding that this scheme of categorization has in speech act theory. In the demonstration we go over a number of usage scenarios in an attempt to show that the use of automatic intention detection tools would benefit the business functions of sales, marketing and service. We also show that social media can be used not just to convey pleasure or displeasure (that is, to express sentiment) but also to discuss personal needs and to report problems (to express intentions). We evaluate methods for automatically discovering intentions in text, and establish that it is possible to perform intention analysis on social media with an accuracy of 66.97%± 0.10%.",
"title": ""
},
{
"docid": "ce12e1d38a2757c621a50209db5ce008",
"text": "Schloss Reisensburg. Physica-Verlag, 1994. Summary Traditional tests of the accuracy of statistical software have been based on a few limited paradigms for ordinary least squares regression. Test suites based on these criteria served the statistical computing community well when software was limited to a few simple procedures. Recent developments in statistical computing require both more and less sophisticated measures, however. We need tests for a broader variety of procedures and ones which are more likely to reveal incompetent programming. This paper summarizes these issues.",
"title": ""
},
{
"docid": "04b7d1197e9e5d78e948e0c30cbdfcfe",
"text": "Context: Software development depends significantly on team performance, as does any process that involves human interaction. Objective: Most current development methods argue that teams should self-manage. Our objective is thus to provide a better understanding of the nature of self-managing agile teams, and the teamwork challenges that arise when introducing such teams. Method: We conducted extensive fieldwork for 9 months in a software development company that introduced Scrum. We focused on the human sensemaking, on how mechanisms of teamwork were understood by the people involved. Results: We describe a project through Dickinson and McIntyre’s teamwork model, focusing on the interrelations between essential teamwork components. Problems with team orientation, team leadership and coordination in addition to highly specialized skills and corresponding division of work were important barriers for achieving team effectiveness. Conclusion: Transitioning from individual work to self-managing teams requires a reorientation not only by developers but also by management. This transition takes time and resources, but should not be neglected. In addition to Dickinson and McIntyre’s teamwork components, we found trust and shared mental models to be of fundamental importance. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "29cbdeb95a221820a6425e1249763078",
"text": "The concept of “Industry 4.0” that covers the topics of Internet of Things, cyber-physical system, and smart manufacturing, is a result of increasing demand of mass customized manufacturing. In this paper, a smart manufacturing framework of Industry 4.0 is presented. In the proposed framework, the shop-floor entities (machines, conveyers, etc.), the smart products and the cloud can communicate and negotiate interactively through networks. The shop-floor entities can be considered as agents based on the theory of multi-agent system. These agents implement dynamic reconfiguration in a collaborative manner to achieve agility and flexibility. However, without global coordination, problems such as load-unbalance and inefficiency may occur due to different abilities and performances of agents. Therefore, the intelligent evaluation and control algorithms are proposed to reduce the load-unbalance with the assistance of big data feedback. The experimental results indicate that the presented algorithms can easily be deployed in smart manufacturing system and can improve both load-balance and efficiency.",
"title": ""
},
{
"docid": "ff5d2e3b2c2e5200f70f2644bbc521d6",
"text": "The idea that the conceptual system draws on sensory and motor systems has received considerable experimental support in recent years. Whether the tight coupling between sensory-motor and conceptual systems is modulated by factors such as context or task demands is a matter of controversy. Here, we tested the context sensitivity of this coupling by using action verbs in three different types of sentences in an fMRI study: literal action, apt but non-idiomatic action metaphors, and action idioms. Abstract sentences served as a baseline. The result showed involvement of sensory-motor areas for literal and metaphoric action sentences, but not for idiomatic ones. A trend of increasing sensory-motor activation from abstract to idiomatic to metaphoric to literal sentences was seen. These results support a gradual abstraction process whereby the reliance on sensory-motor systems is reduced as the abstractness of meaning as well as conventionalization is increased, highlighting the context sensitive nature of semantic processing.",
"title": ""
},
{
"docid": "1dcc48994fada1b46f7b294e08f2ed5d",
"text": "This paper presents an application-specific integrated processor for an angular estimation system that works with 9-D inertial measurement units. The application-specific instruction-set processor (ASIP) was implemented on field-programmable gate array and interfaced with a gyro-plus-accelerometer 6-D sensor and with a magnetic compass. Output data were recorded on a personal computer and also used to perform a live demo. During system modeling and design, it was chosen to represent angular position data with a quaternion and to use an extended Kalman filter as sensor fusion algorithm. For this purpose, a novel two-stage filter was designed: The first stage uses accelerometer data, and the second one uses magnetic compass data for angular position correction. This allows flexibility, less computational requirements, and robustness to magnetic field anomalies. The final goal of this work is to realize an upgraded application-specified integrated circuit that controls the microelectromechanical systems (MEMS) sensor and integrates the ASIP. This will allow the MEMS sensor gyro plus accelerometer and the angular estimation system to be contained in a single package; this system might optionally work with an external magnetic compass.",
"title": ""
},
{
"docid": "cf5205e3b27867324ef86f18083653de",
"text": "Sometimes, in order to properly restore teeth, surgical intervention in the form of a crown-lengthening procedure is required. Crown lengthening is a periodontal resective procedure, aimed at removing supporting periodontal structures to gain sound tooth structure above the alveolar crest level. Periodontal health is of paramount importance for all teeth, both sound and restored. For the restorative dentist to utilize crown lengthening, it is important to understand the concept of biologic width, indications, techniques and other principles. This article reviews these basic concepts of clinical crown lengthening and presents four clinical cases utilizing crown lengthening as an integral part of treatments, to restore teeth and their surrounding tissues to health.",
"title": ""
},
{
"docid": "fb87648c3bb77b1d9b162a8e9dbc5e86",
"text": "With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful objects detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.",
"title": ""
},
{
"docid": "be0f836ec6431b74342b670921ac41f7",
"text": "This paper addresses the issue of expert finding in a social network. The task of expert finding, as one of the most important research issues in social networks, is aimed at identifying persons with relevant expertise or experience for a given topic. In this paper, we propose a propagation-based approach that takes into consideration of both person local information and network information (e.g. relationships between persons). Experimental results show that our approach can outperform the baseline approach.",
"title": ""
}
] | scidocsrr |
01c3e01d851d2eea8a3d24dcf1cc9afa | New prototype of hybrid 3D-biometric facial recognition system | [
{
"docid": "573f12acd3193045104c7d95bbc89f78",
"text": "Automatic Face Recognition is one of the most emphasizing dilemmas in diverse of potential relevance like in different surveillance systems, security systems, authentication or verification of individual like criminals etc. Adjoining of dynamic expression in face causes a broad range of discrepancies in recognition systems. Facial Expression not only exposes the sensation or passion of any person but can also be used to judge his/her mental views and psychosomatic aspects. This paper is based on a complete survey of face recognition conducted under varying facial expressions. In order to analyze different techniques, motion-based, model-based and muscles-based approaches have been used in order to handle the facial expression and recognition catastrophe. The analysis has been completed by evaluating various existing algorithms while comparing their results in general. It also expands the scope for other researchers for answering the question of effectively dealing with such problems.",
"title": ""
}
] | [
{
"docid": "ac29d60761976a263629a93167516fde",
"text": "Abstruct1-V power supply high-speed low-power digital circuit technology with 0.5-pm multithreshold-voltage CMOS (MTCMOS) is proposed. This technology features both lowthreshold voltage and high-threshold voltage MOSFET’s in a single LSI. The low-threshold voltage MOSFET’s enhance speed Performance at a low supply voltage of 1 V or less, while the high-threshold voltage MOSFET’s suppress the stand-by leakage current during the sleep period. This technology has brought about logic gate characteristics of a 1.7-11s propagation delay time and 0.3-pW/MHz/gate power dissipation with a standard load. In addition, an MTCMOS standard cell library has been developed so that conventional CAD tools can be used to lay out low-voltage LSI’s. To demonstrate MTCMOS’s effectiveness, a PLL LSI based on standard cells was designed as a carrying vehicle. 18-MHz operation at 1 V was achieved using a 0.5-pm CMOS process.",
"title": ""
},
{
"docid": "d63591706309cf602404c34de547184f",
"text": "This paper presents an overview of the inaugural Amazon Picking Challenge along with a summary of a survey conducted among the 26 participating teams. The challenge goal was to design an autonomous robot to pick items from a warehouse shelf. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team’s background, mechanism design, perception apparatus, planning, and control approach. We identify trends in this data, correlate it with each team’s success in the competition, and discuss observations and lessons learned based on survey results and the authors’ personal experiences during the challenge.Note to Practitioners—Perception, motion planning, grasping, and robotic system engineering have reached a level of maturity that makes it possible to explore automating simple warehouse tasks in semistructured environments that involve high-mix, low-volume picking applications. This survey summarizes lessons learned from the first Amazon Picking Challenge, highlighting mechanism design, perception, and motion planning algorithms, as well as software engineering practices that were most successful in solving a simplified order fulfillment task. While the choice of mechanism mostly affects execution speed, the competition demonstrated the systems challenges of robotics and illustrated the importance of combining reactive control with deliberative planning.",
"title": ""
},
{
"docid": "3ea6de664a7ac43a1602b03b46790f0a",
"text": "After reviewing the design of a class of lowpass recursive digital filters having integer multiplier and linear phase characteristics, the possibilities for extending the class to include high pass, bandpass, and bandstop (‘notch’) filters are described. Experience with a PDP 11 computer has shown that these filters may be programmed simply using machine code, and that online operation at sampling rates up to about 8 kHz is possible. The practical application of such filters is illustrated by using a notch desgin to remove mains-frequency interference from an e.c.g. waveform. Après avoir passé en revue la conception d'un type de filtres digitaux récurrents passe-bas à multiplicateurs incorporés et à caractéristiques de phase linéaires, cet article décrit les possibilités d'extension de ce type aux filtres, passe-haut, passe-bande et à élimination de bande. Une expérience menée avec un ordinateur PDP 11 a indiqué que ces filtres peuvent être programmés de manière simple avec un code machine, et qu'il est possible d'effectuer des opérations en ligne avec des taux d'échantillonnage jusqu'à environ 8 kHz. L'application pratique de tels filtres est illustrée par un exemple dans lequel un filtre à élimination de bande est utilisé pour éliminer les interférences due à la fréquence du courant d'alimentation dans un tracé d'e.c.g. Nach einer Untersuchung der Konstruktion einer Gruppe von Rekursivdigitalfiltern mit niedrigem Durchlässigkeitsbereich und mit ganzzahligen Multipliziereinrichtungen und Linearphaseneigenschaften werden die Möglichkeiten beschrieben, die Gruppe so zu erweitern, daß sie Hochfilter, Bandpaßfilter und Bandstopfilter (“Kerbfilter”) einschließt. Erfahrungen mit einem PDP 11-Computer haben gezeigt, daß diese Filter auf einfache Weise unter Verwendung von Maschinenkode programmiert werden können und daß On-Line-Betrieb bei Entnahmegeschwindigkeiten von bis zu 8 kHz möglich ist. Die praktische Anwendung solcher Filter wird durch Verwendung einer Kerbkonstruktion zur Ausscheidung von Netzfrequenzstörungen von einer ECG-Wellenform illustriert.",
"title": ""
},
{
"docid": "5d21df36697616719bcc3e0ee22a08bd",
"text": "In spite of the significant recent progress, the incorporation of haptics into virtual environments is still in its infancy due to limitations in the hardware, the cost of development, as well as the level of reality they provide. Nonetheless, we believe that the field will one day be one of the groundbreaking media of the future. It has its current holdups but the promise of the future is worth the wait. The technology is becoming cheaper and applications are becoming more forthcoming and apparent. If we can survive this infancy, it will promise to be an amazing revolution in the way we interact with computers and the virtual world. The researchers organize the rapidly increasing multidisciplinary research of haptics into four subareas: human haptics, machine haptics, computer haptics, and multimedia haptics",
"title": ""
},
{
"docid": "4c12d10fd9c2a12e56b56f62f99333f3",
"text": "The science of large-scale brain networks offers a powerful paradigm for investigating cognitive and affective dysfunction in psychiatric and neurological disorders. This review examines recent conceptual and methodological developments which are contributing to a paradigm shift in the study of psychopathology. I summarize methods for characterizing aberrant brain networks and demonstrate how network analysis provides novel insights into dysfunctional brain architecture. Deficits in access, engagement and disengagement of large-scale neurocognitive networks are shown to play a prominent role in several disorders including schizophrenia, depression, anxiety, dementia and autism. Synthesizing recent research, I propose a triple network model of aberrant saliency mapping and cognitive dysfunction in psychopathology, emphasizing the surprising parallels that are beginning to emerge across psychiatric and neurological disorders.",
"title": ""
},
{
"docid": "705b2a837b51ac5e354e1ec0df64a52a",
"text": "BACKGROUND\nGeneralized anxiety disorder (GAD) is a psychiatric disorder characterized by a constant and unspecific anxiety that interferes with daily-life activities. Its high prevalence in general population and the severe limitations it causes, point out the necessity to find new efficient strategies to treat it. Together with the cognitive-behavioural treatments, relaxation represents a useful approach for the treatment of GAD, but it has the limitation that it is hard to be learned. To overcome this limitation we propose the use of virtual reality (VR) to facilitate the relaxation process by visually presenting key relaxing images to the subjects. The visual presentation of a virtual calm scenario can facilitate patients' practice and mastery of relaxation, making the experience more vivid and real than the one that most subjects can create using their own imagination and memory, and triggering a broad empowerment process within the experience induced by a high sense of presence. According to these premises, the aim of the present study is to investigate the advantages of using a VR-based relaxation protocol in reducing anxiety in patients affected by GAD.\n\n\nMETHODS/DESIGN\nThe trial is based on a randomized controlled study, including three groups of 25 patients each (for a total of 75 patients): (1) the VR group, (2) the non-VR group and (3) the waiting list (WL) group. Patients in the VR group will be taught to relax using a VR relaxing environment and audio-visual mobile narratives; patients in the non-VR group will be taught to relax using the same relaxing narratives proposed to the VR group, but without the VR support, and patients in the WL group will not receive any kind of relaxation training. Psychometric and psychophysiological outcomes will serve as quantitative dependent variables, while subjective reports of participants will be used as qualitative dependent variables.\n\n\nCONCLUSION\nWe argue that the use of VR for relaxation represents a promising approach in the treatment of GAD since it enhances the quality of the relaxing experience through the elicitation of the sense of presence. This controlled trial will be able to evaluate the effects of the use of VR in relaxation while preserving the benefits of randomization to reduce bias.\n\n\nTRIAL REGISTRATION\nNCT00602212 (ClinicalTrials.gov).",
"title": ""
},
{
"docid": "2549177f9367d5641a7fc4dfcfaf5c0a",
"text": "Educational data mining is an emerging trend, concerned with developing methods for exploring the huge data that come from the educational system. This data is used to derive the knowledge which is useful in decision making. EDM methods are useful to measure the performance of students, assessment of students and study students’ behavior etc. In recent years, Educational data mining has proven to be more successful at many of the educational statistics problems due to enormous computing power and data mining algorithms. This paper surveys the history and applications of data mining techniques in the educational field. The objective is to introduce data mining to traditional educational system, web-based educational system, intelligent tutoring system, and e-learning. This paper describes how to apply the main data mining methods such as prediction, classification, relationship mining, clustering, and",
"title": ""
},
{
"docid": "9b7ca6e8b7bf87ef61e70ab4c720ec40",
"text": "The support vector machine (SVM) is a widely used tool in classification problems. The SVM trains a classifier by solving an optimization problem to decide which instances of the training data set are support vectors, which are the necessarily informative instances to form the SVM classifier. Since support vectors are intact tuples taken from the training data set, releasing the SVM classifier for public use or shipping the SVM classifier to clients will disclose the private content of support vectors. This violates the privacy-preserving requirements for some legal or commercial reasons. The problem is that the classifier learned by the SVM inherently violates the privacy. This privacy violation problem will restrict the applicability of the SVM. To the best of our knowledge, there has not been work extending the notion of privacy preservation to tackle this inherent privacy violation problem of the SVM classifier. In this paper, we exploit this privacy violation problem, and propose an approach to postprocess the SVM classifier to transform it to a privacy-preserving classifier which does not disclose the private content of support vectors. The postprocessed SVM classifier without exposing the private content of training data is called Privacy-Preserving SVM Classifier (abbreviated as PPSVC). The PPSVC is designed for the commonly used Gaussian kernel function. It precisely approximates the decision function of the Gaussian kernel SVM classifier without exposing the sensitive attribute values possessed by support vectors. By applying the PPSVC, the SVM classifier is able to be publicly released while preserving privacy. We prove that the PPSVC is robust against adversarial attacks. The experiments on real data sets show that the classification accuracy of the PPSVC is comparable to the original SVM classifier.",
"title": ""
},
{
"docid": "e6c32d3fd1bdbfb2cc8742c9b670ce97",
"text": "A framework for skill acquisition is proposed that includes two major stages in the development of a cognitive skill: a declarative stage in which facts about the skill domain are interpreted and a procedural stage in which the domain knowledge is directly embodied in procedures for performing the skill. This general framework has been instantiated in the ACT system in which facts are encoded in a propositional network and procedures are encoded as productions. Knowledge compilation is the process by which the skill transits from the declarative stage to the procedural stage. It consists of the subprocesses of composition, which collapses sequences of productions into single productions, and proceduralization, which embeds factual knowledge into productions. Once proceduralized, further learning processes operate on the skill to make the productions more selective in their range of applications. These processes include generalization, discrimination, and strengthening of productions. Comparisons are made to similar concepts from past learning theories. How these learning mechanisms apply to produce the power law speedup in processing time with practice is discussed.",
"title": ""
},
{
"docid": "641811eac0e8a078cf54130c35fd6511",
"text": "Multi-label text classification (MLTC) aims to assign multiple labels to each sample in the dataset. The labels usually have internal correlations. However, traditional methods tend to ignore the correlations between labels. In order to capture the correlations between labels, the sequence-tosequence (Seq2Seq) model views the MLTC task as a sequence generation problem, which achieves excellent performance on this task. However, the Seq2Seq model is not suitable for the MLTC task in essence. The reason is that it requires humans to predefine the order of the output labels, while some of the output labels in the MLTC task are essentially an unordered set rather than an ordered sequence. This conflicts with the strict requirement of the Seq2Seq model for the label order. In this paper, we propose a novel sequence-toset framework utilizing deep reinforcement learning, which not only captures the correlations between labels, but also reduces the dependence on the label order. Extensive experimental results show that our proposed method outperforms the competitive baselines by a large margin.",
"title": ""
},
{
"docid": "23bf81699add38814461d5ac3e6e33db",
"text": "This paper examined a steering behavior based fatigue monitoring system. The advantages of using steering behavior for detecting fatigue are that these systems measure continuously, cheaply, non-intrusively, and robustly even under extremely demanding environmental conditions. The expected fatigue induced changes in steering behavior are a pattern of slow drifting and fast corrective counter steering. Using advanced signal processing procedures for feature extraction, we computed 3 feature set in the time, frequency and state space domain (a total number of 1251 features) to capture fatigue impaired steering patterns. Each feature set was separately fed into 5 machine learning methods (e.g. Support Vector Machine, K-Nearest Neighbor). The outputs of each single classifier were combined to an ensemble classification value. Finally we combined the ensemble values of 3 feature subsets to a of meta-ensemble classification value. To validate the steering behavior analysis, driving samples are taken from a driving simulator during a sleep deprivation study (N=12). We yielded a recognition rate of 86.1% in classifying slight from strong fatigue.",
"title": ""
},
{
"docid": "f6dd10d4b400234a28b221d0527e71c0",
"text": "Existing approaches to neural machine translation condition each output word on previously generated outputs. We introduce a model that avoids this autoregressive property and produces its outputs in parallel, allowing an order of magnitude lower latency during inference. Through knowledge distillation, the use of input token fertilities as a latent variable, and policy gradient fine-tuning, we achieve this at a cost of as little as 2.0 BLEU points relative to the autoregressive Transformer network used as a teacher. We demonstrate substantial cumulative improvements associated with each of the three aspects of our training strategy, and validate our approach on IWSLT 2016 English–German and two WMT language pairs. By sampling fertilities in parallel at inference time, our non-autoregressive model achieves near-state-of-the-art performance of 29.8 BLEU on WMT 2016 English– Romanian.",
"title": ""
},
{
"docid": "6fad371eecbb734c1e54b8fb9ae218c4",
"text": "Quantitative Susceptibility Mapping (QSM) is a novel MRI based technique that relies on estimates of the magnetic field distribution in the tissue under examination. Several sophisticated data processing steps are required to extract the magnetic field distribution from raw MRI phase measurements. The objective of this review article is to provide a general overview and to discuss several underlying assumptions and limitations of the pre-processing steps that need to be applied to MRI phase data before the final field-to-source inversion can be performed. Beginning with the fundamental relation between MRI signal and tissue magnetic susceptibility this review covers the reconstruction of magnetic field maps from multi-channel phase images, background field correction, and provides an overview of state of the art QSM solution strategies.",
"title": ""
},
{
"docid": "13bd6515467934ba7855f981fd4f1efd",
"text": "The flourishing synergy arising between organized crimes and the Internet has increased the insecurity of the digital world. How hackers frame their actions? What factors encourage and energize their behavior? These are very important but highly underresearched questions. We draw upon literatures on psychology, economics, international relation and warfare to propose a framework that addresses these questions. We found that countries across the world differ in terms of regulative, normative and cognitive legitimacy to different types of web attacks. Cyber wars and crimes are also functions of the stocks of hacking skills relative to the availability of economic opportunities. An attacking unit’s selection criteria for the target network include symbolic significance and criticalness, degree of digitization of values and weakness in defense mechanisms. Managerial and policy implications are discussed and directions for future research are suggested.",
"title": ""
},
{
"docid": "f28170dcc3c4949c27ee609604c53bc2",
"text": "Debates over Cannabis sativa L. and C. indica Lam. center on their taxonomic circumscription and rank. This perennial puzzle has been compounded by the viral spread of a vernacular nomenclature, “Sativa” and “Indica,” which does not correlate with C. sativa and C. indica. Ambiguities also envelop the epithets of wild-type Cannabis: the spontanea versus ruderalis debate (i.e., vernacular “Ruderalis”), as well as another pair of Cannabis epithets, afghanica and kafirstanica. To trace the rise of vernacular nomenclature, we begin with the protologues (original descriptions, synonymies, type specimens) of C. sativa and C. indica. Biogeographical evidence (obtained from the literature and herbarium specimens) suggests 18th–19th century botanists were biased in their assignment of these taxa to field specimens. This skewed the perception of Cannabis biodiversity and distribution. The development of vernacular “Sativa,” “Indica,” and “Ruderalis” was abetted by twentieth century botanists, who ignored original protologues and harbored their own cultural biases. Predominant taxonomic models by Vavilov, Small, Schultes, de Meijer, and Hillig are compared and critiqued. Small’s model adheres closest to protologue data (with C. indica treated as a subspecies). “Sativa” and “Indica” are subpopulations of C. sativa subsp. indica; “Ruderalis” represents a protean assortment of plants, including C. sativa subsp. sativa and recent hybrids.",
"title": ""
},
{
"docid": "c0a75bf3a2d594fb87deb7b9f58a8080",
"text": "For WikiText-103 we swept over LSTM hidden sizes {1024, 2048, 4096}, no. LSTM layers {1, 2}, embedding dropout {0, 0.1, 0.2, 0.3}, use of layer norm (Ba et al., 2016b) {True,False}, and whether to share the input/output embedding parameters {True,False} totalling 96 parameters. A single-layer LSTM with 2048 hidden units with tied embedding parameters and an input dropout rate of 0.3 was selected, and we used this same model configuration for the other language corpora. We trained the models on 8 P100 Nvidia GPUs by splitting the batch size into 8 sub-batches, sending them to each GPU and summing the resulting gradients. The total batch size used was 512 and a sequence length of 100 was chosen. Gradients were clipped to a maximum norm value of 0.1. We did not pass the state of the LSTM between sequences during training, however the state is passed during evaluation.",
"title": ""
},
{
"docid": "bd9f584e7dbc715327b791e20cd20aa9",
"text": "We discuss learning a profile of user interests for recommending information sources such as Web pages or news articles. We describe the types of information available to determine whether to recommend a particular page to a particular user. This information includes the content of the page, the ratings of the user on other pages and the contents of these pages, the ratings given to that page by other users and the ratings of these other users on other pages and demographic information about users. We describe how each type of information may be used individually and then discuss an approach to combining recommendations from multiple sources. We illustrate each approach and the combined approach in the context of recommending restaurants.",
"title": ""
},
{
"docid": "ab97caed9c596430c3d76ebda55d5e6e",
"text": "A 1.5 GHz low noise amplifier for a Global Positioning System (GPS) receiver has been implemented in a 0.6 /spl mu/m CMOS process. This amplifier provides a forward gain of 22 dB with a noise figure of only 3.5 dB while drawing 30 mW from a 1.5 V supply. To the authors' knowledge, this represents the lowest noise figure reported to date for a CMOS amplifier operating above 1 GHz.",
"title": ""
},
{
"docid": "9f9719336bf6497d7c71590ac61a433b",
"text": "College and universities are increasingly using part-time, adjunct instructors on their faculties to facilitate greater fiscal flexibility. However, critics argue that the use of adjuncts is causing the quality of higher education to deteriorate. This paper addresses questions about the impact of adjuncts on student outcomes. Using a unique dataset of public four-year colleges in Ohio, we quantify how having adjunct instructors affects student persistence after the first year. Because students taking courses from adjuncts differ systematically from other students, we use an instrumental variable strategy to address concerns about biases. The findings suggest that, in general, students taking an \"adjunct-heavy\" course schedule in their first semester are adversely affected. They are less likely to persist into their second year. We reconcile these findings with previous research that shows that adjuncts may encourage greater student interest in terms of major choice and subsequent enrollments in some disciplines, most notably fields tied closely to specific professions. The authors are grateful for helpful suggestions from Ronald Ehrenberg and seminar participants at the NBER Labor Studies Meetings. The authors also thank the Ohio Board of Regents for their support during this research project. Rod Chu, Darrell Glenn, Robert Sheehan, and Andy Lechler provided invaluable access and help with the data. Amanda Starc, James Carlson, Erin Riley, and Suzan Akin provided excellent research assistance. All opinions and mistakes are our own. The authors worked equally on the project and are listed alphabetically.",
"title": ""
},
{
"docid": "115fb4dcd7d5a1240691e430cd107dce",
"text": "Human motion capture data, which are used to animate animation characters, have been widely used in many areas. To satisfy the high-precision requirement, human motion data are captured with a high frequency (120 frames/s) by a high-precision capture system. However, the high frequency and nonlinear structure make the storage, retrieval, and browsing of motion data challenging problems, which can be solved by keyframe extraction. Current keyframe extraction methods do not properly model two important characteristics of motion data, i.e., sparseness and Riemannian manifold structure. Therefore, we propose a new model called joint kernel sparse representation (SR), which is in marked contrast to all current keyframe extraction methods for motion data and can simultaneously model the sparseness and the Riemannian manifold structure. The proposed model completes the SR in a kernel-induced space with a geodesic exponential kernel, whereas the traditional SR cannot model the nonlinear structure of motion data in the Euclidean space. Meanwhile, because of several important modifications to traditional SR, our model can also exploit the relations between joints and solve two problems, i.e., the unreasonable distribution and redundancy of extracted keyframes, which current methods do not solve. Extensive experiments demonstrate the effectiveness of the proposed method.",
"title": ""
}
] | scidocsrr |
d956c35ab4e217a8c4517f565197d4a9 | Pressure ulcer prevention and healing using alternating pressure mattress at home: the PARESTRY project. | [
{
"docid": "511c90eadbbd4129fdf3ee9e9b2187d3",
"text": "BACKGROUND\nPressure ulcers are associated with substantial health burdens but may be preventable.\n\n\nPURPOSE\nTo review the clinical utility of pressure ulcer risk assessment instruments and the comparative effectiveness of preventive interventions in persons at higher risk.\n\n\nDATA SOURCES\nMEDLINE (1946 through November 2012), CINAHL, the Cochrane Library, grant databases, clinical trial registries, and reference lists.\n\n\nSTUDY SELECTION\nRandomized trials and observational studies on effects of using risk assessment on clinical outcomes and randomized trials of preventive interventions on clinical outcomes.\n\n\nDATA EXTRACTION\nMultiple investigators abstracted and checked study details and quality using predefined criteria.\n\n\nDATA SYNTHESIS\nOne good-quality trial found no evidence that use of a pressure ulcer risk assessment instrument, with or without a protocolized intervention strategy based on assessed risk, reduces risk for incident pressure ulcers compared with less standardized risk assessment based on nurses' clinical judgment. In higher-risk populations, 1 good-quality and 4 fair-quality randomized trials found that more advanced static support surfaces were associated with lower risk for pressure ulcers compared with standard mattresses (relative risk range, 0.20 to 0.60). Evidence on the effectiveness of low-air-loss and alternating-air mattresses was limited, with some trials showing no clear differences from advanced static support surfaces. Evidence on the effectiveness of nutritional supplementation, repositioning, and skin care interventions versus usual care was limited and had methodological shortcomings, precluding strong conclusions.\n\n\nLIMITATION\nOnly English-language articles were included, publication bias could not be formally assessed, and most studies had methodological shortcomings.\n\n\nCONCLUSION\nMore advanced static support surfaces are more effective than standard mattresses for preventing ulcers in higher-risk populations. The effectiveness of formal risk assessment instruments and associated intervention protocols compared with less standardized assessment methods and the effectiveness of other preventive interventions compared with usual care have not been clearly established.",
"title": ""
},
{
"docid": "df5c384e9fb6ba57a5bbd7fef44ce5f0",
"text": "CONTEXT\nPressure ulcers are common in a variety of patient settings and are associated with adverse health outcomes and high treatment costs.\n\n\nOBJECTIVE\nTo systematically review the evidence examining interventions to prevent pressure ulcers.\n\n\nDATA SOURCES AND STUDY SELECTION\nMEDLINE, EMBASE, and CINAHL (from inception through June 2006) and Cochrane databases (through issue 1, 2006) were searched to identify relevant randomized controlled trials (RCTs). UMI Proquest Digital Dissertations, ISI Web of Science, and Cambridge Scientific Abstracts were also searched. All searches used the terms pressure ulcer, pressure sore, decubitus, bedsore, prevention, prophylactic, reduction, randomized, and clinical trials. Bibliographies of identified articles were further reviewed.\n\n\nDATA SYNTHESIS\nFifty-nine RCTs were selected. Interventions assessed in these studies were grouped into 3 categories, ie, those addressing impairments in mobility, nutrition, or skin health. Methodological quality for the RCTs was variable and generally suboptimal. Effective strategies that addressed impaired mobility included the use of support surfaces, mattress overlays on operating tables, and specialized foam and specialized sheepskin overlays. While repositioning is a mainstay in most pressure ulcer prevention protocols, there is insufficient evidence to recommend specific turning regimens for patients with impaired mobility. In patients with nutritional impairments, dietary supplements may be beneficial. The incremental benefit of specific topical agents over simple moisturizers for patients with impaired skin health is unclear.\n\n\nCONCLUSIONS\nGiven current evidence, using support surfaces, repositioning the patient, optimizing nutritional status, and moisturizing sacral skin are appropriate strategies to prevent pressure ulcers. Although a number of RCTs have evaluated preventive strategies for pressure ulcers, many of them had important methodological limitations. There is a need for well-designed RCTs that follow standard criteria for reporting nonpharmacological interventions and that provide data on cost-effectiveness for these interventions.",
"title": ""
}
] | [
{
"docid": "0e60cb8f9147f5334c3cfca2880c2241",
"text": "The quest for automatic Programming is the holy grail of artificial intelligence. The dream of having computer programs write other useful computer programs has haunted researchers since the nineteen fifties. In Genetic Progvamming III Darwinian Invention and Problem Solving (GP?) by John R. Koza, Forest H. Bennet 111, David Andre, and Martin A. Keane, the authors claim that the first inscription on this trophy should be the name Genetic Programming (GP). GP is about applying evolutionary algorithms to search the space of computer programs. The authors paraphrase Arthur Samuel of 1959 and argue that with this method it is possible to tell the computer what to do without telling it explicitly how t o do it.",
"title": ""
},
{
"docid": "9001f640ae3340586f809ab801f78ec0",
"text": "A correct perception of road signalizations is required for autonomous cars to follow the traffic codes. Road marking is a signalization present on road surfaces and commonly used to inform the correct lane cars must keep. Cameras have been widely used for road marking detection, however they are sensible to environment illumination. Some LIDAR sensors return infrared reflective intensity information which is insensible to illumination condition. Existing road marking detectors that analyzes reflective intensity data focus only on lane markings and ignores other types of signalization. We propose a road marking detector based on Otsu thresholding method that make possible segment LIDAR point clouds into asphalt and road marking. The results show the possibility of detecting any road marking (crosswalks, continuous lines, dashed lines). The road marking detector has also been integrated with Monte Carlo localization method so that its performance could be validated. According to the results, adding road markings onto curb maps lead to a lateral localization error of 0.3119 m.",
"title": ""
},
{
"docid": "6a15a0a0b9b8abc0e66fa9702cc3a573",
"text": "Knowledge Graphs have proven to be extremely valuable to recommender systems, as they enable hybrid graph-based recommendation models encompassing both collaborative and content information. Leveraging this wealth of heterogeneous information for top-N item recommendation is a challenging task, as it requires the ability of effectively encoding a diversity of semantic relations and connectivity patterns. In this work, we propose entity2rec, a novel approach to learning user-item relatedness from knowledge graphs for top-N item recommendation. We start from a knowledge graph modeling user-item and item-item relations and we learn property-specific vector representations of users and items applying neural language models on the network. These representations are used to create property-specific user-item relatedness features, which are in turn fed into learning to rank algorithms to learn a global relatedness model that optimizes top-N item recommendations. We evaluate the proposed approach in terms of ranking quality on the MovieLens 1M dataset, outperforming a number of state-of-the-art recommender systems, and we assess the importance of property-specific relatedness scores on the overall ranking quality.",
"title": ""
},
{
"docid": "dae877409dca88fc6fed5cf6536e65ad",
"text": "My 1971 Turing Award Lecture was entitled \"Generality in Artificial Intelligence.\" The topic turned out to have been overambitious in that I discovered I was unable to put my thoughts on the subject in a satisfactory written form at that time. It would have been better to have reviewed my previous work rather than attempt something new, but such was not my custom at that time.\nI am grateful to ACM for the opportunity to try again. Unfortunately for our science, although perhaps fortunately for this project, the problem of generality in artificial intelligence (AI) is almost as unsolved as ever, although we now have many ideas not available in 1971. This paper relies heavily on such ideas, but it is far from a full 1987 survey of approaches for achieving generality. Ideas are therefore discussed at a length proportional to my familiarity with them rather than according to some objective criterion.\nIt was obvious in 1971 and even in 1958 that AI programs suffered from a lack of generality. It is still obvious; there are many more details. The first gross symptom is that a small addition to the idea of a program often involves a complete rewrite beginning with the data structures. Some progress has been made in modularizing data structures, but small modifications of the search strategies are even less likely to be accomplished without rewriting.\nAnother symptom is no one knows how to make a general database of commonsense knowledge that could be used by any program that needed the knowledge. Along with other information, such a database would contain what a robot would need to know about the effects of moving objects around, what a person can be expected to know about his family, and the facts about buying and selling. This does not depend on whether the knowledge is to be expressed in a logical language or in some other formalism. When we take the logic approach to AI, lack of generality shows up in that the axioms we devise to express commonsense knowledge are too restricted in their applicability for a general commonsense database. In my opinion, getting a language for expressing general commonsense knowledge for inclusion in a general database is the key problem of generality in AI.\nHere are some ideas for achieving generality proposed both before and after 1971. I repeat my disclaimer of comprehensiveness.",
"title": ""
},
{
"docid": "a5f17126a90b45921f70439ff96a0091",
"text": "Successful methods for visual object recognition typically rely on training datasets containing lots of richly annotated images. Detailed image annotation, e.g. by object bounding boxes, however, is both expensive and often subjective. We describe a weakly supervised convolutional neural network (CNN) for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. We quantify its object classification and object location prediction performance on the Pascal VOC 2012 (20 object classes) and the much larger Microsoft COCO (80 object classes) datasets. We find that the network (i) outputs accurate image-level labels, (ii) predicts approximate locations (but not extents) of objects, and (iii) performs comparably to its fully-supervised counterparts using object bounding box annotation for training.",
"title": ""
},
{
"docid": "4cdef79370abcd380357c8be92253fa5",
"text": "In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. We show how a datadriven deterministic dependency parser, in itself restricted to projective structures, can be combined with graph transformation techniques to produce non-projective structures. Experiments using data from the Prague Dependency Treebank show that the combined system can handle nonprojective constructions with a precision sufficient to yield a significant improvement in overall parsing accuracy. This leads to the best reported performance for robust non-projective parsing of Czech.",
"title": ""
},
{
"docid": "cc90d1ac6aa63532282568f66ecd25fd",
"text": "Melphalan has been used in the treatment of various hematologic malignancies for almost 60 years. Today it is part of standard therapy for multiple myeloma and also as part of myeloablative regimens in association with autologous allogenic stem cell transplantation. Melflufen (melphalan flufenamide ethyl ester, previously called J1) is an optimized derivative of melphalan providing targeted delivery of active metabolites to cells expressing aminopeptidases. The activity of melflufen has compared favorably with that of melphalan in a series of in vitro and in vivo experiments performed preferentially on different solid tumor models and multiple myeloma. Melflufen is currently being evaluated in a clinical phase I/II trial in relapsed or relapsed and refractory multiple myeloma. Cytotoxicity of melflufen was assayed in lymphoma cell lines and in primary tumor cells with the Fluorometric Microculture Cytotoxicity Assay and cell cycle analyses was performed in two of the cell lines. Melflufen was also investigated in a xenograft model with subcutaneous lymphoma cells inoculated in mice. Melflufen showed activity with cytotoxic IC50-values in the submicromolar range (0.011-0.92 μM) in the cell lines, corresponding to a mean of 49-fold superiority (p < 0.001) in potency vs. melphalan. In the primary cultures melflufen yielded slightly lower IC50-values (2.7 nM to 0.55 μM) and an increased ratio vs. melphalan (range 13–455, average 108, p < 0.001). Treated cell lines exhibited a clear accumulation in the G2/M-phase of the cell cycle. Melflufen also showed significant activity and no, or minimal side effects in the xenografted animals. This study confirms previous reports of a targeting related potency superiority of melflufen compared to that of melphalan. Melflufen was active in cell lines and primary cultures of lymphoma cells, as well as in a xenograft model in mice and appears to be a candidate for further evaluation in the treatment of this group of malignant diseases.",
"title": ""
},
{
"docid": "b3f5176f49b467413d172134b1734ed8",
"text": "Commonsense reasoning is a long-standing challenge for deep learning. For example, it is difficult to use neural networks to tackle the Winograd Schema dataset [1]. In this paper, we present a simple method for commonsense reasoning with neural networks, using unsupervised learning. Key to our method is the use of language models, trained on a massive amount of unlabled data, to score multiple choice questions posed by commonsense reasoning tests. On both Pronoun Disambiguation and Winograd Schema challenges, our models outperform previous state-of-the-art methods by a large margin, without using expensive annotated knowledge bases or hand-engineered features. We train an array of large RNN language models that operate at word or character level on LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books, and a customized corpus for this task and show that diversity of training data plays an important role in test performance. Further analysis also shows that our system successfully discovers important features of the context that decide the correct answer, indicating a good grasp of commonsense knowledge.",
"title": ""
},
{
"docid": "1768ecf6a2d8a42ea701d7f242edb472",
"text": "Satisfaction prediction is one of the prime concerns in search performance evaluation. It is a non-trivial task for two major reasons: (1) The definition of satisfaction is rather subjective and different users may have different opinions in satisfaction judgement. (2) Most existing studies on satisfaction prediction mainly rely on users' click-through or query reformulation behaviors but there are many sessions without such kind of interactions. To shed light on these research questions, we construct an experimental search engine that could collect users' satisfaction feedback as well as mouse click-through/movement data. Different from existing studies, we compare for the first time search users' and external assessors' opinions on satisfaction. We find that search users pay more attention to the utility of results while external assessors emphasize on the efforts spent in search sessions. Inspired by recent studies in predicting result relevance based on mouse movement patterns (namely motifs), we propose to estimate the utilities of search results and the efforts in search sessions with motifs extracted from mouse movement data on search result pages (SERPs). Besides the existing frequency-based motif selection method, two novel selection strategies (distance-based and distribution-based) are also adopted to extract high quality motifs for satisfaction prediction. Experimental results on over 1,000 user sessions show that the proposed strategies outperform existing methods and also have promising generalization capability for different users and queries.",
"title": ""
},
{
"docid": "be9971903bf3d754ed18cc89cf254bd1",
"text": "This paper presents a semi-supervised learning method for improving the performance of AUC-optimized classifiers by using both labeled and unlabeled samples. In actual binary classification tasks, there is often an imbalance between the numbers of positive and negative samples. For such imbalanced tasks, the area under the ROC curve (AUC) is an effective measure with which to evaluate binary classifiers. The proposed method utilizes generative models to assist the incorporation of unlabeled samples in AUC-optimized classifiers. The generative models provide prior knowledge that helps learn the distribution of unlabeled samples. To evaluate the proposed method in text classification, we employed naive Bayes models as the generative models. Our experimental results using three test collections confirmed that the proposed method provided better classifiers for imbalanced tasks than supervised AUC-optimized classifiers and semi-supervised classifiers trained to maximize the classification accuracy of labeled samples. Moreover, the proposed method improved the effect of using unlabeled samples for AUC optimization especially when we used appropriate generative models.",
"title": ""
},
{
"docid": "43233e45f07b80b8367ac1561356888d",
"text": "Current Zero-Shot Learning (ZSL) approaches are restricted to recognition of a single dominant unseen object category in a test image. We hypothesize that this setting is ill-suited for real-world applications where unseen objects appear only as a part of a complex scene, warranting both the ‘recognition’ and ‘localization’ of an unseen category. To address this limitation, we introduce a new ‘Zero-Shot Detection’ (ZSD) problem setting, which aims at simultaneously recognizing and locating object instances belonging to novel categories without any training examples. We also propose a new experimental protocol for ZSD based on the highly challenging ILSVRC dataset, adhering to practical issues, e.g., the rarity of unseen objects. To the best of our knowledge, this is the first end-to-end deep network for ZSD that jointly models the interplay between visual and semantic domain information. To overcome the noise in the automatically derived semantic descriptions, we utilize the concept of meta-classes to design an original loss function that achieves synergy between max-margin class separation and semantic space clustering. Furthermore, we present a baseline approach extended from recognition to detection setting. Our extensive experiments show significant performance boost over the baseline on the imperative yet difficult ZSD problem.",
"title": ""
},
{
"docid": "65b2d6ea5e1089c52378b4fd6386224c",
"text": "In traffic environment, conventional FMCW radar with triangular transmit waveform may bring out many false targets in multi-target situations and result in a high false alarm rate. An improved FMCW waveform and multi-target detection algorithm for vehicular applications is presented. The designed waveform in each small cycle is composed of two-segment: LFM section and constant frequency section. They have the same duration, yet in two adjacent small cycles the two LFM slopes are opposite sign and different size. Then the two adjacent LFM bandwidths are unequal. Within a determinate frequency range, the constant frequencies are modulated by a unique PN code sequence for different automotive radar in a big period. Corresponding to the improved waveform, which combines the advantages of both FSK and FMCW formats, a judgment algorithm is used in the continuous small cycle to further eliminate the false targets. The combination of unambiguous ranges and relative velocities can confirm and cancel most false targets in two adjacent small cycles.",
"title": ""
},
{
"docid": "ffa5ae359807884c2218b92d2db2a584",
"text": "We present a method for automatically classifying consumer health questions. Our thirteen question types are designed to aid in the automatic retrieval of medical answers from consumer health resources. To our knowledge, this is the first machine learning-based method specifically for classifying consumer health questions. We demonstrate how previous approaches to medical question classification are insufficient to achieve high accuracy on this task. Additionally, we describe, manually annotate, and automatically classify three important question elements that improve question classification over previous techniques. Our results and analysis illustrate the difficulty of the task and the future directions that are necessary to achieve high-performing consumer health question classification.",
"title": ""
},
{
"docid": "9bce495ed14617fe05086f06be8279e0",
"text": "In previous chapters we reviewed Bayesian neural networks (BNNs) and historical techniques for approximate inference in these, as well as more recent approaches. We discussed the advantages and disadvantages of different techniques, examining their practicality. This, perhaps, is the most important aspect of modern techniques for approximate inference in BNNs. The field of deep learning is pushed forward by practitioners, working on real-world problems. Techniques which cannot scale to complex models with potentially millions of parameters, scale well with large amounts of data, need well studied models to be radically changed, or are not accessible to engineers, will simply perish. In this chapter we will develop on the strand of work of [Graves, 2011; Hinton and Van Camp, 1993], but will do so from the Bayesian perspective rather than the information theory one. Developing Bayesian approaches to deep learning, we will tie approximate BNN inference together with deep learning stochastic regularisation techniques (SRTs) such as dropout. These regularisation techniques are used in many modern deep learning tools, allowing us to offer a practical inference technique. We will start by reviewing in detail the tools used by [Graves, 2011]. We extend on these with recent research, commenting and analysing the variance of several stochastic estimators in variational inference (VI). Following that we will tie these derivations to SRTs, and propose practical techniques to obtain model uncertainty, even from existing models. We finish the chapter by developing specific examples for image based models (CNNs) and sequence based models (RNNs). These will be demonstrated in chapter 5, where we will survey recent research making use of the suggested tools in real-world problems.",
"title": ""
},
{
"docid": "87b67f9ed23c27a71b6597c94ccd6147",
"text": "Recently, deep learning approach, especially deep Convolutional Neural Networks (ConvNets), have achieved overwhelming accuracy with fast processing speed for image classification. Incorporating temporal structure with deep ConvNets for video representation becomes a fundamental problem for video content analysis. In this paper, we propose a new approach, namely Hierarchical Recurrent Neural Encoder (HRNE), to exploit temporal information of videos. Compared to recent video representation inference approaches, this paper makes the following three contributions. First, our HRNE is able to efficiently exploit video temporal structure in a longer range by reducing the length of input information flow, and compositing multiple consecutive inputs at a higher level. Second, computation operations are significantly lessened while attaining more non-linearity. Third, HRNE is able to uncover temporal tran-sitions between frame chunks with different granularities, i.e. it can model the temporal transitions between frames as well as the transitions between segments. We apply the new method to video captioning where temporal information plays a crucial role. Experiments demonstrate that our method outperforms the state-of-the-art on video captioning benchmarks.",
"title": ""
},
{
"docid": "56ff9c1be08569b6a881b070b0173797",
"text": "This paper examines a set of commercially representative embedded programs and compares them to an existing benchmark suite, SPEC2000. A new version of SimpleScalar that has been adapted to the ARM instruction set is used to characterize the performance of the benchmarks using configurations similar to current and next generation embedded processors. Several characteristics distinguish the representative embedded programs from the existing SPEC benchmarks including instruction distribution, memory behavior, and available parallelism. The embedded benchmarks, called MiBench, are freely available to all researchers.",
"title": ""
},
{
"docid": "ef598ba4f9a4df1f42debc0eabd1ead8",
"text": "Software developers interact with the development environments they use by issuing commands that execute various programming tools, from source code formatters to build tools. However, developers often only use a small subset of the commands offered by modern development environments, reducing their overall development fluency. In this paper, we use several existing command recommender algorithms to suggest new commands to developers based on their existing command usage history, and also introduce several new algorithms. By running these algorithms on data submitted by several thousand Eclipse users, we describe two studies that explore the feasibility of automatically recommending commands to software developers. The results suggest that, while recommendation is more difficult in development environments than in other domains, it is still feasible to automatically recommend commands to developers based on their usage history, and that using patterns of past discovery is a useful way to do so.",
"title": ""
},
{
"docid": "1ff5526e4a18c1e59b63a3de17101b11",
"text": "Plug-in electric vehicles (PEVs) are equipped with onboard level-1 or level-2 chargers for home overnight or office daytime charging. In addition, off-board chargers can provide fast charging for traveling long distances. However, off-board high-power chargers are bulky, expensive, and require comprehensive evolution of charging infrastructures. An integrated onboard charger capable of fast charging of PEVs will combine the benefits of both the conventional onboard and off-board chargers, without additional weight, volume, and cost. In this paper, an innovative single-phase integrated charger, using the PEV propulsion machine and its traction converter, is introduced. The charger topology is capable of power factor correction and battery voltage/current regulation without any bulky add-on components. Ac machine windings are utilized as mutually coupled inductors, to construct a two-channel interleaved boost converter. The circuit analyses of the proposed technology, based on a permanent magnet synchronous machine (PMSM), are discussed in details. Experimental results of a 3-kW proof-of-concept prototype are carried out using a ${\\textrm{220-V}}_{{\\rm{rms}}}$, 3-phase, 8-pole PMSM. A nearly unity power factor and 3.96% total harmonic distortion of input ac current are acquired with a maximum efficiency of 93.1%.",
"title": ""
},
{
"docid": "fb89fd2d9bf526b8bc7f1433274859a6",
"text": "In multidimensional image analysis, there are, and will continue to be, situations wherein automatic image segmentation methods fail, calling for considerable user assistance in the process. The main goals of segmentation research for such situations ought to be (i) to provide ffective controlto the user on the segmentation process while it is being executed, and (ii) to minimize the total user’s time required in the process. With these goals in mind, we present in this paper two paradigms, referred to aslive wireandlive lane, for practical image segmentation in large applications. For both approaches, we think of the pixel vertices and oriented edges as forming a graph, assign a set of features to each oriented edge to characterize its “boundariness,” and transform feature values to costs. We provide training facilities and automatic optimal feature and transform selection methods so that these assignments can be made with consistent effectiveness in any application. In live wire, the user first selects an initial point on the boundary. For any subsequent point indicated by the cursor, an optimal path from the initial point to the current point is found and displayed in real time. The user thus has a live wire on hand which is moved by moving the cursor. If the cursor goes close to the boundary, the live wire snaps onto the boundary. At this point, if the live wire describes the boundary appropriately, the user deposits the cursor which now becomes the new starting point and the process continues. A few points (livewire segments) are usually adequate to segment the whole 2D boundary. In live lane, the user selects only the initial point. Subsequent points are selected automatically as the cursor is moved within a lane surrounding the boundary whose width changes",
"title": ""
},
{
"docid": "8cb5659bdbe9d376e2a3b0147264d664",
"text": "Group brainstorming is widely adopted as a design method in the domain of software development. However, existing brainstorming literature has consistently proven group brainstorming to be ineffective under the controlled laboratory settings. Yet, electronic brainstorming systems informed by the results of these prior laboratory studies have failed to gain adoption in the field because of the lack of support for group well-being and member support. Therefore, there is a need to better understand brainstorming in the field. In this work, we seek to understand why and how brainstorming is actually practiced, rather than how brainstorming practices deviate from formal brainstorming rules, by observing brainstorming meetings at Microsoft. The results of this work show that, contrary to the conventional brainstorming practices, software teams at Microsoft engage heavily in the constraint discovery process in their brainstorming meetings. We identified two types of constraints that occur in brainstorming meetings. Functional constraints are requirements and criteria that define the idea space, whereas practical constraints are limitations that prioritize the proposed solutions.",
"title": ""
}
] | scidocsrr |
9252e7671f138a58239660a78a3fa033 | Agile Enterprise Architecture: a Case of a Cloud Technology-Enabled Government Enterprise Transformation | [
{
"docid": "de276ac8417b92ed155f5a9dcb5e680d",
"text": "With the development of parallel computing, distributed computing, grid computing, a new computing model appeared. The concept of computing comes from grid, public computing and SaaS. It is a new method that shares basic framework. The basic principles of cloud computing is to make the computing be assigned in a great number of distributed computers, rather then local computer or remoter server. The running of the enterprise’s data center is just like Internet. This makes the enterprise use the resource in the application that is needed, and access computer and storage system according to the requirement. This article introduces the background and principle of cloud computing, the character, style and actuality. This article also introduces the application field the merit of cloud computing, such as, it do not need user’s high level equipment, so it reduces the user’s cost. It provides secure and dependable data storage center, so user needn’t do the awful things such storing data and killing virus, this kind of task can be done by professionals. It can realize data share through different equipments. It analyses some questions and hidden troubles, and puts forward some solutions, and discusses the future of cloud computing. Cloud computing is a computing style that provide power referenced with IT as a service. Users can enjoy the service even he knows nothing about the technology of cloud computing and the professional knowledge in this field and the power to control it.",
"title": ""
},
{
"docid": "27214c91a4aa61da99084ba2a17a9a2b",
"text": "Emergency agencies (EA) rely on inter-agency approaches to information management during disasters. EA have shown a significant interest in the use of cloud-based social media such as Twitter and Facebook for crowd-sourcing and distribution of disaster information. While the intentions are clear, the question of what are its major challenges are not. EA have a need to recognise the challenges in the use of social media under their local circumstances. This paper analysed the recent literature, 2010 Haiti earthquake and 2010-11 Queensland flood cases and developed a crowd sourcing challenges assessment index construct specific to EA areas of interest. We argue that, this assessment index, as a part of our large conceptual framework of context aware cloud adaptation (CACA), can be useful for the facilitation of citizens, NGOs and government agencies in a strategy for use of social media for crowd sourcing, in preventing, preparing for, responding to and recovering from disasters.",
"title": ""
}
] | [
{
"docid": "dfe4e689e150fc9c8face64bd9628d1e",
"text": "We present and discuss a fully-automated collaboration system, CoCo, that allows multiple participants to video chat and receive feedback through custom video conferencing software. After a conferencing session, a virtual feedback assistant provides insights on the conversation to participants. CoCo automatically pulls audial and visual data during conversations and analyzes the extracted streams for affective features, including smiles, engagement, attention, as well as speech overlap and turn-taking. We validated CoCo with 39 participants split into 10 groups. Participants played two back-to-back team-building games, Lost at Sea and Survival on the Moon, with the system providing feedback between the two. With feedback, we found a statistically significant change in balanced participation---that is, everyone spoke for an equal amount of time. There was also statistically significant improvement in participants' self-evaluations of conversational skills awareness, including how often they let others speak, as well as of teammates' conversational skills. The entire framework is available at https://github.com/ROC-HCI/CollaborationCoach_PostFeedback.",
"title": ""
},
{
"docid": "3a7a7fa5e41a6195ca16f172b72f89a1",
"text": "To integrate unpredictable human behavior in the assessment of active and passive pedestrian safety systems, we introduce a virtual reality (VR)-based pedestrian simulation system. The device uses the Xsens Motion Capture platform and can be used without additional infrastructure. To show the systems applicability for pedestrian behavior studies, we conducted a pilot study evaluating the degree of realism such a system can achieve in a typical unregulated pedestrian crossing scenario. Six participants had to estimate vehicle speeds and distances in four scenarios with varying gaps between vehicles. First results indicate an acceptable level of realism so that the device can be used for further user studies addressing pedestrian behavior, pedestrian interaction with (automated) vehicles, risk assessment and investigation of the pre-crash phase without the risk of injuries.",
"title": ""
},
{
"docid": "f09bc6f1b4f37fc4d822ccc4cdc1497f",
"text": "It is generally believed that a metaphor tends to have a stronger emotional impact than a literal statement; however, there is no quantitative study establishing the extent to which this is true. Further, the mechanisms through which metaphors convey emotions are not well understood. We present the first data-driven study comparing the emotionality of metaphorical expressions with that of their literal counterparts. Our results indicate that metaphorical usages are, on average, significantly more emotional than literal usages. We also show that this emotional content is not simply transferred from the source domain into the target, but rather is a result of meaning composition and interaction of the two domains in the metaphor.",
"title": ""
},
{
"docid": "799f9ca9ea641c1893e4900fdc29c8d4",
"text": "This paper presents a large scale general purpose image database with human annotated ground truth. Firstly, an all-in-all labeling framework is proposed to group visual knowledge of three levels: scene level (global geometric description), object level (segmentation, sketch representation, hierarchical decomposition), and low-mid level (2.1D layered representation, object boundary attributes, curve completion, etc.). Much of this data has not appeared in previous databases. In addition, And-Or Graph is used to organize visual elements to facilitate top-down labeling. An annotation tool is developed to realize and integrate all tasks. With this tool, we’ve been able to create a database consisting of more than 636,748 annotated images and video frames. Lastly, the data is organized into 13 common subsets to serve as benchmarks for diverse evaluation endeavors.",
"title": ""
},
{
"docid": "eddd98b55171f658ddde1e03ea4c04df",
"text": "Over last fifteen years, robot technology has become popular in classrooms across our whole educational system. Both engineering and AI educators have developed ways to integrate robots into their teaching. Engineering educators are primarily concerned with engineering science (e.g., feedback control) and process (e.g., design skills). AI educators have different goals—namely, AI educators want students to learn AI concepts. Both agree that students are enthusiastic about working with robots, and in both cases, the pedagogical challenge is to develop robotics technology and provide classroom assignments that highlight key ideas in the respective field. Mobile robots are particularly intriguing because of their dual nature as both deterministic machines and unpredictable entities. This paper explores challenges for both engineering and AI educators as robot toolkits",
"title": ""
},
{
"docid": "59b12e15badee587c3de8657663315d1",
"text": "Thanks to their excellent performances on typical artificial intelligence problems, deep neural networks have drawn a lot of interest lately. However, this comes at the cost of large computational needs and high power consumption. Benefiting from high precision at acceptable hardware cost on these difficult problems is a challenge. To address it, we advocate the use of ternary neural networks (TNN) that, when properly trained, can reach results close to the state of the art using floatingpoint arithmetic. We present a highly versatile FPGA friendly architecture for TNN in which we can vary both the number of bits of the input data and the level of parallelism at synthesis time, allowing to trade throughput for hardware resources and power consumption. To demonstrate the efficiency of our proposal, we implement high-complexity convolutional neural networks on the Xilinx Virtex-7 VC709 FPGA board. While reaching a better accuracy than comparable designs, we can target either high throughput or low power. We measure a throughput up to 27 000 fps at ≈7W or up to 8.36 TMAC/s at ≈13 W.",
"title": ""
},
{
"docid": "9f37aaf96b8c56f0397b63a7b53776ec",
"text": "The Histogram of Oriented Gradient (HOG) descriptor has led to many advances in computer vision over the last decade and is still part of many state of the art approaches. We realize that the associated feature computation is piecewise differentiable and therefore many pipelines which build on HOG can be made differentiable. This lends to advanced introspection as well as opportunities for end-to-end optimization. We present our implementation of ΔHOG based on the auto-differentiation toolbox Chumpy [18] and show applications to pre-image visualization and pose estimation which extends the existing differentiable renderer OpenDR [19] pipeline. Both applications improve on the respective state-of-the-art HOG approaches.",
"title": ""
},
{
"docid": "ff75699519c0df47220624db263b483a",
"text": "We present BeThere, a proof-of-concept system designed to explore 3D input for mobile collaborative interactions. With BeThere, we explore 3D gestures and spatial input which allow remote users to perform a variety of virtual interactions in a local user's physical environment. Our system is completely self-contained and uses depth sensors to track the location of a user's fingers as well as to capture the 3D shape of objects in front of the sensor. We illustrate the unique capabilities of our system through a series of interactions that allow users to control and manipulate 3D virtual content. We also provide qualitative feedback from a preliminary user study which confirmed that users can complete a shared collaborative task using our system.",
"title": ""
},
{
"docid": "821b1e60e936b3f56031fae450f22dc8",
"text": "Conventional methods for seismic retrofitting of concrete columns include reinforcement with steel plates or steel frame braces, as well as cross-sectional increments and in-filled walls. However, these methods have some disadvantages, such as the increase in mass and the need for precise construction. Fiber-reinforced polymer (FRP) sheets for seismic strengthening of concrete columns using new light-weight composite materials, such as carbon fiber or glass fiber, have been developed, have excellent durability and performance, and are being widely applied to overcome the shortcomings of conventional seismic strengthening methods. Nonetheless, the FRP-sheet reinforcement method also has some drawbacks, such as the need for prior surface treatment, problems at joints, and relatively expensive material costs. In the current research, the structural and material properties associated with a new method for seismic strengthening of concrete columns using FRP were investigated. The new technique is a sprayed FRP system, achieved by mixing chopped glass and carbon fibers with epoxy and vinyl ester resin in the open air and randomly spraying the resulting mixture onto the uneven surface of the concrete columns. This paper reports on the seismic resistance of reinforced concrete columns controlled by shear strengthening using the sprayed FRP system. Five shear column specimens were designed, and then strengthened with sprayed FRP by using different combinations of short carbon or glass fibers and epoxy or vinyl ester resins. There was also a non-strengthened control specimen. Cyclic loading tests were carried out, and the ultimate load carrying capacity and deformation were investigated, as well as hysteresis in the lateral load-drift relationship. The results showed that shear strengths and deformation capacities of shear columns strengthened using sprayed FRP improved markedly, compared with those of the control column. The spraying FRP technique developed in this study can be practically and effectively used for the seismic strengthening of existing concrete columns.",
"title": ""
},
{
"docid": "4b04a4892ef7c614b3bf270f308e6984",
"text": "One reason for the universal appeal of music lies in the emotional rewards that music offers to its listeners. But what makes these rewards so special? The authors addressed this question by progressively characterizing music-induced emotions in 4 interrelated studies. Studies 1 and 2 (n=354) were conducted to compile a list of music-relevant emotion terms and to study the frequency of both felt and perceived emotions across 5 groups of listeners with distinct music preferences. Emotional responses varied greatly according to musical genre and type of response (felt vs. perceived). Study 3 (n=801)--a field study carried out during a music festival--examined the structure of music-induced emotions via confirmatory factor analysis of emotion ratings, resulting in a 9-factorial model of music-induced emotions. Study 4 (n=238) replicated this model and found that it accounted for music-elicited emotions better than the basic emotion and dimensional emotion models. A domain-specific device to measure musically induced emotions is introduced--the Geneva Emotional Music Scale.",
"title": ""
},
{
"docid": "a5c67537b72e3cd184b43c0a0e7c96b2",
"text": "These notes give a short introduction to Gaussian mixture models (GMMs) and the Expectation-Maximization (EM) algorithm, first for the specific case of GMMs, and then more generally. These notes assume you’re familiar with basic probability and basic calculus. If you’re interested in the full derivation (Section 3), some familiarity with entropy and KL divergence is useful but not strictly required. The notation here is borrowed from Introduction to Probability by Bertsekas & Tsitsiklis: random variables are represented with capital letters, values they take are represented with lowercase letters, pX represents a probability distribution for random variable X, and pX(x) represents the probability of value x (according to pX). We’ll also use the shorthand notation X 1 to represent the sequence X1, X2, . . . , Xn, and similarly x n 1 to represent x1, x2, . . . , xn. These notes follow a development somewhat similar to the one in Pattern Recognition and Machine Learning by Bishop.",
"title": ""
},
{
"docid": "ddc0b599dc2cb3672e9a2a1f5a9a9163",
"text": "Head and modifier detection is an important problem for applications that handle short texts such as search queries, ads keywords, titles, captions, etc. In many cases, short texts such as search queries do not follow grammar rules, and existing approaches for head and modifier detection are coarse-grained, domain specific, and/or require labeling of large amounts of training data. In this paper, we introduce a semantic approach for head and modifier detection. We first obtain a large number of instance level head-modifier pairs from search log. Then, we develop a conceptualization mechanism to generalize the instance level pairs to concept level. Finally, we derive weighted concept patterns that are concise, accurate, and have strong generalization power in head and modifier detection. Furthermore, we identify a subset of modifiers that we call constraints. Constraints are usually specific and not negligible as far as the intent of the short text is concerned, while non-constraint modifiers are more subjective. The mechanism we developed has been used in production for search relevance and ads matching. We use extensive experiment results to demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "a81b999f495637ba3e12799d727d872d",
"text": "The inversion of remote sensing images is crucial for soil moisture mapping in precision agriculture. However, the large size of remote sensing images complicates their management. Therefore, this study proposes a remote sensing observation sharing method based on cloud computing (ROSCC) to enhance remote sensing observation storage, processing, and service capability. The ROSCC framework consists of a cloud computing-enabled sensor observation service, web processing service tier, and a distributed database tier. Using MongoDB as the distributed database and Apache Hadoop as the cloud computing service, this study achieves a high-throughput method for remote sensing observation storage and distribution. The map, reduced algorithms and the table structure design in distributed databases are then explained. Along the Yangtze River, the longest river in China, Hubei Province was selected as the study area to test the proposed framework. Using GF-1 as a data source, an experiment was performed to enhance earth observation data (EOD) storage and achieve large-scale soil moisture mapping. The proposed ROSCC can be applied to enhance EOD sharing in cloud computing context, so as to achieve soil moisture mapping via the modified perpendicular drought index in an efficient way to better serve precision agriculture.",
"title": ""
},
{
"docid": "722bb59033ea5722b762ccac5d032235",
"text": "In this paper, we provide a real nursing data set for mobile activity recognition that can be used for supervised machine learning, and big data combined the patient medical records and sensors attempted for 2 years, and also propose a method for recognizing activities for a whole day utilizing prior knowledge about the activity segments in a day. Furthermore, we demonstrate data mining by applying our method to the bigger data with additional hospital data. In the proposed method, we 1) convert a set of segment timestamps into a prior probability of the activity segment by exploiting the concept of importance sampling, 2) obtain the likelihood of traditional recognition methods for each local time window within the segment range, and, 3) apply Bayesian estimation by marginalizing the conditional probability of estimating the activities for the segment samples. By evaluating with the dataset, the proposed method outperformed the traditional method without using the prior knowledge by 25.81% at maximum by balanced classification rate. Moreover, the proposed method significantly reduces duration errors of activity segments from 324.2 seconds of the traditional method to 74.6 seconds at maximum. We also demonstrate the data mining by applying our method to bigger data in a hospital.",
"title": ""
},
{
"docid": "37f5fcde86e30359e678ff3f957e3c7e",
"text": "A Phase I dose-proportionality study is an essential tool to understand drug pharmacokinetic dose-response relationship in early clinical development. There are a number of different approaches to the assessment of dose proportionality. The confidence interval (CI) criteria approach, a staitistically sound and clinically relevant approach, has been proposed to detect dose-proportionality (Smith, et al. 2000), by which the proportionality is declared if the 90% CI for slope is completely contained within the pre-determined critical interval. This method, enhancing the information from a clinical dose-proportionality study, has gradually drawn attention. However, exact power calculation of dose proportinality studies based on CI criteria poses difficulity for practioners since the methodology was essentailly from two one-sided tests (TOST) procedure for the slope, which should be unit under proportionality. It requires sophisticated numerical integration, and it is not available in statistical software packages. This paper presents a SAS Macro to compute the empirical power for the CI-based dose proportinality studies. The resulting sample sizes and corresponding empirical powers suggest that this approach is powerful in detecting dose-proportionality under commonly used sample sizes for phase I studies.",
"title": ""
},
{
"docid": "e9b5f3d734b364ebd9ed144719a6ac6b",
"text": "This work presents a literature review of multiple classifier systems based on the dynamic selection of classifiers. First, it briefly reviews some basic concepts and definitions related to such a classification approach and then it presents the state of the art organized according to a proposed taxonomy. In addition, a two-step analysis is applied to the results of the main methods reported in the literature, considering different classification problems. The first step is based on statistical analyses of the significance of these results. The idea is to figure out the problems for which a significant contribution can be observed in terms of classification performance by using a dynamic selection approach. The second step, based on data complexity measures, is used to investigate whether or not a relation exists between the possible performance contribution and the complexity of the classification problem. From this comprehensive study, we observed that, for some classification problems, the performance contribution of the dynamic selection approach is statistically significant when compared to that of a single-based classifier. In addition, we found evidence of a relation between the observed performance contribution and the complexity of the classification problem. These observations allow us to suggest, from the classification problem complexity, that further work should be done to predict whether or not to use a dynamic selection approach. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c7e584bca061335c8cd085511f4abb3b",
"text": "The application of boosting technique to regression problems has received relatively little attention in contrast to research aimed at classification problems. This letter describes a new boosting algorithm, AdaBoost.RT, for regression problems. Its idea is in filtering out the examples with the relative estimation error that is higher than the preset threshold value, and then following the AdaBoost procedure. Thus, it requires selecting the suboptimal value of the error threshold to demarcate examples as poorly or well predicted. Some experimental results using the M5 model tree as a weak learning machine for several benchmark data sets are reported. The results are compared to other boosting methods, bagging, artificial neural networks, and a single M5 model tree. The preliminary empirical comparisons show higher performance of AdaBoost.RT for most of the considered data sets.",
"title": ""
},
{
"docid": "c252f063dfaf75619855a51c975169d1",
"text": "Bitcoin owes its success to the fact that transactions are transparently recorded in the blockchain, a global public ledger that removes the need for trusted parties. Unfortunately, recording every transaction in the blockchain causes privacy, latency, and scalability issues. Building on recent proposals for \"micropayment channels\" --- two party associations that use the ledger only for dispute resolution --- we introduce techniques for constructing anonymous payment channels. Our proposals allow for secure, instantaneous and private payments that substantially reduce the storage burden on the payment network. Specifically, we introduce three channel proposals, including a technique that allows payments via untrusted intermediaries. We build a concrete implementation of our scheme and show that it can be deployed via a soft fork to existing anonymous currencies such as ZCash.",
"title": ""
},
{
"docid": "2702017be1794708ccec26c569a0a5ad",
"text": "Although a common pain response, whether swearing alters individuals' experience of pain has not been investigated. This study investigated whether swearing affects cold-pressor pain tolerance (the ability to withstand immersing the hand in icy water), pain perception and heart rate. In a repeated measures design, pain outcomes were assessed in participants asked to repeat a swear word versus a neutral word. In addition, sex differences and the roles of pain catastrophising, fear of pain and trait anxiety were explored. Swearing increased pain tolerance, increased heart rate and decreased perceived pain compared with not swearing. However, swearing did not increase pain tolerance in males with a tendency to catastrophise. The observed pain-lessening (hypoalgesic) effect may occur because swearing induces a fight-or-flight response and nullifies the link between fear of pain and pain perception.",
"title": ""
},
{
"docid": "69bfc5edab903692887371464d6eecb0",
"text": "In recent days text summarization had tremendous growth in all languages, especially in India regional languages. Yet the performance of such system needs improvement. This paper proposes an extractive Malayalam summarizer which reduces redundancy in summarized content and meaning of sentences are considered for summary generation. A semantic graph is created for entire document and summary generated by reducing graph using minimal spanning tree algorithm.",
"title": ""
}
] | scidocsrr |
e982cf99edeaf681206fcf5daaff79f7 | Lip reading using a dynamic feature of lip images and convolutional neural networks | [
{
"docid": "d5c4e44514186fa1d82545a107e87c94",
"text": "Recent research in computer vision has increasingly focused on building systems for observing humans and understanding their look, activities, and behavior providing advanced interfaces for interacting with humans, and creating sensible models of humans for various purposes. This paper presents a new algorithm for detecting moving objects from a static background scene based on frame difference. Firstly, the first frame is captured through the static camera and after that sequence of frames is captured at regular intervals. Secondly, the absolute difference is calculated between the consecutive frames and the difference image is stored in the system. Thirdly, the difference image is converted into gray image and then translated into binary image. Finally, morphological filtering is done to remove noise.",
"title": ""
}
] | [
{
"docid": "adb02577e7fba530c2406fbf53571d14",
"text": "Event-related potentials (ERPs) recorded from the human scalp can provide important information about how the human brain normally processes information and about how this processing may go awry in neurological or psychiatric disorders. Scientists using or studying ERPs must strive to overcome the many technical problems that can occur in the recording and analysis of these potentials. The methods and the results of these ERP studies must be published in a way that allows other scientists to understand exactly what was done so that they can, if necessary, replicate the experiments. The data must then be analyzed and presented in a way that allows different studies to be compared readily. This paper presents guidelines for recording ERPs and criteria for publishing the results.",
"title": ""
},
{
"docid": "720a3d65af4905cbffe74ab21d21dd3f",
"text": "Fluorescent carbon nanoparticles or carbon quantum dots (CQDs) are a new class of carbon nanomaterials that have emerged recently and have garnered much interest as potential competitors to conventional semiconductor quantum dots. In addition to their comparable optical properties, CQDs have the desired advantages of low toxicity, environmental friendliness low cost and simple synthetic routes. Moreover, surface passivation and functionalization of CQDs allow for the control of their physicochemical properties. Since their discovery, CQDs have found many applications in the fields of chemical sensing, biosensing, bioimaging, nanomedicine, photocatalysis and electrocatalysis. This article reviews the progress in the research and development of CQDs with an emphasis on their synthesis, functionalization and technical applications along with some discussion on challenges and perspectives in this exciting and promising field.",
"title": ""
},
{
"docid": "e86ad4e9b61df587d9e9e96ab4eb3978",
"text": "This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.",
"title": ""
},
{
"docid": "e85b5115a489835bc58a48eaa727447a",
"text": "State-of-the art machine learning methods such as deep learning rely on large sets of hand-labeled training data. Collecting training data is prohibitively slow and expensive, especially when technical domain expertise is required; even the largest technology companies struggle with this challenge. We address this critical bottleneck with Snorkel, a new system for quickly creating, managing, and modeling training sets. Snorkel enables users to generate large volumes of training data by writing labeling functions, which are simple functions that express heuristics and other weak supervision strategies. These user-authored labeling functions may have low accuracies and may overlap and conflict, but Snorkel automatically learns their accuracies and synthesizes their output labels. Experiments and theory show that surprisingly, by modeling the labeling process in this way, we can train high-accuracy machine learning models even using potentially lower-accuracy inputs. Snorkel is currently used in production at top technology and consulting companies, and used by researchers to extract information from electronic health records, after-action combat reports, and the scientific literature. In this demonstration, we focus on the challenging task of information extraction, a common application of Snorkel in practice. Using the task of extracting corporate employment relationships from news articles, we will demonstrate and build intuition for a radically different way of developing machine learning systems which allows us to effectively bypass the bottleneck of hand-labeling training data.",
"title": ""
},
{
"docid": "4eec5be6b29425e025f9e1b23b742639",
"text": "There is increasing interest in sharing the experience of products and services on the web platform, and social media has opened a way for product and service providers to understand their consumers needs and expectations. This paper explores reviews by cloud consumers that reflect consumers experiences with cloud services. The reviews of around 6,000 cloud service users were analysed using sentiment analysis to identify the attitude of each review, and to determine whether the opinion expressed was positive, negative, or neutral. The analysis used two data mining tools, KNIME and RapidMiner, and the results were compared. We developed four prediction models in this study to predict the sentiment of users reviews. The proposed model is based on four supervised machine learning algorithms: K-Nearest Neighbour (k-NN), Nave Bayes, Random Tree, and Random Forest. The results show that the Random Forest predictions achieve 97.06% accuracy, which makes this model a better prediction model than the other three.",
"title": ""
},
{
"docid": "b988525d515588da8becc18c2aa21e82",
"text": "Numerical optimization has been used as an extension of vehicle dynamics simulation in order to reproduce trajectories and driving techniques used by expert race drivers and investigate the effects of several vehicle parameters in the stability limit operation of the vehicle. In this work we investigate how different race-driving techniques may be reproduced by considering different optimization cost functions. We introduce a bicycle model with suspension dynamics and study the role of the longitudinal load transfer in limit vehicle operation, i.e., when the tires operate at the adhesion limit. Finally we demonstrate that for certain vehicle configurations the optimal trajectory may include large slip angles (drifting), which matches the techniques used by rally-race drivers.",
"title": ""
},
{
"docid": "73d3f51bdb913749665674ae8aea3a41",
"text": "Extracting and validating emotional cues through analysis of users' facial expressions is of high importance for improving the level of interaction in man machine communication systems. Extraction of appropriate facial features and consequent recognition of the user's emotional state that can be robust to facial expression variations among different users is the topic of this paper. Facial animation parameters (FAPs) defined according to the ISO MPEG-4 standard are extracted by a robust facial analysis system, accompanied by appropriate confidence measures of the estimation accuracy. A novel neurofuzzy system is then created, based on rules that have been defined through analysis of FAP variations both at the discrete emotional space, as well as in the 2D continuous activation-evaluation one. The neurofuzzy system allows for further learning and adaptation to specific users' facial expression characteristics, measured though FAP estimation in real life application of the system, using analysis by clustering of the obtained FAP values. Experimental studies with emotionally expressive datasets, generated in the EC IST ERMIS project indicate the good performance and potential of the developed technologies.",
"title": ""
},
{
"docid": "d59c6a2dd4b6bf7229d71f3ae036328a",
"text": "Community search over large graphs is a fundamental problem in graph analysis. Recent studies propose to compute top-k influential communities, where each reported community not only is a cohesive subgraph but also has a high influence value. The existing approaches to the problem of top-k influential community search can be categorized as index-based algorithms and online search algorithms without indexes. The index-based algorithms, although being very efficient in conducting community searches, need to pre-compute a specialpurpose index and only work for one built-in vertex weight vector. In this paper, we investigate online search approaches and propose an instance-optimal algorithm LocalSearch whose time complexity is linearly proportional to the size of the smallest subgraph that a correct algorithm needs to access without indexes. In addition, we also propose techniques to make LocalSearch progressively compute and report the communities in decreasing influence value order such that k does not need to be specified. Moreover, we extend our framework to the general case of top-k influential community search regarding other cohesiveness measures. Extensive empirical studies on real graphs demonstrate that our algorithms outperform the existing online search algorithms by several orders of magnitude.",
"title": ""
},
{
"docid": "fc09e1c012016c75418ec33dfe5868d5",
"text": "Big data is the word used to describe structured and unstructured data. The term big data is originated from the web search companies who had to query loosely structured very large",
"title": ""
},
{
"docid": "36787667e41db8d9c164e39a89f0c533",
"text": "This paper presents an improvement of the well-known conventional three-phase diode bridge rectifier with dc output capacitor. The proposed circuit increases the power factor (PF) at the ac input and reduces the ripple current stress on the smoothing capacitor. The basic concept is the arrangement of an active voltage source between the output of the diode bridge and the smoothing capacitor which is controlled in a way that it emulates an ideal smoothing inductor. With this the input currents of the diode bridge which usually show high peak amplitudes are converted into a 120/spl deg/ rectangular shape which ideally results in a total PF of 0.955. The active voltage source mentioned before is realized by a low-voltage switch-mode converter stage of small power rating as compared to the output power of the rectifier. Starting with a brief discussion of basic three-phase rectifier techniques and of the drawbacks of three-phase diode bridge rectifiers with capacitive smoothing, the concept of the proposed active smoothing is described and the stationary operation is analyzed. Furthermore, control concepts as well as design considerations and analyses of the dynamic systems behavior are given. Finally, measurements taken from a laboratory model are presented.",
"title": ""
},
{
"docid": "1d1cec012f9f78b40a0931ae5dea53d0",
"text": "Recursive subdivision using interval arithmetic allows us to render CSG combinations of implicit function surfaces with or without anti -aliasing, Related algorithms will solve the collision detection problem for dynamic simulation, and allow us to compute mass. center of gravity, angular moments and other integral properties required for Newtonian dynamics. Our hidden surface algorithms run in ‘constant time.’ Their running times are nearly independent of the number of primitives in a scene, for scenes in which the visible details are not much smaller than the pixels. The collision detection and integration algorithms are utterly robust — collisions are never missed due 10 numerical error and we can provide guaranteed bounds on the values of integrals. CR",
"title": ""
},
{
"docid": "c24bd4156e65d57eda0add458304988c",
"text": "Graphene is enabling a plethora of applications in a wide range of fields due to its unique electrical, mechanical, and optical properties. Among them, graphene-based plasmonic miniaturized antennas (or shortly named, graphennas) are garnering growing interest in the field of communications. In light of their reduced size, in the micrometric range, and an expected radiation frequency of a few terahertz, graphennas offer means for the implementation of ultra-short-range wireless communications. Motivated by their high radiation frequency and potentially wideband nature, this paper presents a methodology for the time-domain characterization and evaluation of graphennas. The proposed framework is highly vertical, as it aims to build a bridge between technological aspects, antenna design, and communications. Using this approach, qualitative and quantitative analyses of a particular case of graphenna are carried out as a function of two critical design parameters, namely, chemical potential and carrier mobility. The results are then compared to the performance of equivalent metallic antennas. Finally, the suitability of graphennas for ultra-short-range communications is briefly discussed.",
"title": ""
},
{
"docid": "ed509de8786ee7b4ba0febf32d0c87f7",
"text": "Threat detection and analysis are indispensable processes in today's cyberspace, but current state of the art threat detection is still limited to specific aspects of modern malicious activities due to the lack of information to analyze. By measuring and collecting various types of data, from traffic information to human behavior, at different vantage points for a long duration, the viewpoint seems to be helpful to deeply inspect threats, but faces scalability issues as the amount of collected data grows, since more computational resources are required for the analysis. In this paper, we report our experience from operating the Hadoop platform, called MATATABI, for threat detections, and present the micro-benchmarks with four different backends of data processing in typical use cases such as log data and packet trace analysis. The benchmarks demonstrate the advantages of distributed computation in terms of performance. Our extensive use cases of analysis modules showcase the potential benefit of deploying our threat analysis platform.",
"title": ""
},
{
"docid": "90f188c1f021c16ad7c8515f1244c08a",
"text": "Minimally invasive principles should be the driving force behind rehabilitating young individuals affected by severe dental erosion. The maxillary anterior teeth of a patient, class ACE IV, has been treated following the most conservatory approach, the Sandwich Approach. These teeth, if restored by conventional dentistry (eg, crowns) would have required elective endodontic therapy and crown lengthening. To preserve the pulp vitality, six palatal resin composite veneers and four facial ceramic veneers were delivered instead with minimal, if any, removal of tooth structure. In this article, the details about the treatment are described.",
"title": ""
},
{
"docid": "895d5b01e984ef072b834976e0dfe378",
"text": "Cross-lingual or cross-domain correspondences play key roles in tasks ranging from machine translation to transfer learning. Recently, purely unsupervised methods operating on monolingual embeddings have become effective alignment tools. Current state-of-theart methods, however, involve multiple steps, including heuristic post-hoc refinement strategies. In this paper, we cast the correspondence problem directly as an optimal transport (OT) problem, building on the idea that word embeddings arise from metric recovery algorithms. Indeed, we exploit the GromovWasserstein distance that measures how similarities between pairs of words relate across languages. We show that our OT objective can be estimated efficiently, requires little or no tuning, and results in performance comparable with the state-of-the-art in various unsupervised word translation tasks.",
"title": ""
},
{
"docid": "caf866341ad9f74b1ac1dc8572f6e95c",
"text": "One important but often overlooked aspect of human contexts of ubiquitous computing environment is human’s emotional status. And, there are no realistic and robust humancentric contents services so far, because there are few considers about combining context awareness computing with wearable computing for improving suitability of contents to each user’s needs. In this paper, we discuss combining context awareness computing with wearable computing to develop more effective personalized services. And we propose new algorithms to develop efficiently personalized emotion based content service system.",
"title": ""
},
{
"docid": "ec26505d813ed98ac3f840ea54358873",
"text": "In this paper we address cardinality estimation problem which is an important subproblem in query optimization. Query optimization is a part of every relational DBMS responsible for finding the best way of the execution for the given query. These ways are called plans. The execution time of different plans may differ by several orders, so query optimizer has a great influence on the whole DBMS performance. We consider cost-based query optimization approach as the most popular one. It was observed that costbased optimization quality depends much on cardinality estimation quality. Cardinality of the plan node is the number of tuples returned by it. In the paper we propose a novel cardinality estimation approach with the use of machine learning methods. The main point of the approach is using query execution statistics of the previously executed queries to improve cardinality estimations. We called this approach adaptive cardinality estimation to reflect this point. The approach is general, flexible, and easy to implement. The experimental evaluation shows that this approach significantly increases the quality of cardinality estimation, and therefore increases the DBMS performance for some queries by several times or even by several dozens of times.",
"title": ""
},
{
"docid": "06ba0cd00209a7f4f200395b1662003e",
"text": "Changes in human DNA methylation patterns are an important feature of cancer development and progression and a potential role in other conditions such as atherosclerosis and autoimmune diseases (e.g., multiple sclerosis and lupus) is being recognised. The cancer genome is frequently characterised by hypermethylation of specific genes concurrently with an overall decrease in the level of 5 methyl cytosine. This hypomethylation of the genome largely affects the intergenic and intronic regions of the DNA, particularly repeat sequences and transposable elements, and is believed to result in chromosomal instability and increased mutation events. This review examines our understanding of the patterns of cancer-associated hypomethylation, and how recent advances in understanding of chromatin biology may help elucidate the mechanisms underlying repeat sequence demethylation. It also considers how global demethylation of repeat sequences including transposable elements and the site-specific hypomethylation of certain genes might contribute to the deleterious effects that ultimately result in the initiation and progression of cancer and other diseases. The use of hypomethylation of interspersed repeat sequences and genes as potential biomarkers in the early detection of tumors and their prognostic use in monitoring disease progression are also examined.",
"title": ""
},
{
"docid": "ff08d2e0d53f2d9a7d49f0fdd820ec7a",
"text": "Milk contains numerous nutrients. The content of n-3 fatty acids, the n-6/n-3 ratio, and short- and medium-chain fatty acids may promote positive health effects. In Western societies, cow’s milk fat is perceived as a risk factor for health because it is a source of a high fraction of saturated fatty acids. Recently, there has been increasing interest in donkey’s milk. In this work, the fat and energetic value and acidic composition of donkey’s milk, with reference to human nutrition, and their variations during lactation, were investigated. We also discuss the implications of the acidic profile of donkey’s milk on human nutrition. Individual milk samples from lactating jennies were collected 15, 30, 45, 60, 90, 120, 150, 180 and 210days after foaling, for the analysis of fat, proteins and lactose, which was achieved using an infrared milk analyser, and fatty acids composition by gas chromatography. The donkey’s milk was characterised by low fat and energetic (1719.2kJ·kg-1) values, a high polyunsaturated fatty acids (PUFA) content of mainly α-linolenic acid (ALA) and linoleic acid (LA), a low n-6 to n-3 FA ratio or LA/ALA ratio, and advantageous values of atherogenic and thrombogenic indices. Among the minor PUFA, docosahesaenoic (DHA), eicosapentanoic (EPA), and arachidonic (AA) acids were present in very small amounts (<1%). In addition, the AA/EPA ratio was low (0.18). The fat and energetic values decreased (P < 0.01) during lactation. The fatty acid patterns were affected by the lactation stage and showed a decrease (P < 0.01) in saturated fatty acids content and an increase (P < 0.01) in the unsaturated fatty acids content. The n-6 to n-3 ratio and the LA/ALA ratio were approximately 2:1, with values <1 during the last period of lactation, suggesting the more optimal use of milk during this period. The high level of unsaturated/saturated fatty acids and PUFA-n3 content and the low n-6/n-3 ratio suggest the use of donkey’s milk as a functional food for human nutrition and its potential utilisation for infant nutrition as well as adult diets, particular for the elderly.",
"title": ""
},
{
"docid": "5daeccb1a01df4f68f23c775828be41d",
"text": "This article surveys the research and development of Engineered Cementitious Composites (ECC) over the last decade since its invention in the early 1990’s. The importance of micromechanics in the materials design strategy is emphasized. Observations of unique characteristics of ECC based on a broad range of theoretical and experimental research are examined. The advantageous use of ECC in certain categories of structural, and repair and retrofit applications is reviewed. While reflecting on past advances, future challenges for continued development and deployment of ECC are noted. This article is based on a keynote address given at the International Workshop on Ductile Fiber Reinforced Cementitious Composites (DFRCC) – Applications and Evaluations, sponsored by the Japan Concrete Institute, and held in October 2002 at Takayama, Japan.",
"title": ""
}
] | scidocsrr |
6dc8bd3bc0c04c92fc132f2697cdf226 | Combining control-flow integrity and static analysis for efficient and validated data sandboxing | [
{
"docid": "83c81ecb870e84d4e8ab490da6caeae2",
"text": "We introduceprogram shepherding, a method for monitoring control flow transfers during program execution to enforce a security policy. Shepherding ensures that malicious code masquerading as data is never executed, thwarting a large class of security attacks. Shepherding can also enforce entry points as the only way to execute shared library code. Furthermore, shepherding guarantees that sandboxing checks around any type of program operation will never be bypassed. We have implemented these capabilities efficiently in a runtime system with minimal or no performance penalties. This system operates on unmodified native binaries, requires no special hardware or operating system support, and runs on existing IA-32 machines.",
"title": ""
}
] | [
{
"docid": "d945ae2fe20af58c2ca4812c797d361d",
"text": "Triple-negative breast cancers (TNBC) are genetically characterized by aberrations in TP53 and a low rate of activating point mutations in common oncogenes, rendering it challenging in applying targeted therapies. We performed whole-exome sequencing (WES) and RNA sequencing (RNA-seq) to identify somatic genetic alterations in mouse models of TNBCs driven by loss of Trp53 alone or in combination with Brca1 Amplifications or translocations that resulted in elevated oncoprotein expression or oncoprotein-containing fusions, respectively, as well as frameshift mutations of tumor suppressors were identified in approximately 50% of the tumors evaluated. Although the spectrum of sporadic genetic alterations was diverse, the majority had in common the ability to activate the MAPK/PI3K pathways. Importantly, we demonstrated that approved or experimental drugs efficiently induce tumor regression specifically in tumors harboring somatic aberrations of the drug target. Our study suggests that the combination of WES and RNA-seq on human TNBC will lead to the identification of actionable therapeutic targets for precision medicine-guided TNBC treatment.Significance: Using combined WES and RNA-seq analyses, we identified sporadic oncogenic events in TNBC mouse models that share the capacity to activate the MAPK and/or PI3K pathways. Our data support a treatment tailored to the genetics of individual tumors that parallels the approaches being investigated in the ongoing NCI-MATCH, My Pathway Trial, and ESMART clinical trials. Cancer Discov; 8(3); 354-69. ©2017 AACR.See related commentary by Natrajan et al., p. 272See related article by Matissek et al., p. 336This article is highlighted in the In This Issue feature, p. 253.",
"title": ""
},
{
"docid": "e2ce393fade02f0dfd20b9aca25afd0f",
"text": "This paper presents a comparative lightning performance study conducted on a 275 kV double circuit shielded transmission line using two software programs, TFlash and Sigma-Slp. The line performance was investigated by using both a single stroke and a statistical performance analysis and considering cases of shielding failure and backflashover. A sensitivity analysis was carried out to determine the relationship between the flashover rate and the parameters influencing it. To improve the lightning performance of the line, metal oxide surge arresters were introduced using different phase and line locations. Optimised arrester arrangements are proposed.",
"title": ""
},
{
"docid": "42b810b7ecd48590661cc5a538bec427",
"text": "Most algorithms that rely on deep learning-based approaches to generate 3D point sets can only produce clouds containing fixed number of points. Furthermore, they typically require large networks parameterized by many weights, which makes them hard to train. In this paper, we propose an auto-encoder architecture that can both encode and decode clouds of arbitrary size and demonstrate its effectiveness at upsampling sparse point clouds. Interestingly, we can do so using less than half as many parameters as state-of-the-art architectures while still delivering better performance. We will make our code base fully available.",
"title": ""
},
{
"docid": "ca41837dd01a66259854c03b820a46ff",
"text": "We present a supervised sequence to sequence transduction model with a hard attention mechanism which combines the more traditional statistical alignment methods with the power of recurrent neural networks. We evaluate the model on the task of morphological inflection generation and show that it provides state of the art results in various setups compared to the previous neural and non-neural approaches. Eventually we present an analysis of the learned representations for both hard and soft attention models, shedding light on the features such models extract in order to solve the task.",
"title": ""
},
{
"docid": "05d8383eb6b1c6434f75849859c35fd0",
"text": "This paper proposes a robust approach for image based floor detection and segmentation from sequence of images or video. In contrast to many previous approaches, which uses a priori knowledge of the surroundings, our method uses combination of modified sparse optical flow and planar homography for ground plane detection which is then combined with graph based segmentation for extraction of floor from images. We also propose a probabilistic framework which makes our method adaptive to the changes in the surroundings. We tested our algorithm on several common indoor environment scenarios and were able to extract floor even under challenging circumstances. We obtained extremely satisfactory results in various practical scenarios such as where the floor and non floor areas are of same color, in presence of textured flooring, and where illumination changes are steep.",
"title": ""
},
{
"docid": "f91ba4b37a2a9d80e5db5ace34e6e50a",
"text": "Bearing currents and shaft voltages of an induction motor are measured under hardand soft-switching inverter excitation. The objective is to investigate whether the soft-switching technologies can provide solutions for reducing the bearing currents and shaft voltages. Two of the prevailing soft-switching inverters, the resonant dc-link inverter and the quasi-resonant dc-link inverter, are tested. The results are compared with those obtained using the conventional hard-switching inverter. To ensure objective comparisons between the softand hard-switching inverters, all inverters were configured identically and drove the same induction motor under the same operating conditions when the test data were collected. An insightful explanation of the experimental results is also provided to help understand the mechanisms of bearing currents and shaft voltages produced in the inverter drives. Consistency between the bearing current theory and the experimental results has been demonstrated. Conclusions are then drawn regarding the effectiveness of the soft-switching technologies as a solution to the bearing current and shaft voltage problems.",
"title": ""
},
{
"docid": "3eaba817610278c4b1a82036ccfb6cc4",
"text": "We propose to use thought-provoking children's questions (TPCQs), namely Highlights BrainPlay questions, to drive artificial intelligence research. These questions are designed to stimulate thought and learning in children , and they can be used to do the same thing in AI systems. We introduce the TPCQ task, which consists of taking a TPCQ question as input and producing as output both (1) answers to the question and (2) learned generalizations. We discuss how BrainPlay questions stimulate learning. We analyze 244 BrainPlay questions, and we report statistics on question type, question class, answer cardinality, answer class, types of knowledge needed, and types of reasoning needed. We find that BrainPlay questions span many aspects of intelligence. We envision an AI system based on the society of mind (Minsky 1986; Minsky 2006) consisting of a multilevel architecture with diverse resources that run in parallel to jointly answer and learn from questions. Because the answers to BrainPlay questions and the generalizations learned from them are often highly open-ended, we suggest using human judges for evaluation.",
"title": ""
},
{
"docid": "b4b20c33b7f683cfead2fede8088f09b",
"text": "Bus protection is typically a station-wide protection function, as it uses the majority of the high voltage (HV) electrical signals available in a substation. All current measurements that define the bus zone of protection are needed. Voltages may be included in bus protection relays, as the number of voltages is relatively low, so little additional investment is not needed to integrate them into the protection system. This paper presents a new Distributed Bus Protection System that represents a step forward in the concept of a Smart Substation solution. This Distributed Bus Protection System has been conceived not only as a protection system, but as a platform that incorporates the data collection from the HV equipment in an IEC 61850 process bus scheme. This new bus protection system is still a distributed bus protection solution. As opposed to dedicated bay units, this system uses IEC 61850 process interface units (that combine both merging units and contact I/O) for data collection. The main advantage then, is that as the bus protection is deployed, it is also deploying the platform to do data collection for other protection, control, and monitoring functions needed in the substation, such as line, transformer, and feeder. By installing the data collection pieces, this provides for the simplification of engineering tasks, and substantial savings in wiring, number of components, cabinets, installation, and commissioning. In this way the new bus protection system is the gateway to process bus, as opposed to an addon to a process bus system. The paper analyzes and describes the new Bus Protection System as a new conceptual design for a Smart Substation, highlighting the advantages in a vision that comprises not only a single element, but the entire installation. Keyword: Current Transformer, Digital Fault Recorder, Fiber Optic Cable, International Electro Technical Commission, Process Interface Units",
"title": ""
},
{
"docid": "ca6001c3ed273b4f23565f4d40ddeb29",
"text": "Learning semantic representations and tree structures of bilingual phrases is beneficial for statistical machine translation. In this paper, we propose a new neural network model called Bilingual Correspondence Recursive Autoencoder (BCorrRAE) to model bilingual phrases in translation. We incorporate word alignments into BCorrRAE to allow it freely access bilingual constraints at different levels. BCorrRAE minimizes a joint objective on the combination of a recursive autoencoder reconstruction error, a structural alignment consistency error and a crosslingual reconstruction error so as to not only generate alignment-consistent phrase structures, but also capture different levels of semantic relations within bilingual phrases. In order to examine the effectiveness of BCorrRAE, we incorporate both semantic and structural similarity features built on bilingual phrase representations and tree structures learned by BCorrRAE into a state-of-the-art SMT system. Experiments on NIST Chinese-English test sets show that our model achieves a substantial improvement of up to 1.55 BLEU points over the baseline.",
"title": ""
},
{
"docid": "f698b77df48a5fac4df7ba81b4444dd5",
"text": "Discontinuous-conduction mode (DCM) operation is usually employed in DC-DC converters for small inductor on printed circuit board (PCB) and high efficiency at light load. However, it is normally difficult for synchronous converter to realize the DCM operation, especially in high frequency applications, which requires a high speed and high precision comparator to detect the zero crossing point at cost of extra power losses. In this paper, a novel zero current detector (ZCD) circuit with an adaptive delay control loop for high frequency synchronous buck converter is presented. Compared to the conventional ZCD, proposed technique is proven to offer 8.5% efficiency enhancement when performed in a buck converter at the switching frequency of 4MHz and showed less sensitivity to the transistor mismatch of the sensor circuit.",
"title": ""
},
{
"docid": "5bebef3a6ca0d595b6b3232e18f8789f",
"text": "The usability of a software product has recently become a key software quality factor. The International Organization for Standardization (ISO) has developed a variety of models to specify and measure software usability but these individual models do not support all usability aspects. Furthermore, they are not yet well integrated into current software engineering practices and lack tool support. The aim of this research is to survey the actual representation (meanings and interpretations) of usability in ISO standards, indicate some of existing limitations and address them by proposing an enhanced, normative model for the evaluation of software usability.",
"title": ""
},
{
"docid": "bac623d79d39991032fc46cc215b9fdd",
"text": "The convergence of mobile computing and cloud computing enables new mobile applications that are both resource-intensive and interactive. For these applications, end-to-end network bandwidth and latency matter greatly when cloud resources are used to augment the computational power and battery life of a mobile device. This dissertation designs and implements a new architectural element called a cloudlet, that arises from the convergence of mobile computing and cloud computing. Cloudlets represent the middle tier of a 3-tier hierarchy, mobile device — cloudlet — cloud, to achieve the right balance between cloud consolidation and network responsiveness. We first present quantitative evidence that shows cloud location can affect the performance of mobile applications and cloud consolidation. We then describe an architectural solution using cloudlets that are a seamless extension of todays cloud computing infrastructure. Finally, we define minimal functionalities that cloudlets must offer above/beyond standard cloud computing, and address corresponding technical challenges.",
"title": ""
},
{
"docid": "0b71458d700565bec9b91318023243df",
"text": "The Humor Styles Questionnaire (HSQ; Martin et al., 2003) is one of the most frequently used questionnaires in humor research and has been adapted to several languages. The HSQ measures four humor styles (affiliative, self-enhancing, aggressive, and self-defeating), which should be adaptive or potentially maladaptive to psychosocial well-being. The present study analyzes the internal consistency, factorial validity, and factorial invariance of the HSQ on the basis of several German-speaking samples combined (total N = 1,101). Separate analyses were conducted for gender (male/female), age groups (16-24, 25-35, >36 years old), and countries (Germany/Switzerland). Internal consistencies were good for the overall sample and the demographic subgroups (.80-.89), with lower values obtained for the aggressive scale (.66-.73). Principal components and confirmatory factor analyses mostly supported the four-factor structure of the HSQ. Weak factorial invariance was found across gender and age groups, while strong factorial invariance was supported across countries. Two subsamples also provided self-ratings on ten styles of humorous conduct (n = 344) and of eight comic styles (n = 285). The four HSQ scales showed small to large correlations to the styles of humorous conduct (-.54 to .65) and small to medium correlations to the comic styles (-.27 to .42). The HSQ shared on average 27.5-35.0% of the variance with the styles of humorous conduct and 13.0-15.0% of the variance with the comic styles. Thus-despite similar labels-these styles of humorous conduct and comic styles differed from the HSQ humor styles.",
"title": ""
},
{
"docid": "e677799d3bee1b25e74dc6c547c1b6c2",
"text": "Street View serves millions of Google users daily with panoramic imagery captured in hundreds of cities in 20 countries across four continents. A team of Google researchers describes the technical challenges involved in capturing, processing, and serving street-level imagery on a global scale.",
"title": ""
},
{
"docid": "fdaf0a7bc6dfa30d0c3ed3a96950d8c8",
"text": "In this article we exploit the discrete-time dynamics of a single neuron with self-connection to systematically design simple signal filters. Due to hysteresis effects and transient dynamics, this single neuron behaves as an adjustable low-pass filter for specific parameter configurations. Extending this neuro-module by two more recurrent neurons leads to versatile highand band-pass filters. The approach presented here helps to understand how the dynamical properties of recurrent neural networks can be used for filter design. Furthermore, it gives guidance to a new way of implementing sensory preprocessing for acoustic signal recognition in autonomous robots.",
"title": ""
},
{
"docid": "2af0ef7c117ace38f44a52379c639e78",
"text": "Examination of a child with genital or anal disease may give rise to suspicion of sexual abuse. Dermatologic, traumatic, infectious, and congenital disorders may be confused with sexual abuse. Seven children referred to us are representative of such confusion.",
"title": ""
},
{
"docid": "52017fa7d6cf2e6a18304b121225fc6f",
"text": "In comparison to dense matrices multiplication, sparse matrices multiplication real performance for CPU is roughly 5–100 times lower when expressed in GFLOPs. For sparse matrices, microprocessors spend most of the time on comparing matrices indices rather than performing floating-point multiply and add operations. For 16-bit integer operations, like indices comparisons, computational power of the FPGA significantly surpasses that of CPU. Consequently, this paper presents a novel theoretical study how matrices sparsity factor influences the indices comparison to floating-point operation workload ratio. As a result, a novel FPGAs architecture for sparse matrix-matrix multiplication is presented for which indices comparison and floating-point operations are separated. We also verified our idea in practice, and the initial implementations results are very promising. To further decrease hardware resources required by the floating-point multiplier, a reduced width multiplication is proposed in the case when IEEE-754 standard compliance is not required.",
"title": ""
},
{
"docid": "6341eaeb32d0e25660de6be6d3943e81",
"text": "Theorists have speculated that primary psychopathy (or Factor 1 affective-interpersonal features) is prominently heritable whereas secondary psychopathy (or Factor 2 social deviance) is more environmentally determined. We tested this differential heritability hypothesis using a large adolescent twin sample. Trait-based proxies of primary and secondary psychopathic tendencies were assessed using Multidimensional Personality Questionnaire (MPQ) estimates of Fearless Dominance and Impulsive Antisociality, respectively. The environmental contexts of family, school, peers, and stressful life events were assessed using multiple raters and methods. Consistent with prior research, MPQ Impulsive Antisociality was robustly associated with each environmental risk factor, and these associations were significantly greater than those for MPQ Fearless Dominance. However, MPQ Fearless Dominance and Impulsive Antisociality exhibited similar heritability, and genetic effects mediated the associations between MPQ Impulsive Antisociality and the environmental measures. Results were largely consistent across male and female twins. We conclude that gene-environment correlations rather than main effects of genes and environments account for the differential environmental correlates of primary and secondary psychopathy.",
"title": ""
},
{
"docid": "47ef46ef69a23e393d8503154f110a81",
"text": "Question answering (Q&A) communities have been gaining popularity in the past few years. The success of such sites depends mainly on the contribution of a small number of expert users who provide a significant portion of the helpful answers, and so identifying users that have the potential of becoming strong contributers is an important task for owners of such communities.\n We present a study of the popular Q&A website StackOverflow (SO), in which users ask and answer questions about software development, algorithms, math and other technical topics. The dataset includes information on 3.5 million questions and 6.9 million answers created by 1.3 million users in the years 2008--2012. Participation in activities on the site (such as asking and answering questions) earns users reputation, which is an indicator of the value of that user to the site.\n We describe an analysis of the SO reputation system, and the participation patterns of high and low reputation users. The contributions of very high reputation users to the site indicate that they are the primary source of answers, and especially of high quality answers. Interestingly, we find that while the majority of questions on the site are asked by low reputation users, on average a high reputation user asks more questions than a user with low reputation. We consider a number of graph analysis methods for detecting influential and anomalous users in the underlying user interaction network, and find they are effective in detecting extreme behaviors such as those of spam users. Lastly, we show an application of our analysis: by considering user contributions over first months of activity on the site, we predict who will become influential long-term contributors.",
"title": ""
},
{
"docid": "028be19d9b8baab4f5982688e41bfec8",
"text": "The activation function for neurons is a prominent element in the deep learning architecture for obtaining high performance. Inspired by neuroscience findings, we introduce and define two types of neurons with different activation functions for artificial neural networks: excitatory and inhibitory neurons, which can be adaptively selected by selflearning. Based on the definition of neurons, in the paper we not only unify the mainstream activation functions, but also discuss the complementariness among these types of neurons. In addition, through the cooperation of excitatory and inhibitory neurons, we present a compositional activation function that leads to new state-of-the-art performance comparing to rectifier linear units. Finally, we hope that our framework not only gives a basic unified framework of the existing activation neurons to provide guidance for future design, but also contributes neurobiological explanations which can be treated as a window to bridge the gap between biology and computer science.",
"title": ""
}
] | scidocsrr |
ec3c9b3126a6eef574a0668a06629594 | Comparison of Unigram, Bigram, HMM and Brill's POS tagging approaches for some South Asian languages | [
{
"docid": "89aa60cefe11758e539f45c5cba6f48a",
"text": "For undergraduate or advanced undergraduate courses in Classical Natural Language Processing, Statistical Natural Language Processing, Speech Recognition, Computational Linguistics, and Human Language Processing. An explosion of Web-based language techniques, merging of distinct fields, availability of phone-based dialogue systems, and much more make this an exciting time in speech and language processing. The first of its kind to thoroughly cover language technology at all levels and with all modern technologies this text takes an empirical approach to the subject, based on applying statistical and other machine-learning algorithms to large corporations. The authors cover areas that traditionally are taught in different courses, to describe a unified vision of speech and language processing. Emphasis is on practical applications and scientific evaluation. An accompanying Website contains teaching materials for instructors, with pointers to language processing resources on the Web. The Second Edition offers a significant amount of new and extended material. Supplements: Click on the \"Resources\" tab to View Downloadable Files:Solutions Power Point Lecture Slides Chapters 1-5, 8-10, 12-13 and 24 Now Available! For additional resourcse visit the author website: http://www.cs.colorado.edu/~martin/slp.html",
"title": ""
}
] | [
{
"docid": "b428ee2a14b91fee7bb80058e782774d",
"text": "Recurrent connectionist networks are important because they can perform temporally extended tasks, giving them considerable power beyond the static mappings performed by the now-familiar multilayer feedforward networks. This ability to perform highly nonlinear dynamic mappings makes these networks particularly interesting to study and potentially quite useful in tasks which have an important temporal component not easily handled through the use of simple tapped delay lines. Some examples are tasks involving recognition or generation of sequential patterns and sensorimotor control. This report examines a number of learning procedures for adjusting the weights in recurrent networks in order to train such networks to produce desired temporal behaviors from input-output stream examples. The procedures are all based on the computation of the gradient of performance error with respect to network weights, and a number of strategies for computing the necessary gradient information are described. Included here are approaches which are familiar and have been rst described elsewhere, along with several novel approaches. One particular purpose of this report is to provide uniform and detailed descriptions and derivations of the various techniques in order to emphasize how they relate to one another. Another important contribution of this report is a detailed analysis of the computational requirements of the various approaches discussed.",
"title": ""
},
{
"docid": "4cb25adf48328e1e9d871940a97fdff2",
"text": "This article is concerned with parameters identification problems and computer modeling of thrust generation subsystem for small unmanned aerial vehicles (UAV) quadrotor type. In this paper approach for computer model generation of dynamic process of thrust generation subsystem that consists of fixed pitch propeller, EC motor and power amplifier, is considered. Due to the fact that obtainment of aerodynamic characteristics of propeller via analytical approach is quite time-consuming, and taking into account that subsystem consists of as well as propeller, motor and power converter with microcontroller control system, which operating algorithm is not always available from manufacturer, receiving trusted computer model of thrust generation subsystem via analytical approach is impossible. Identification of the system under investigation is performed from the perspective of “black box” with the known qualitative description of proceeded there dynamic processes. For parameters identification of subsystem special laboratory rig that described in this paper was designed.",
"title": ""
},
{
"docid": "3e570e415690daf143ea30a8554b0ac8",
"text": "Innovative technology on intelligent processes for smart home applications that utilize Internet of Things (IoT) is mainly limited and dispersed. The available trends and gaps were investigated in this study to provide valued visions for technical environments and researchers. Thus, a survey was conducted to create a coherent taxonomy on the research landscape. An extensive search was conducted for articles on (a) smart homes, (b) IoT and (c) applications. Three databases, namely, IEEE Explore, ScienceDirect and Web of Science, were used in the article search. These databases comprised comprehensive literature that concentrate on IoT-based smart home applications. Subsequently, filtering process was achieved on the basis of intelligent processes. The final classification scheme outcome of the dataset contained 40 articles that were classified into four classes. The first class includes the knowledge engineering process that examines data representation to identify the means of accomplishing a task for IoT applications and their utilisation in smart homes. The second class includes papers on the detection process that uses artificial intelligence (AI) techniques to capture the possible changes in IoT-based smart home applications. The third class comprises the analytical process that refers to the use of AI techniques to understand the underlying problems in smart homes by inferring new knowledge and suggesting appropriate solutions for the problem. The fourth class comprises the control process that describes the process of measuring and instructing the performance of IoT-based smart home applications against the specifications with the involvement of intelligent techniques. The basic features of this evolving approach were then identified in the aspects of motivation of intelligent process utilisation for IoT-based smart home applications and open-issue restriction utilisation. The recommendations for the approval and utilisation of intelligent process for IoT-based smart home applications were also determined from the literature.",
"title": ""
},
{
"docid": "5288f4bbc2c9b8531042ce25b8df05b0",
"text": "Existing neural machine translation systems do not explicitly model what has been translated and what has not during the decoding phase. To address this problem, we propose a novel mechanism that separates the source information into two parts: translated Past contents and untranslated Future contents, which are modeled by two additional recurrent layers. The Past and Future contents are fed to both the attention model and the decoder states, which provides Neural Machine Translation (NMT) systems with the knowledge of translated and untranslated contents. Experimental results show that the proposed approach significantly improves the performance in Chinese-English, German-English, and English-German translation tasks. Specifically, the proposed model outperforms the conventional coverage model in terms of both the translation quality and the alignment error rate.",
"title": ""
},
{
"docid": "997a0392359ae999dfca6a0d339ea27f",
"text": "Five types of anomalous behaviour which may occur in paged virtual memory operating systems are defined. One type of anomaly, for example, concerns the fact that, with certain reference strings and paging algorithms, an increase in mean memory allocation may result in an increase in fault rate. Two paging algorithms, the page fault frequency and working set algorithms, are examined in terms of their anomaly potential, and reference string examples of various anomalies are presented. Two paging algorithm properties, the inclusion property and the generalized inclusion property, are discussed and the anomaly implications of these properties presented.",
"title": ""
},
{
"docid": "112f10eb825a484850561afa7c23e71f",
"text": "We describe an image based rendering approach that generalizes many current image based rendering algorithms, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations (i.e., not restricted to a plane or to any specific manifold). In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. When presented with fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. The algorithm achieves this flexibility because it is designed to meet a set of specific goals that we describe. We demonstrate this flexibility with a variety of examples.",
"title": ""
},
{
"docid": "13150a58d86b796213501d26e4b41e5b",
"text": "In this work, CoMoO4@NiMoO4·xH2O core-shell heterostructure electrode is directly grown on carbon fabric (CF) via a feasible hydrothermal procedure with CoMoO4 nanowires (NWs) as the core and NiMoO4 nanosheets (NSs) as the shell. This core-shell heterostructure could provide fast ion and electron transfer, a large number of active sites, and good strain accommodation. As a result, the CoMoO4@NiMoO4·xH2O electrode yields high-capacitance performance with a high specific capacitance of 1582 F g-1, good cycling stability with the capacitance retention of 97.1% after 3000 cycles and good rate capability. The electrode also shows excellent mechanical flexibility. Also, a flexible Fe2O3 nanorods/CF electrode with enhanced electrochemical performance was prepared. A solid-state asymmetric supercapacitor device is successfully fabricated by using flexible CoMoO4@NiMoO4·xH2O as the positive electrode and Fe2O3 as the negative electrode. The asymmetric supercapacitor with a maximum voltage of 1.6 V demonstrates high specific energy (41.8 Wh kg-1 at 700 W kg-1), high power density (12000 W kg-1 at 26.7 Wh kg-1), and excellent cycle ability with the capacitance retention of 89.3% after 5000 cycles (at the current density of 3A g-1).",
"title": ""
},
{
"docid": "27d8022f6545503c1145d46dfd30c1db",
"text": "Research has demonstrated support for objectification theory and has established that music affects listeners’ thoughts and behaviors, however, no research to date joins these two fields. The present study considers potential effects of objectifying hip hop songs on female listeners. Among African American participants, exposure to an objectifying song resulted in increased self-objectification. However, among White participants, exposure to an objectifying song produced no measurable difference in self-objectification. This finding along with interview data suggests that white women distance themselves from objectifying hip hop songs, preventing negative effects of such music. EFFECTS OF OBJECTIFYING HIP HOP 3 The Effects of Objectifying Hip-Hop Lyrics on Female Listeners Music is an important part of adolescents’ and young adults’ lives. It is a way to learn about our social world, express emotions, and relax (Agbo-Quaye, 2010). Music today is highly social, shared and listened to in social situations as a way to bolster the mood or experience. However, the effects of music are not always positive. Considering this, how does music affect young adults? Specifically, how does hip-hop music with objectifying lyrics affect female listeners? To begin to answer this question, I will first present previous research on music’s effects, specifically the effects of aggressive, sexualized, and misogynistic lyrics. Next, I will discuss theories regarding the processing of lyrics. Another important aspect of this question is objectification theory, thus I will explain this theory and the evidence to support it. I will then discuss further applications of this theory to various visual media forms. Finally, I will describe gaps in research, as well as the importance of this study. Multiple studies have looked at the effects of music’s lyrics on listeners. Various aspects and trends in popular music have been considered. Anderson, Carnagey, and Eubanks (2003) examined the effects of songs with violent lyrics on listeners. Participants who had been exposed to songs with violent lyrics reported feeling more hostile than those who listened to songs with non-violent lyrics. Those exposed to violent lyrics also had an increase in aggressive thoughts. Researchers also considered trait hostility and found that, although correlated with state hostility, it did not account for the differences in condition. Other studies have explored music’s effects on behavior. One such study considered the effects of exposure to sexualized lyrics (Carpentier, Knobloch-Westerwick, & Blumhoff, 2007). After exposure to overtly sexualized pop lyrics, participants rated potential romantic partners EFFECTS OF OBJECTIFYING HIP HOP 4 with a stronger emphasis on sexual appeal in comparison to the ratings of those participants who heard nonsexual pop songs. Another study exposed male participants to either sexually aggressive misogynistic lyrics or neutral lyrics (Fischer & Greitemeyer, 2006). Those participants who had been exposed to the sexually aggressive lyrics demonstrated more aggressive behaviors towards females. The study was replicated with female participants and aggressive man-hating lyrics and similar results were found. Similarly, another study found that exposure to misogynous rap music influenced sexually aggressive behaviors (Barongan & Hall, 1995). 
Participants were exposed to either misogynous or neutral rap songs and then presented with three vignettes and were informed they would have to select one to share with a female confederate. Those who listened to the misogynous song selected the assaultive vignette at a significantly higher rate. The selection of the assaultive vignette demonstrated sexually aggressive behavior. These studies demonstrate the real and disturbing effects that music can have on listener’s behaviors. There are multiple theories as to why these lyrical effects are found. Some researchers suggest that social learning and cultivation theories are responsible (Sprankle & End, 2009). Both theories argue that our thoughts and our actions are influenced by what we see. Social learning theory suggests that observing others’ behaviors and the responses they receive will influence the observer’s behavior. As most rap music depicts the positive outcomes of increased sexual activity and objectification of women and downplays or omits the negative outcomes, listeners will start to engage in these activities and consider them acceptable. Cultivation theory argues that the more a person observes the world of sex portrayed in objectifying music, the more likely they are to believe that that world is reality. That is, the more they see “evidence” of EFFECTS OF OBJECTIFYING HIP HOP 5 the attitudes and behaviors portrayed in hip hop, the more likely they are to believe that such behaviors are normal. Cobb and Boettcher (2007) suggest that theories of priming and social stereotyping support the findings that exposure to misogynistic music increases sexist views. They also suggest that some observed gender differences in these responses are the result of different kinds of information processing. Women, as the targets of these lyrics, will process misogynistic lyrics centrally and will attempt to understand the information they are receiving more thoroughly. Thus, they are more likely to reject the lyrics. This finding highlights the importance of attention and how the lyrics are received and the impact these factors can have on listeners’ reactions. These theories were supported in their study as participants exposed to misogynistic music demonstrated few differences from the control group, in which participants were not exposed to any music, in levels of hostile and benevolent sexism (Cobb & Boettcher, 2007). However, exposure to nonmisogynistic rap resulted in significantly increased levels of hostile and benevolent sexism. Researchers suggested that this may be because the processing of misogynistic lyrics meant that listeners were aware of the sexism present in the lyrics and thus the music was unable to prime their latent sexism. However, we live in a society in which rap music is associated with misogyny and violence (Fried, 1999). When participants listened to nonmisogynistic lyrics this association was primed. Because the lyrics weren’t explicit the processing involved was not critical and these assumptions went unchallenged and latent sexism was primed. Objectification theory provides another hypothesis for the processing and potential effects of media. Objectification theory posits that in a society in which women are frequently objectified, that is, seen as bodies that perform tasks rather than as people, women begin to selfEFFECTS OF OBJECTIFYING HIP HOP 6 objectify, or see themselves as objects for others’ viewing (Fredrickson & Roberts, 1997). They internalize an outsider’s perspective of their body. 
This self-objectification comes with anxiety and shame as well as frequent appearance monitoring (Fredrickson & Roberts, 1997). The authors suggest that the frequent objectification and self-objectification that occurs in our society could contribute to depression and eating disorders. They also suggest that frequent selfmonitoring, shame, and anxiety could make it difficult to reach and maintain peak motivational states (that is, an extended period of time in which we are voluntarily absorbed in a challenging physical or mental task with the goal of accomplishing something that’s considered worthwhile). These states are psychologically beneficial. Multiple studies support this theory. One such study looked at the effects of being in a self-objectifying state on the ability to reach and maintain a peak motivational state (Fredrickson, Roberts, Noll, Quinn, & Twenge, 1998). Participants were asked to try on either a swimsuit or a sweater and spend some time in that article of clothing. After this time they were asked questions about their self-objectifying behaviors and attitudes, such as depressed mood, self-surveillance, and body shame. They were then asked to complete a difficult math task, an activity meant to produce a peak motivational state. A similar study was completed with members of different ethnic groups (Hebl, King, & Lin, 2004). In this study a nearly identical procedure was followed. In addition, researchers aimed to create a more objectifying state for men, having them wear Speedos rather than swim trunks. In both of these studies female participants wearing swimsuits performed significantly worse on the math test than female participants wearing sweaters. There were no significant differences between the swim trunks and sweater conditions for male participants. However, when male participants wore Speedos they performed significantly worse on the math test. Further, the results of measures of self-objectifying EFFECTS OF OBJECTIFYING HIP HOP 7 behaviors, like body shame and surveillance, were significantly higher for those in the swimsuit condition. These findings demonstrate support for objectification theory and suggest that it crosses ethnic boundaries. The decreased math scores for men in Speedos suggest that it is possible to put anyone in a self-objectifying state. However, it is women who most often find themselves in this situation in our society. With empirical support for the central premises of objectification theory, research has turned to effects of popular media on self-objectification of women. One such study looked at the links between music video consumption, self-surveillance, body esteem, dieting status, depressive symptoms, and math confidence (Grabe & Hyde, 2009). Researchers found a positive relationship between music video consumption, self-objectification, and the host of psychological factors proposed by Fredrickson and Roberts, such that as music video consumption increased, so did self-objectifying behaviors. Another study looked at the effects of portrayals of the thin ideal in m",
"title": ""
},
{
"docid": "41a54cd203b0964a6c3d9c2b3addff46",
"text": "Increasing occupancy rates and revenue by improving customer experience is the aim of modern hospitality organizations. To achieve these results, hotel managers need to have a deep knowledge of customers’ needs, behavior, and preferences and be aware of the ways in which the services delivered create value for the customers and then stimulate their retention and loyalty. In this article a methodological framework to analyze the guest–hotel relationship and to profile hotel guests is discussed, focusing on the process of designing a customer information system and particularly the guest information matrix on which the system database will be built.",
"title": ""
},
{
"docid": "b333be40febd422eae4ae0b84b8b9491",
"text": "BACKGROUND\nRarely, basal cell carcinomas (BCCs) have the potential to become extensively invasive and destructive, a phenomenon that has led to the term \"locally advanced BCC\" (laBCC). We identified and described the diverse settings that could be considered \"locally advanced\".\n\n\nMETHODS\nThe panel of experts included oncodermatologists, dermatological and maxillofacial surgeons, pathologists, radiotherapists and geriatricians. During a 1-day workshop session, an interactive flow/sequence of questions and inputs was debated.\n\n\nRESULTS\nDiscussion of nine cases permitted us to approach consensus concerning what constitutes laBCC. The expert panel retained three major components for the complete assessment of laBCC cases: factors of complexity related to the tumour itself, factors related to the operability and the technical procedure, and factors related to the patient. Competing risks of death should be precisely identified. To ensure homogeneous multidisciplinary team (MDT) decisions in different clinical settings, the panel aimed to develop a practical tool based on the three components.\n\n\nCONCLUSION\nThe grid presented is not a definitive tool, but rather, it is a method for analysing the complexity of laBCC.",
"title": ""
},
{
"docid": "b0d11ab83aa6ae18d1a2be7c8e8803b5",
"text": "Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response-as the untrustworthiness of faces increased so did the amygdala response. Areas in the left and right putamen, the latter area extended into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic--strongest for faces on both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension.",
"title": ""
},
{
"docid": "508ce0c5126540ad7f46b8f375c50df8",
"text": "Sex differences in children’s toy preferences are thought by many to arise from gender socialization. However, evidence from patients with endocrine disorders suggests that biological factors during early development (e.g., levels of androgens) are influential. In this study, we found that vervet monkeys (Cercopithecus aethiops sabaeus) show sex differences in toy preferences similar to those documented previously in children. The percent of contact time with toys typically preferred by boys (a car and a ball) was greater in male vervets (n = 33) than in female vervets (n = 30) (P < .05), whereas the percent of contact time with toys typically preferred by girls (a doll and a pot) was greater in female vervets than in male vervets (P < .01). In contrast, contact time with toys preferred equally by boys and girls (a picture book and a stuffed dog) was comparable in male and female vervets. The results suggest that sexually differentiated object preferences arose early in human evolution, prior to the emergence of a distinct hominid lineage. This implies that sexually dimorphic preferences for features (e.g., color, shape, movement) may have evolved from differential selection pressures based on the different behavioral roles of males and females, and that evolved object feature preferences may contribute to present day sexually dimorphic toy preferences in children. D 2002 Elsevier Science Inc. All rights reserved.",
"title": ""
},
{
"docid": "8405f30ca5f4bd671b056e9ca1f4d8df",
"text": "The remarkable manipulative skill of the human hand is not the result of rapid sensorimotor processes, nor of fast or powerful effector mechanisms. Rather, the secret lies in the way manual tasks are organized and controlled by the nervous system. At the heart of this organization is prediction. Successful manipulation requires the ability both to predict the motor commands required to grasp, lift, and move objects and to predict the sensory events that arise as a consequence of these commands.",
"title": ""
},
{
"docid": "913777c94a55329ddf42955900a51096",
"text": "In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal.",
"title": ""
},
{
"docid": "659deeead04953483a3ed6c5cc78cd76",
"text": "We describe ParsCit, a freely available, open-source imple entation of a reference string parsing package. At the core of ParsCit is a trained conditional random field (CRF) model used to label th token sequences in the reference string. A heuristic model wraps this core with added functionality to identify reference string s from a plain text file, and to retrieve the citation contexts . The package comes with utilities to run it as a web service or as a standalone uti lity. We compare ParsCit on three distinct reference string datasets and show that it compares well with other previously published work.",
"title": ""
},
{
"docid": "6f410e93fa7ab9e9c4a7a5710fea88e2",
"text": "We propose a fast, scalable locality-sensitive hashing method for the problem of retrieving similar physiological waveform time series. When compared to the naive k-nearest neighbor search, the method vastly speeds up the retrieval time of similar physiological waveforms without sacrificing significant accuracy. Our result shows that we can achieve 95% retrieval accuracy or better with up to an order of magnitude of speed-up. The extra time required in advance to create the optimal data structure is recovered when query quantity equals 15% of the repository, while the method incurs a trivial additional memory cost. We demonstrate the effectiveness of this method on an arterial blood pressure time series dataset extracted from the ICU physiological waveform repository of the MIMIC-II database.",
"title": ""
},
{
"docid": "fe77a632bae11d9333cd867960e47375",
"text": "Here we present a projection augmented reality (AR) based assistive robot, which we call the Pervasive Assistive Robot System (PARS). The PARS aims to improve the quality of life by of the elderly and less able-bodied. In particular, the proposed system will support dynamic display and monitoring systems, which will be helpful for older adults who have difficulty moving their limbs and who have a weak memory.We attempted to verify the usefulness of the PARS using various scenarios. We expected that PARSs will be used as assistive robots for people who experience physical discomfort in their daily lives.",
"title": ""
},
{
"docid": "97af4f8e35a7d773bb85969dd027800b",
"text": "For an intelligent transportation system (ITS), traffic incident detection is one of the most important issues, especially for urban area which is full of signaled intersections. In this paper, we propose a novel traffic incident detection method based on the image signal processing and hidden Markov model (HMM) classifier. First, a traffic surveillance system was set up at a typical intersection of china, traffic videos were recorded and image sequences were extracted for image database forming. Second, compressed features were generated through several image processing steps, image difference with FFT was used to improve the recognition rate. Finally, HMM was used for classification of traffic signal logics (East-West, West-East, South-North, North-South) and accident of crash, the total correct rate is 74% and incident recognition rate is 84%. We believe, with more types of incident adding to the database, our detection algorithm could serve well for the traffic surveillance system.",
"title": ""
}
] | scidocsrr |
9751bcc37c86fa0f0834e3c7a3ce1381 | Robust Capped Norm Nonnegative Matrix Factorization: Capped Norm NMF | [
{
"docid": "ed9e22167d3e9e695f67e208b891b698",
"text": "ÐIn k-means clustering, we are given a set of n data points in d-dimensional space R and an integer k and the problem is to determine a set of k points in R, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index TermsÐPattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.",
"title": ""
}
] | [
{
"docid": "0dd78cb46f6d2ddc475fd887a0dc687c",
"text": "Predicting items a user would like on the basis of other users’ ratings for these items has become a well-established strategy adopted by many recommendation services on the Internet. Although this can be seen as a classification problem, algorithms proposed thus far do not draw on results from the machine learning literature. We propose a representation for collaborative filtering tasks that allows the application of virtually any machine learning algorithm. We identify the shortcomings of current collaborative filtering techniques and propose the use of learning algorithms paired with feature extraction techniques that specifically address the limitations of previous approaches. Our best-performing algorithm is based on the singular value decomposition of an initial matrix of user ratings, exploiting latent structure that essentially eliminates the need for users to rate common items in order to become predictors for one another's preferences. We evaluate the proposed algorithm on a large database of user ratings for motion pictures and find that our approach significantly outperforms current collaborative filtering algorithms.",
"title": ""
},
{
"docid": "299deaffdd1a494fc754b9e940ad7f81",
"text": "In this work, we study an important problem: learning programs from input-output examples. We propose a novel method to learn a neural program operating a domain-specific non-differentiable machine, and demonstrate that this method can be applied to learn programs that are significantly more complex than the ones synthesized before: programming language parsers from input-output pairs without knowing the underlying grammar. The main challenge is to train the neural program without supervision on execution traces. To tackle it, we propose: (1) LL machines and neural programs operating them to effectively regularize the space of the learned programs; and (2) a two-phase reinforcement learning-based search technique to train the model. Our evaluation demonstrates that our approach can successfully learn to parse programs in both an imperative language and a functional language, and achieve 100% test accuracy, while existing approaches’ accuracies are almost 0%. This is the first successful demonstration of applying reinforcement learning to train a neural program operating a non-differentiable machine that can fully generalize to test sets on a non-trivial task.",
"title": ""
},
{
"docid": "f58d69de4b5bcc4100a3bfe3426fa81f",
"text": "This study evaluated the factor structure of the Rosenberg Self-Esteem Scale (RSES) with a diverse sample of 1,248 European American, Latino, Armenian, and Iranian adolescents. Adolescents completed the 10-item RSES during school as part of a larger study on parental influences and academic outcomes. Findings suggested that method effects in the RSES are more strongly associated with negatively worded items across three diverse groups but also more pronounced among ethnic minority adolescents. Findings also suggested that accounting for method effects is necessary to avoid biased conclusions regarding cultural differences in selfesteem and how predictors are related to the RSES. Moreover, the two RSES factors (positive self-esteem and negative self-esteem) were differentially predicted by parenting behaviors and academic motivation. Substantive and methodological implications of these findings for crosscultural research on adolescent self-esteem are discussed.",
"title": ""
},
{
"docid": "f2a9d15d9b38738d563f9d9f9fa5d245",
"text": "Mobile cellular networks have become both the generators and carriers of massive data. Big data analytics can improve the performance of mobile cellular networks and maximize the revenue of operators. In this paper, we introduce a unified data model based on the random matrix theory and machine learning. Then, we present an architectural framework for applying the big data analytics in the mobile cellular networks. Moreover, we describe several illustrative examples, including big signaling data, big traffic data, big location data, big radio waveforms data, and big heterogeneous data, in mobile cellular networks. Finally, we discuss a number of open research challenges of the big data analytics in the mobile cellular networks.",
"title": ""
},
{
"docid": "232eabfb63f0b908ef3a64d0731ba358",
"text": "This paper reviews the potential of spin-transfer torque devices as an alternative to complementary metal-oxide-semiconductor for non-von Neumann and non-Boolean computing. Recent experiments on spin-transfer torque devices have demonstrated high-speed magnetization switching of nanoscale magnets with small current densities. Coupled with other properties, such as nonvolatility, zero leakage current, high integration density, we discuss that the spin-transfer torque devices can be inherently suitable for some unconventional computing models for information processing. We review several spintronic devices in which magnetization can be manipulated by current induced spin transfer torque and explore their applications in neuromorphic computing and reconfigurable memory-based computing.",
"title": ""
},
{
"docid": "dc6aafe2325dfdea5e758a30c90d8940",
"text": "When a query is submitted to a search engine, the search engine returns a dynamically generated result page containing the result records, each of which usually consists of a link to and/or snippet of a retrieved Web page. In addition, such a result page often also contains information irrelevant to the query, such as information related to the hosting site of the search engine and advertisements. In this paper, we present a technique for automatically producing wrappers that can be used to extract search result records from dynamically generated result pages returned by search engines. Automatic search result record extraction is very important for many applications that need to interact with search engines such as automatic construction and maintenance of metasearch engines and deep Web crawling. The novel aspect of the proposed technique is that it utilizes both the visual content features on the result page as displayed on a browser and the HTML tag structures of the HTML source file of the result page. Experimental results indicate that this technique can achieve very high extraction accuracy.",
"title": ""
},
{
"docid": "7b1dad9f2e8a2a454fe01bab4cca47a3",
"text": "We describe a method to train spiking deep networks that can be run using leaky integrate-and-fire (LIF) neurons, achieving state-of-the-art results for spiking LIF networks on five datasets, including the large ImageNet ILSVRC-2012 benchmark. Our method for transforming deep artificial neural networks into spiking networks is scalable and works with a wide range of neural nonlinearities. We achieve these results by softening the neural response function, such that its derivative remains bounded, and by training the network with noise to provide robustness against the variability introduced by spikes. Our analysis shows that implementations of these networks on neuromorphic hardware will be many times more power-efficient than the equivalent non-spiking networks on traditional hardware.",
"title": ""
},
{
"docid": "ecd541de66690a9f2aa5341646a63742",
"text": "The purpose is to determine whether use of perioperative antibiotics for more than 24 h decreases the incidence of SSI in neonates and infants. We studied neonates and infants who had clean–contaminated or contaminated gastrointestinal operations from 1996 to 2006. Patient- and operation-related variables, duration of perioperative antibiotics, and SSI within 30 days were ascertained by retrospective chart review. In assessing the effects of antibiotic duration, we controlled for confounding by indication using standard covariate adjustment and propensity score matching. Among 732 operations, the incidence of SSI was 13 %. Using propensity score matching, the odds of SSI were similar (OR 1.1, 95 % CI 0.6–1.9) in patients who received ≤24 h of postoperative antibiotics compared to >24 h. No difference was also found in standard covariate adjustment. This multivariate model identified three independent predictors of SSI: preoperative infection (OR 3.9, 95 % CI 1.4–10.9) and re-operation through the same incision, both within 30 days (OR 3.5, 95 % CI 1.7–7.4) and later (OR 2.3, 95 % CI 1.4–3.8). In clean–contaminated and contaminated gastrointestinal operations, giving >24 h of postoperative antibiotics offered no protection against SSI. An adequately powered randomized clinical trial is needed to conclusively evaluate longer duration antibiotic prophylaxis.",
"title": ""
},
{
"docid": "382eec3778d98cb0c8445633c16f59ef",
"text": "In the face of acute global competition, supplier management is rapidly emerging as a crucial issue to any companies striving for business success and sustainable development. To optimise competitive advantages, a company should incorporate ‘suppliers’ as an essential part of its core competencies. Supplier evaluation, the first step in supplier management, is a complex multiple criteria decision making (MCDM) problem, and its complexity is further aggravated if the highly important interdependence among the selection criteria is taken into consideration. The objective of this paper is to suggest a comprehensive decision method for identifying top suppliers by considering the effects of interdependence among the selection criteria. Proposed in this study is a hybrid model, which incorporates the technique of analytic network process (ANP) in which criteria weights are determined using fuzzy extent analysis, Technique for order performance by similarity to ideal solution (TOPSIS) under fuzzy environment is adopted to rank competing suppliers in terms of their overall performances. An example is solved to illustrate the effectiveness and feasibility of the suggested model.",
"title": ""
},
{
"docid": "bf8f46e4c85f7e45879cee4282444f78",
"text": "Influence of culture conditions such as light, temperature and C/N ratio was studied on growth of Haematococcus pluvialis and astaxanthin production. Light had significant effect on astaxanthin production and it varied with its intensity and direction of illumination and effective culture ratio (ECR, volume of culture medium/volume of flask). A 6-fold increase in astaxanthin production (37 mg/L) was achieved with 5.1468·107 erg·m−2·s−1 light intensity (high light, HL) at effective culture ratio of 0.13 compared to that at 0.52 ECR, while the difference in the astaxanthin production was less than 2 — fold between the effective culture ratios at 1.6175·107 erg·m−2·s−1 light intensity (low light, LL). Multidirectional (three-directional) light illumination considerably enhanced the astaxanthin production (4-fold) compared to unidirectional illumination. Cell count was high at low temperature (25 °C) while astaxanthin content was high at 35 °C in both autotrophic and heterotrophic media. In a heterotrophic medium at low C/N ratio H. pluvialis growth was higher with prolonged vegetative phase, while high C/N ratio favoured early encystment and higher astaxanthin formation.",
"title": ""
},
{
"docid": "1b5a28c875cf49eadac7032d3dd6398f",
"text": "This paper proposes a new approach to style, arising from our work on computational media using structural blending, which enriches the conceptual blending of cognitive linguistics with structure building operations in order to encompass syntax and narrative as well as metaphor. We have implemented both conceptual and structural blending, and conducted initial experiments with poetry, although the approach generalizes to other media. The central idea is to analyze style in terms of principles for blending, based on our £nding that very different principles from those of common sense blending are needed for some creative works.",
"title": ""
},
{
"docid": "77796f30d8d1604c459fb3f3fe841515",
"text": "The overall focus of this research is to demonstrate the savings potential generated by the integration of the design of strategic global supply chain networks with the determination of tactical production–distribution allocations and transfer prices. The logistics systems design problem is defined as follows: given a set of potential suppliers, potential manufacturing facilities, and distribution centers with multiple possible configurations, and customers with deterministic demands, determine the configuration of the production–distribution system and the transfer prices between various subsidiaries of the corporation such that seasonal customer demands and service requirements are met and the after tax profit of the corporation is maximized. The after tax profit is the difference between the sales revenue minus the total system cost and taxes. The total cost is defined as the sum of supply, production, transportation, inventory, and facility costs. Two models and their associated solution algorithms will be introduced. The savings opportunities created by designing the system with a methodology that integrates strategic and tactical decisions rather than in a hierarchical fashion are demonstrated with two case studies. The first model focuses on the setting of transfer prices in a global supply chain with the objective of maximizing the after tax profit of an international corporation. The constraints mandated by the national taxing authorities create a bilinear programming formulation. We will describe a very efficient heuristic iterative solution algorithm, which alternates between the optimization of the transfer prices and the material flows. Performance and bounds for the heuristic algorithms will be discussed. The second model focuses on the production and distribution allocation in a single country system, when the customers have seasonal demands. This model also needs to be solved as a subproblem in the heuristic solution of the global transfer price model. The research develops an integrated design methodology based on primal decomposition methods for the mixed integer programming formulation. The primal decomposition allows a natural split of the production and transportation decisions and the research identifies the necessary information flows between the subsystems. The primal decomposition method also allows a very efficient solution algorithm for this general class of large mixed integer programming models. Data requirements and solution times will be discussed for a real life case study in the packaging industry. 2002 Elsevier Science B.V. All rights reserved. European Journal of Operational Research 143 (2002) 1–18 www.elsevier.com/locate/dsw * Corresponding author. Tel.: +1-404-894-2317; fax: +1-404-894-2301. E-mail address: marc.goetschalckx@isye.gatech.edu (M. Goetschalckx). 0377-2217/02/$ see front matter 2002 Elsevier Science B.V. All rights reserved. PII: S0377-2217 (02 )00142-X",
"title": ""
},
{
"docid": "294d29b68d67d5be0d9fb88dd6329e34",
"text": "A semi-recurrent hybrid VAE-GAN model for generating sequential data is introduced. In order to consider the spatial correlation of the data in each frame of the generated sequence, CNNs are utilized in the encoder, generator, and discriminator. The subsequent frames are sampled from the latent distributions obtained by encoding the previous frames. As a result, the dependencies between the frames are maintained. Two testing frameworks for synthesizing a sequence with any number of frames are also proposed. The promising experimental results on piano music generation indicates the potential of the proposed framework in modelling other sequential data such as video.",
"title": ""
},
{
"docid": "b12049aac966497b17e075c2467151dd",
"text": "IV HLA-G and HLA-E alleles and RPL HLA-G and HLA-E gene polymorphism in patients with Idiopathic Recurrent Pregnancy Loss in Gaza strip",
"title": ""
},
{
"docid": "70a534183750abab91aa74710027a092",
"text": "We consider whether sentiment affects the profitability of momentum strategies. We hypothesize that news that contradicts investors’ sentiment causes cognitive dissonance, slowing the diffusion of such news. Thus, losers (winners) become underpriced under optimism (pessimism). Shortselling constraints may impede arbitraging of losers and thus strengthen momentum during optimistic periods. Supporting this notion, we empirically show that momentum profits arise only under optimism. An analysis of net order flows from small and large trades indicates that small investors are slow to sell losers during optimistic periods. Momentum-based hedge portfolios formed during optimistic periods experience long-run reversals. JFQ_481_2013Feb_Antoniou-Doukas-Subrahmanyam_ms11219_SH_FB_0122_DraftToAuthors.pdf",
"title": ""
},
{
"docid": "fb1c4605eb6663fdd04e9ac1579e7ff0",
"text": "We present an enhanced autonomous indoor navigation system for a stock quadcopter drone where all navigation commands are derived off-board on a base station. The base station processes the video stream transmitted from a forward-facing camera on the drone to determine the drone's physical disposition and trajectory in building hallways to derive steering commands that are sent to the drone. Off-board processing and the lack of on-board sensors for localizing the drone permits standard mid-range quadcopters to be used and conserves the limited power source on the quadcopter. We introduce improved and new techniques, compared to our prototype system [1], to maintain stable flights, estimate distance to hallway intersections and describe algorithms to stop the drone ahead of time and turn correctly at intersections.",
"title": ""
},
{
"docid": "a18da0c7d655fee44eebdf61c7371022",
"text": "This paper describes and compares a set of no-reference quality assessment algorithms for H.264/AVC encoded video sequences. These algorithms have in common a module that estimates the error due to lossy encoding of the video signals, using only information available on the compressed bitstream. In order to obtain perceived quality scores from the estimated error, three methods are presented: i) to weight the error estimates according to a perceptual model; ii) to linearly combine the mean squared error (MSE) estimates with additional video features; iii) to use MSE estimates as the input of a logistic function. The performances of the algorithms are evaluated using cross-validation procedures and the one showing the best performance is also in a preliminary study of quality assessment in the presence of transmission losses.",
"title": ""
},
{
"docid": "8734436dbd821d7a1bb0d2de97ba44d3",
"text": "What makes a face attractive and why do we have the preferences we do? Emergence of preferences early in development and cross-cultural agreement on attractiveness challenge a long-held view that our preferences reflect arbitrary standards of beauty set by cultures. Averageness, symmetry, and sexual dimorphism are good candidates for biologically based standards of beauty. A critical review and meta-analyses indicate that all three are attractive in both male and female faces and across cultures. Theorists have proposed that face preferences may be adaptations for mate choice because attractive traits signal important aspects of mate quality, such as health. Others have argued that they may simply be by-products of the way brains process information. Although often presented as alternatives, I argue that both kinds of selection pressures may have shaped our perceptions of facial beauty.",
"title": ""
},
{
"docid": "b02ebfa85f0948295b401152c0190d74",
"text": "SAGE has had a remarkable impact at Microsoft.",
"title": ""
}
] | scidocsrr |
20c49ce8a94be9f93d4a86ed7e1f84b6 | Context-Aware Correlation Filter Tracking | [
{
"docid": "d349cf385434027b4532080819d5745f",
"text": "Although not commonly used, correlation filters can track complex objects through rotations, occlusions and other distractions at over 20 times the rate of current state-of-the-art techniques. The oldest and simplest correlation filters use simple templates and generally fail when applied to tracking. More modern approaches such as ASEF and UMACE perform better, but their training needs are poorly suited to tracking. Visual tracking requires robust filters to be trained from a single frame and dynamically adapted as the appearance of the target object changes. This paper presents a new type of correlation filter, a Minimum Output Sum of Squared Error (MOSSE) filter, which produces stable correlation filters when initialized using a single frame. A tracker based upon MOSSE filters is robust to variations in lighting, scale, pose, and nonrigid deformations while operating at 669 frames per second. Occlusion is detected based upon the peak-to-sidelobe ratio, which enables the tracker to pause and resume where it left off when the object reappears.",
"title": ""
},
{
"docid": "aee250663a05106c4c0fad9d0f72828c",
"text": "Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. Recently, discriminatively learned correlation filters (DCF) have been successfully applied to address this problem for tracking. These methods utilize a periodic assumption of the training samples to efficiently learn a classifier on all patches in the target neighborhood. However, the periodic assumption also introduces unwanted boundary effects, which severely degrade the quality of the tracking model. We propose Spatially Regularized Discriminative Correlation Filters (SRDCF) for tracking. A spatial regularization component is introduced in the learning to penalize correlation filter coefficients depending on their spatial location. Our SRDCF formulation allows the correlation filters to be learned on a significantly larger set of negative training samples, without corrupting the positive samples. We further propose an optimization strategy, based on the iterative Gauss-Seidel method, for efficient online learning of our SRDCF. Experiments are performed on four benchmark datasets: OTB-2013, ALOV++, OTB-2015, and VOT2014. Our approach achieves state-of-the-art results on all four datasets. On OTB-2013 and OTB-2015, we obtain an absolute gain of 8.0% and 8.2% respectively, in mean overlap precision, compared to the best existing trackers.",
"title": ""
}
] | [
{
"docid": "49736d49ee7b777523064efcd99c5cbb",
"text": "Immune checkpoint antagonists (CTLA-4 and PD-1/PD-L1) and CAR T-cell therapies generate unparalleled durable responses in several cancers and have firmly established immunotherapy as a new pillar of cancer therapy. To extend the impact of immunotherapy to more patients and a broader range of cancers, targeting additional mechanisms of tumor immune evasion will be critical. Adenosine signaling has emerged as a key metabolic pathway that regulates tumor immunity. Adenosine is an immunosuppressive metabolite produced at high levels within the tumor microenvironment. Hypoxia, high cell turnover, and expression of CD39 and CD73 are important factors in adenosine production. Adenosine signaling through the A2a receptor expressed on immune cells potently dampens immune responses in inflamed tissues. In this article, we will describe the role of adenosine signaling in regulating tumor immunity, highlighting potential therapeutic targets in the pathway. We will also review preclinical data for each target and provide an update of current clinical activity within the field. Together, current data suggest that rational combination immunotherapy strategies that incorporate inhibitors of the hypoxia-CD39-CD73-A2aR pathway have great promise for further improving clinical outcomes in cancer patients.",
"title": ""
},
{
"docid": "721ff703dfafad6b1b330226c36ed641",
"text": "In the Narrowband Internet-of-Things (NB-IoT) LTE systems, the device shall be able to blindly lock to a cell within 200-KHz bandwidth and with only one receive antenna. In addition, the device is required to setup a call at a signal-to-noise ratio (SNR) of −12.6 dB in the extended coverage mode. A new set of synchronization signals have been introduced to provide data-aided synchronization and cell search. In this letter, we present a procedure for NB-IoT cell search and initial synchronization subject to the new challenges given the new specifications. Simulation results show that this method not only provides the required performance at very low SNRs, but also can be quickly camped on a cell, if any.",
"title": ""
},
{
"docid": "6420f394cb02e9415b574720a9c64e7f",
"text": "Interleaved power converter topologies have received increasing attention in recent years for high power and high performance applications. The advantages of interleaved boost converters include increased efficiency, reduced size, reduced electromagnetic emission, faster transient response, and improved reliability. The front end inductors in an interleaved boost converter are magnetically coupled to improve electrical performance and reduce size and weight. Compared to a direct coupled configuration, inverse coupling provides the advantages of lower inductor ripple current and negligible dc flux levels in the core. In this paper, we explore the possible advantages of core geometry on core losses and converter efficiency. Analysis of FEA simulation and empirical characterization data indicates a potential superiority of a square core, with symmetric 45deg energy storage corner gaps, for providing both ac flux balance and maximum dc flux cancellation when wound in an inverse coupled configuration.",
"title": ""
},
{
"docid": "9a2d79d9df9e596e26f8481697833041",
"text": "Novelty search is a recent artificial evolution technique that challenges traditional evolutionary approaches. In novelty search, solutions are rewarded based on their novelty, rather than their quality with respect to a predefined objective. The lack of a predefined objective precludes premature convergence caused by a deceptive fitness function. In this paper, we apply novelty search combined with NEAT to the evolution of neural controllers for homogeneous swarms of robots. Our empirical study is conducted in simulation, and we use a common swarm robotics task—aggregation, and a more challenging task—sharing of an energy recharging station. Our results show that novelty search is unaffected by deception, is notably effective in bootstrapping evolution, can find solutions with lower complexity than fitness-based evolution, and can find a broad diversity of solutions for the same task. Even in non-deceptive setups, novelty search achieves solution qualities similar to those obtained in traditional fitness-based evolution. Our study also encompasses variants of novelty search that work in concert with fitness-based evolution to combine the exploratory character of novelty search with the exploitatory character of objective-based evolution. We show that these variants can further improve the performance of novelty search. Overall, our study shows that novelty search is a promising alternative for the evolution of controllers for robotic swarms.",
"title": ""
},
{
"docid": "9ed5fdb991edd5de57ffa7f13121f047",
"text": "We analyze the increasing threats against IoT devices. We show that Telnet-based attacks that target IoT devices have rocketed since 2014. Based on this observation, we propose an IoT honeypot and sandbox, which attracts and analyzes Telnet-based attacks against various IoT devices running on different CPU architectures such as ARM, MIPS, and PPC. By analyzing the observation results of our honeypot and captured malware samples, we show that there are currently at least 5 distinct DDoS malware families targeting Telnet-enabled IoT devices and one of the families has quickly evolved to target more devices with as many as 9 different CPU architectures.",
"title": ""
},
{
"docid": "8c0588538b1b04193e80ef5ce5ad55a7",
"text": "Unlike traditional bipolar constrained liners, the Osteonics Omnifit constrained acetabular insert is a tripolar device, consisting of an inner bipolar bearing articulating within an outer, true liner. Every reported failure of the Omnifit tripolar implant has been by failure at the shell-bone interface (Type I failure), failure at the shell-liner interface (Type II failure), or failure of the locking mechanism resulting in dislocation of the bipolar-liner interface (Type III failure). In this report we present two cases of failure of the Omnifit tripolar at the bipolar-femoral head interface. To our knowledge, these are the first reported cases of failure at the bipolar-femoral head interface (Type IV failure). In addition, we described the first successful closed reduction of a Type IV failure.",
"title": ""
},
{
"docid": "536c739e6f0690580568a242e1d65ef3",
"text": "Intrusion Detection Systems (IDS) are key components for securing critical infrastructures, capable of detecting malicious activities on networks or hosts. However, the efficiency of an IDS depends primarily on both its configuration and its precision. The large amount of network traffic that needs to be analyzed, in addition to the increase in attacks’ sophistication, renders the optimization of intrusion detection an important requirement for infrastructure security, and a very active research subject. In the state of the art, a number of approaches have been proposed to improve the efficiency of intrusion detection and response systems. In this article, we review the works relying on decision-making techniques focused on game theory and Markov decision processes to analyze the interactions between the attacker and the defender, and classify them according to the type of the optimization problem they address. While these works provide valuable insights for decision-making, we discuss the limitations of these solutions as a whole, in particular regarding the hypotheses in the models and the validation methods. We also propose future research directions to improve the integration of game-theoretic approaches into IDS optimization techniques.",
"title": ""
},
{
"docid": "048cc782baeec3a7f46ef5ee7abf0219",
"text": "Autoerotic asphyxiation is an unusual but increasingly more frequently occurring phenomenon, with >1000 fatalities in the United States per year. Understanding of this manner of death is likewise increasing, as noted by the growing number of cases reported in the literature. However, this form of accidental death is much less frequently seen in females (male:female ratio >50:1), and there is correspondingly less literature on female victims of autoerotic asphyxiation. The authors present the case of a 31-year-old woman who died of an autoerotic ligature strangulation and review the current literature on the subject. The forensic examiner must be able to discern this syndrome from similar forms of accidental and suicidal death, and from homicidal hanging/strangulation.",
"title": ""
},
{
"docid": "a2f36e0f8abaa07124d446f6aa870491",
"text": "We explore the capabilities of Auto-Encoders to fuse the information available from cameras and depth sensors, and to reconstruct missing data, for scene understanding tasks. In particular we consider three input modalities: RGB images; depth images; and semantic label information. We seek to generate complete scene segmentations and depth maps, given images and partial and/or noisy depth and semantic data. We formulate this objective of reconstructing one or more types of scene data using a Multi-modal stacked Auto-Encoder. We show that suitably designed Multi-modal Auto-Encoders can solve the depth estimation and the semantic segmentation problems simultaneously, in the partial or even complete absence of some of the input modalities. We demonstrate our method using the outdoor dataset KITTI that includes LIDAR and stereo cameras. Our results show that as a means to estimate depth from a single image, our method is comparable to the state-of-the-art, and can run in real time (i.e., less than 40ms per frame). But we also show that our method has a significant advantage over other methods in that it can seamlessly use additional data that may be available, such as a sparse point-cloud and/or incomplete coarse semantic labels.",
"title": ""
},
{
"docid": "aa30fc0f921509b1f978aeda1140ffc0",
"text": "Arithmetic coding provides an e ective mechanism for removing redundancy in the encoding of data. We show how arithmetic coding works and describe an e cient implementation that uses table lookup as a fast alternative to arithmetic operations. The reduced-precision arithmetic has a provably negligible e ect on the amount of compression achieved. We can speed up the implementation further by use of parallel processing. We discuss the role of probability models and how they provide probability information to the arithmetic coder. We conclude with perspectives on the comparative advantages and disadvantages of arithmetic coding.",
"title": ""
},
{
"docid": "d7eb92756c8c3fb0ab49d7b101d96343",
"text": "Pretraining with language modeling and related unsupervised tasks has recently been shown to be a very effective enabling technology for the development of neural network models for language understanding tasks. In this work, we show that although language model-style pretraining is extremely effective at teaching models about language, it does not yield an ideal starting point for efficient transfer learning. By supplementing language model-style pretraining with further training on data-rich supervised tasks, we are able to achieve substantial additional performance improvements across the nine target tasks in the GLUE benchmark. We obtain an overall score of 76.9 on GLUE—a 2.3 point improvement over our baseline system adapted from Radford et al. (2018) and a 4.1 point improvement over Radford et al.’s reported score. We further use training data downsampling to show that the benefits of this supplementary training are even more pronounced in data-constrained regimes.",
"title": ""
},
{
"docid": "74ff09a1d3ca87a0934a1b9095c282c4",
"text": "The cancer metastasis suppressor protein KAI1/CD82 is a member of the tetraspanin superfamily. Recent studies have demonstrated that tetraspanins are palmitoylated and that palmitoylation contributes to the organization of tetraspanin webs or tetraspanin-enriched microdomains. However, the effect of palmitoylation on tetraspanin-mediated cellular functions remains obscure. In this study, we found that tetraspanin KAI1/CD82 was palmitoylated when expressed in PC3 metastatic prostate cancer cells and that palmitoylation involved all of the cytoplasmic cysteine residues proximal to the plasma membrane. Notably, the palmitoylation-deficient KAI1/CD82 mutant largely reversed the wild-type KAI1/CD82's inhibitory effects on migration and invasion of PC3 cells. Also, palmitoylation regulates the subcellular distribution of KAI1/CD82 and its association with other tetraspanins, suggesting that the localized interaction of KAI1/CD82 with tetraspanin webs or tetraspanin-enriched microdomains is important for KAI1/CD82's motility-inhibitory activity. Moreover, we found that KAI1/CD82 palmitoylation affected motility-related subcellular events such as lamellipodia formation and actin cytoskeleton organization and that the alteration of these processes likely contributes to KAI1/CD82's inhibition of motility. Finally, the reversal of cell motility seen in the palmitoylation-deficient KAI1/CD82 mutant correlates with regaining of p130(CAS)-CrkII coupling, a signaling step important for KAI1/CD82's activity. Taken together, our results indicate that palmitoylation is crucial for the functional integrity of tetraspanin KAI1/CD82 during the suppression of cancer cell migration and invasion.",
"title": ""
},
{
"docid": "136a2f401b3af00f0f79b991ab65658f",
"text": "Usage of online social business networks like LinkedIn and XING have become commonplace in today’s workplace. This research addresses the question of what factors drive the intention to use online social business networks. Theoretical frame of the study is the Technology Acceptance Model (TAM) and its extensions, most importantly the TAM2 model. Data has been collected via a Web Survey among users of LinkedIn and XING from January to April 2010. Of 541 initial responders 321 finished the questionnaire. Operationalization was tested using confirmatory factor analyses and causal hypotheses were evaluated by means of structural equation modeling. Core result is that the TAM2 model generally holds in the case of online social business network usage behavior, explaining 73% of the observed usage intention. This intention is most importantly driven by perceived usefulness, attitude towards usage and social norm, with the latter effecting both directly and indirectly over perceived usefulness. However, perceived ease of use has—contrary to hypothesis—no direct effect on the attitude towards usage of online social business networks. Social norm has a strong indirect influence via perceived usefulness on attitude and intention, creating a network effect for peer users. The results of this research provide implications for online social business network design and marketing. Customers seem to evaluate ease of use as an integral part of the usefulness of such a service which leads to a situation where it cannot be dealt with separately by a service provider. Furthermore, the strong direct impact of social norm implies application of viral and peerto-peer marketing techniques while it’s also strong indirect effect implies the presence of a network effect which stabilizes the ecosystem of online social business service vendors.",
"title": ""
},
{
"docid": "10423f367850761fd17cf1b146361f34",
"text": "OBJECTIVE\nDetection and characterization of microcalcification clusters in mammograms is vital in daily clinical practice. The scope of this work is to present a novel computer-based automated method for the characterization of microcalcification clusters in digitized mammograms.\n\n\nMETHODS AND MATERIAL\nThe proposed method has been implemented in three stages: (a) the cluster detection stage to identify clusters of microcalcifications, (b) the feature extraction stage to compute the important features of each cluster and (c) the classification stage, which provides with the final characterization. In the classification stage, a rule-based system, an artificial neural network (ANN) and a support vector machine (SVM) have been implemented and evaluated using receiver operating characteristic (ROC) analysis. The proposed method was evaluated using the Nijmegen and Mammographic Image Analysis Society (MIAS) mammographic databases. The original feature set was enhanced by the addition of four rule-based features.\n\n\nRESULTS AND CONCLUSIONS\nIn the case of Nijmegen dataset, the performance of the SVM was Az=0.79 and 0.77 for the original and enhanced feature set, respectively, while for the MIAS dataset the corresponding characterization scores were Az=0.81 and 0.80. Utilizing neural network classification methodology, the corresponding performance for the Nijmegen dataset was Az=0.70 and 0.76 while for the MIAS dataset it was Az=0.73 and 0.78. Although the obtained high classification performance can be successfully applied to microcalcification clusters characterization, further studies must be carried out for the clinical evaluation of the system using larger datasets. The use of additional features originating either from the image itself (such as cluster location and orientation) or from the patient data may further improve the diagnostic value of the system.",
"title": ""
},
{
"docid": "813a0d47405d133263deba0da6da27a8",
"text": "The demands on dielectric material measurements have increased over the years as electrical components have been miniaturized and device frequency bands have increased. Well-characterized dielectric measurements on thin materials are needed for circuit design, minimization of crosstalk, and characterization of signal-propagation speed. Bulk material applications have also increased. For accurate dielectric measurements, each measurement band and material geometry requires specific fixtures. Engineers and researchers must carefully match their material system and uncertainty requirements to the best available measurement system. Broadband measurements require transmission-line methods, and accurate measurements on low-loss materials are performed in resonators. The development of the most accurate methods for each application requires accurate fixture selection in terms of field geometry, accurate field models, and precise measurement apparatus.",
"title": ""
},
{
"docid": "e59b203f3b104553a84603240ea467eb",
"text": "Experimental art deployed in the Augmented Reality (AR) medium is contributing to a reconfiguration of traditional perceptions of interface, audience participation, and perceptual experience. Artists, critical engineers, and programmers, have developed AR in an experimental topology that diverges from both industrial and commercial uses of the medium. In a general technical sense, AR is considered as primarily an information overlay, a datafied window that situates virtual information in the physical world. In contradistinction, AR as experimental art practice activates critical inquiry, collective participation, and multimodal perception. As an emergent hybrid form that challenges and extends already established 'fine art' categories, augmented reality art deployed on Portable Media Devices (PMD’s) such as tablets & smartphones fundamentally eschews models found in the conventional 'art world.' It should not, however, be considered as inscribing a new 'model:' rather, this paper posits that the unique hybrids advanced by mobile augmented reality art–– also known as AR(t)–– are closely related to the notion of the 'machinic assemblage' ( Deleuze & Guattari 1987), where a deep capacity to re-assemble marks each new artevent. This paper develops a new formulation, the 'software assemblage,’ to explore some of the unique mixed reality situations that AR(t) has set in motion.",
"title": ""
},
{
"docid": "06c3f32f07418575c700e2f0925f4398",
"text": "The spacing of a fixed amount of study time across multiple sessions usually increases subsequent test performance*a finding known as the spacing effect. In the spacing experiment reported here, subjects completed multiple learning trials, and each included a study phase and a test. Once a subject achieved a perfect test, the remaining learning trials within that session comprised what is known as overlearning. The number of these overlearning trials was reduced when learning trials were spaced across multiple sessions rather than massed in a single session. In addition, the degree to which spacing reduced overlearning predicted the size of the spacing effect, which is consistent with the possibility that spacing increases subsequent recall by reducing the occurrence of overlearning. By this account, overlearning is an inefficient use of study time, and the efficacy of spacing depends at least partly on the degree to which it reduces the occurrence of overlearning.",
"title": ""
},
{
"docid": "a636f977eb29b870cefe040f3089de44",
"text": "We consider the network implications of virtual reality (VR) and augmented reality (AR). While there are intrinsic challenges for AR/VR applications to deliver on their promise, their impact on the underlying infrastructure will be undeniable. We look at augmented and virtual reality and consider a few use cases where they could be deployed. These use cases define a set of requirements for the underlying network. We take a brief look at potential network architectures. We then make the case for Information-centric networks as a potential architecture to assist the deployment of AR/VR and draw a list of challenges and future research directions for next generation networks to better support AR/VR.",
"title": ""
},
{
"docid": "3550dbe913466a675b621d476baba219",
"text": "Successful implementing and managing of change is urgently necessary for each adult educational organization. During the process, leading of the staff is becoming a key condition and the most significant factor. Beside certain personal traits of the leader, change management demands also certain leadership knowledges, skills, versatilities and behaviour which may even border on changing the organizational culture. The paper finds the significance of certain values and of organizational climate and above all the significance of leadership style which a leader will adjust to the staff and to the circumstances. The author presents a multiple qualitative case study of managing change in three adult educational organizations. The paper finds that factors of successful leading of change exist which represent an adequate approach to leading the staff during the introduction of changes in educational organizations. Its originality/value is in providing information on the important relationship between culture, leadership styles and leader’s behaviour as preconditions for successful implementing and managing of strategic change.",
"title": ""
},
{
"docid": "be079999e630df22254e7aa8a9ecdcae",
"text": "Strokes are one of the leading causes of death and disability in the UK. There are two main types of stroke: ischemic and hemorrhagic, with the majority of stroke patients suffering from the former. During an ischemic stroke, parts of the brain lose blood supply, and if not treated immediately, can lead to irreversible tissue damage and even death. Ischemic lesions can be detected by diffusion weighted magnetic resonance imaging (DWI), but localising and quantifying these lesions can be a time consuming task for clinicians. Work has already been done in training neural networks to segment these lesions, but these frameworks require a large amount of manually segmented 3D images, which are very time consuming to create. We instead propose to use past examinations of stroke patients which consist of DWIs, corresponding radiological reports and diagnoses in order to develop a learning framework capable of localising lesions. This is motivated by the fact that the reports summarise the presence, type and location of the ischemic lesion for each patient, and thereby provide more context than a single diagnostic label. Acute lesions prediction is aided by an attention mechanism which implicitly learns which regions within the DWI are most relevant to the classification.",
"title": ""
}
] | scidocsrr |
4e97169528430631823341734e2375ec | Rich Image Captioning in the Wild | [
{
"docid": "6a1e614288a7977b72c8037d9d7725fb",
"text": "We introduce the dense captioning task, which requires a computer vision system to both localize and describe salient regions in images in natural language. The dense captioning task generalizes object detection when the descriptions consist of a single word, and Image Captioning when one predicted region covers the full image. To address the localization and description task jointly we propose a Fully Convolutional Localization Network (FCLN) architecture that processes an image with a single, efficient forward pass, requires no external regions proposals, and can be trained end-to-end with a single round of optimization. The architecture is composed of a Convolutional Network, a novel dense localization layer, and Recurrent Neural Network language model that generates the label sequences. We evaluate our network on the Visual Genome dataset, which comprises 94,000 images and 4,100,000 region-grounded captions. We observe both speed and accuracy improvements over baselines based on current state of the art approaches in both generation and retrieval settings.",
"title": ""
},
{
"docid": "30260d1a4a936c79e6911e1e91c3a84a",
"text": "Two recent approaches have achieved state-of-the-art results in image captioning. The first uses a pipelined process where a set of candidate words is generated by a convolutional neural network (CNN) trained on images, and then a maximum entropy (ME) language model is used to arrange these words into a coherent sentence. The second uses the penultimate activation layer of the CNN as input to a recurrent neural network (RNN) that then generates the caption sequence. In this paper, we compare the merits of these different language modeling approaches for the first time by using the same state-ofthe-art CNN as input. We examine issues in the different approaches, including linguistic irregularities, caption repetition, and data set overlap. By combining key aspects of the ME and RNN methods, we achieve a new record performance over previously published results on the benchmark COCO dataset. However, the gains we see in BLEU do not translate to human judgments.",
"title": ""
}
] | [
{
"docid": "3a7a7fa5e41a6195ca16f172b72f89a1",
"text": "To integrate unpredictable human behavior in the assessment of active and passive pedestrian safety systems, we introduce a virtual reality (VR)-based pedestrian simulation system. The device uses the Xsens Motion Capture platform and can be used without additional infrastructure. To show the systems applicability for pedestrian behavior studies, we conducted a pilot study evaluating the degree of realism such a system can achieve in a typical unregulated pedestrian crossing scenario. Six participants had to estimate vehicle speeds and distances in four scenarios with varying gaps between vehicles. First results indicate an acceptable level of realism so that the device can be used for further user studies addressing pedestrian behavior, pedestrian interaction with (automated) vehicles, risk assessment and investigation of the pre-crash phase without the risk of injuries.",
"title": ""
},
{
"docid": "88cf953ba92b54f89cdecebd4153bee3",
"text": "In this paper, we propose a novel object detection framework named \"Deep Regionlets\" by establishing a bridge between deep neural networks and conventional detection schema for accurate generic object detection. Motivated by the abilities of regionlets for modeling object deformation and multiple aspect ratios, we incorporate regionlets into an end-to-end trainable deep learning framework. The deep regionlets framework consists of a region selection network and a deep regionlet learning module. Specifically, given a detection bounding box proposal, the region selection network provides guidance on where to select regions to learn the features from. The regionlet learning module focuses on local feature selection and transformation to alleviate local variations. To this end, we first realize non-rectangular region selection within the detection framework to accommodate variations in object appearance. Moreover, we design a “gating network\" within the regionlet leaning module to enable soft regionlet selection and pooling. The Deep Regionlets framework is trained end-to-end without additional efforts. We perform ablation studies and conduct extensive experiments on the PASCAL VOC and Microsoft COCO datasets. The proposed framework outperforms state-of-theart algorithms, such as RetinaNet and Mask R-CNN, even without additional segmentation labels.",
"title": ""
},
{
"docid": "b82c7c8f36ea16c29dfc5fa00a58b229",
"text": "Green cloud computing has become a major concern in both industry and academia, and efficient scheduling approaches show promising ways to reduce the energy consumption of cloud computing platforms while guaranteeing QoS requirements of tasks. Existing scheduling approaches are inadequate for realtime tasks running in uncertain cloud environments, because those approaches assume that cloud computing environments are deterministic and pre-computed schedule decisions will be statically followed during schedule execution. In this paper, we address this issue. We introduce an interval number theory to describe the uncertainty of the computing environment and a scheduling architecture to mitigate the impact of uncertainty on the task scheduling quality for a cloud data center. Based on this architecture, we present a novel scheduling algorithm (PRS) that dynamically exploits proactive and reactive scheduling methods, for scheduling real-time, aperiodic, independent tasks. To improve energy efficiency, we propose three strategies to scale up and down the system’s computing resources according to workload to improve resource utilization and to reduce energy consumption for the cloud data center. We conduct extensive experiments to compare PRS with four typical baseline scheduling algorithms. The experimental results show that PRS performs better than those algorithms, and can effectively improve the performance of a cloud data center.",
"title": ""
},
{
"docid": "215bb5273dbf5c301ae4170b5da39a34",
"text": "We describe a simple but effective method for cross-lingual syntactic transfer of dependency parsers, in the scenario where a large amount of translation data is not available. This method makes use of three steps: 1) a method for deriving cross-lingual word clusters, which can then be used in a multilingual parser; 2) a method for transferring lexical information from a target language to source language treebanks; 3) a method for integrating these steps with the density-driven annotation projection method of Rasooli and Collins (2015). Experiments show improvements over the state-of-the-art in several languages used in previous work, in a setting where the only source of translation data is the Bible, a considerably smaller corpus than the Europarl corpus used in previous work. Results using the Europarl corpus as a source of translation data show additional improvements over the results of Rasooli and Collins (2015). We conclude with results on 38 datasets from the Universal Dependencies corpora.",
"title": ""
},
{
"docid": "e2606242fcc89bfcf5c9c4cd71dd2c18",
"text": "This letter introduces the class of generalized punctured convolutional codes (GPCCs), which is broader than and encompasses the class of the standard punctured convolutional codes (PCCs). A code in this class can be represented by a trellis module, the GPCC trellis module, whose topology resembles that of the minimal trellis module. he GPCC trellis module for a PCC is isomorphic to the minimal trellis module. A list containing GPCCs with better distance spectrum than the best known PCCs with same code rate and trellis complexity is presented.",
"title": ""
},
{
"docid": "316e4fa32d0b000e6f833d146a9e0d80",
"text": "Magnetic equivalent circuits (MECs) are becoming an accepted alternative to electrical-equivalent lumped-parameter models and finite-element analysis (FEA) for simulating electromechanical devices. Their key advantages are moderate computational effort, reasonable accuracy, and flexibility in model size. MECs are easily extended into three dimensions. But despite the successful use of MEC as a modeling tool, a generalized 3-D formulation useable for a comprehensive computer-aided design tool has not yet emerged (unlike FEA, where general modeling tools are readily available). This paper discusses the framework of a 3-D MEC modeling approach, and presents the implementation of a variable-sized reluctance network distribution based on 3-D elements. Force calculation and modeling of moving objects are considered. Two experimental case studies, a soft-ferrite inductor and an induction machine, show promising results when compared to measurements and simulations of lumped parameter and FEA models.",
"title": ""
},
{
"docid": "b058bbc1485f99f37c0d72b960dd668b",
"text": "In two experiments short-term forgetting was investigated in a short-term cued recall task designed to examine proactive interference effects. Mixed modality study lists were tested at varying retention intervals using verbal and non-verbal distractor activities. When an interfering foil was read aloud and a target item read silently, strong PI effects were observed for both types of distractor activity. When the target was read aloud and followed by a verbal distractor activity, weak PI effects emerged. However, when a target item was read aloud and non-verbal distractor activity filled the retention interval, performance was immune to the effects of PI for at least eight seconds. The results indicate that phonological representations of items read aloud still influence performance after 15 seconds of distractor activity. Short-term Forgetting 3 Determinants of Short-term Forgetting: Decay, Retroactive Interference or Proactive Interference? Most current models of short-term memory assert that to-be-remembered items are represented in terms of easily degraded phonological representations. However, there is disagreement on how the traces become degraded. Some propose that trace degradation is due to decay brought about by the prevention of rehearsal (Baddeley, 1986; Burgess & Hitch, 1992; 1996), or a switch in attention (Cowan, 1993); others attribute degradation to retroactive interference (RI) from other list items (Nairne, 1990; Tehan & Fallon; in press; Tehan & Humphreys, 1998). We want to add proactive interference (PI) to the possible causes of short-term forgetting, and by showing how PI effects change as a function of the type of distractor task employed during a filled retention interval, we hope to evaluate the causes of trace degradation. By manipulating the type of distractor activity in a brief retention interval it is possible to test some of the assumptions about decay versus interference explanations of short-term forgetting. The decay position is quite straightforward. If rehearsal is prevented, then the trace should decay; the type of distractor activity should be immaterial as long as rehearsal is prevented. From the interference perspective both the Feature Model (Nairne, 1990) and the Tehan and Humphreys (1995,1998) connectionist model predict that there should be occasions where very little forgetting occurs. In the Feature Model items are represented as sets of modality dependent and modality independent features. Forgetting occurs when adjacent list items have common features. Some of the shared features of the first item are overwritten by the latter item, thereby producing a trace that bears only partial resemblance to the Short-term Forgetting 4 original item. One occasion in which interference would be minimized is when an auditory list is followed by a non-auditory distractor task. The modality dependent features of the list items would not be overwritten or degraded by the distractor activity because the modality dependent features of the list and distractor items are different to each other. By the same logic, a visually presented list should not be affected by an auditory distractor task, since modality specific features are again different in each case. In the Tehan and Humphreys (1995) approach, presentation modality is related to the strength of phonological representations that support recall. They assume that auditory activity produces stronger representations than does visual activity. 
Thus this model also predicts that when a list is presented auditorially, it will not be much affected by subsequent non-auditory distractor activity. However, in the case of a visual list with auditory distraction, the assumption would be that interference would be maximised. The phonological codes for the list items would be relatively weak in the first instance and a strong source of auditory retroactive interference follows. This prediction is the opposite of that derived from the Feature Model. Since PI effects appear to be sensitive to retention interval effects (Tehan & Humphreys, 1995; Wickens, Moody & Dow, 1981), we have chosen to employ a PI task to explore these differential predictions. We have recently developed a short-term cued recall task in which PI can easily be manipulated (Tehan & Humphreys, 1995; 1996; 1998). In this task, participants study a series of trials in which items are presented in blocks of four items with each trial consisting of either one or two blocks. Each trial has a target item that is an instance of either a taxonomic or rhyme category, and the category label is presented at test as a retrieval cue. The two-block trials are the important trials Short-term Forgetting 5 because it is in these trials that PI is manipulated. In these trials the two blocks are presented under directed forgetting instructions. That is, once participants find out that it is a two-block trial they are to forget the first block and remember the second block because the second block contains the target item. On control trials, all nontarget items in both blocks are unrelated to the target. On interference trials, a foil that is related to the target is embedded among three other to-be-forgotten fillers in the first block and the target is embedded among three unrelated filler items in the second block. Following the presentation of the second block the category cue is presented and subjects are asked to recall the word from the second block that is an instance of that category. Using this task we have been able to show that when taxonomic categories are used on an immediate test (e.g., dog is the foil, cat is the target and ANIMAL is the cue), performance is immune to PI. However, when recall is tested after a 2-second filled retention interval, PI effects are observed; target recall is depressed and the foil is often recalled instead of the target. In explaining these results, Tehan and Humphreys (1995) assumed that items were represented in terms of sets of features. The representation of an item was seen to involve both semantic and phonological features, with the phonological features playing a dominant role in item recall. They assumed that the cue would elicit the representations of the two items in the list, and that while the semantic features of both target and foil would be available, only the target would have active phonological features. Thus on an immediate test, knowing that the target ended in -at would make the task of discriminating between cat and dog relatively easy. On a delayed test they assumed that all phonological features were inactive and the absence of phonological information would make discrimination more difficult. Short-term Forgetting 6 A corollary of the Tehan and Humphreys (1995) assumption is that if phonological codes could be provided for a non-rhyming foil, then discrimination should again be problematic. 
Presentation modality is one variable that appears to produce differences in strength of phonological codes with reading aloud producing stronger representations than reading silently. Tehan and Humphreys (Experiment 5) varied the modality of the two blocks such that participants either read the first block silently and then read the second block aloud or vice versa. In the silent aloud condition performance was immune to PI. The assumption was that the phonological representation of the target item in the second block was very strong with the result that there were no problems in discrimination. However, PI effects were present in the aloud-silent condition. The phonological representation of the read-aloud foil appeared to serve as a strong source of competition to the read-silently target item. All the above research has been based on the premise that phonological representations for visually presented items are weak and rapidly lose their ability to support recall. This assumption seems tenable given that phonological similarity effects and phonological intrusion effects in serial recall are attenuated rapidly with brief periods of distractor activity (Conrad, 1967; Estes, 1973; Tehan & Humphreys, 1995). The cued recall experiments that have used a filled retention interval have always employed silent visual presentation of the study list and required spoken shadowing of the distractor items. That is, the phonological representations of both target and foil are assumed to be quite weak and the shadowing task would provide a strong source of interference. These are likely to be the conditions that produce maximum levels of PI. The patterns of PI may change with mixed modality study lists and alternative forms of distractor activity. For example, given a strong phonological representation of the target, weak representations of the foil and a weak source of Short-term Forgetting 7 retroactive interference, it might be possible to observe immunity to PI on a delayed test. The following experiments explore the relationship between presentation modality, distractor modality and PI Experiment 1 The Tehan and Humphreys (1995) mixed modality experiment indicated that PI effects were sensitive to the modalities of the first and second block of items. In the current study we use mixed modality study lists but this time include a two-second retention interval, the same as that used by Tehan and Humphreys. However, the modality of the distractor activity was varied as well. Participants either had to respond aloud verbally or make a manual response that did not involve any verbal output. From the Tehan and Humphreys perspective the assumption made is that the verbal distractor activity will produce more disruption to the phonological representation of the target item than will a non-verbal distractor activity and the PI will be observed. However, it is quite possible that with silent-aloud presentation and a non-verbal distractor activity immunity to PI might be maintained across a twosecond retention interval. From the Nairne perspective, interfe",
"title": ""
},
{
"docid": "b1239f2e9bfec604ac2c9851c8785c09",
"text": "BACKGROUND\nDecoding neural activities associated with limb movements is the key of motor prosthesis control. So far, most of these studies have been based on invasive approaches. Nevertheless, a few researchers have decoded kinematic parameters of single hand in non-invasive ways such as magnetoencephalogram (MEG) and electroencephalogram (EEG). Regarding these EEG studies, center-out reaching tasks have been employed. Yet whether hand velocity can be decoded using EEG recorded during a self-routed drawing task is unclear.\n\n\nMETHODS\nHere we collected whole-scalp EEG data of five subjects during a sequential 4-directional drawing task, and employed spatial filtering algorithms to extract the amplitude and power features of EEG in multiple frequency bands. From these features, we reconstructed hand movement velocity by Kalman filtering and a smoothing algorithm.\n\n\nRESULTS\nThe average Pearson correlation coefficients between the measured and the decoded velocities are 0.37 for the horizontal dimension and 0.24 for the vertical dimension. The channels on motor, posterior parietal and occipital areas are most involved for the decoding of hand velocity. By comparing the decoding performance of the features from different frequency bands, we found that not only slow potentials in 0.1-4 Hz band but also oscillatory rhythms in 24-28 Hz band may carry the information of hand velocity.\n\n\nCONCLUSIONS\nThese results provide another support to neural control of motor prosthesis based on EEG signals and proper decoding methods.",
"title": ""
},
{
"docid": "1fb87bc370023dc3fdfd9c9097288e71",
"text": "Protein is essential for living organisms, but digestibility of crude protein is poorly understood and difficult to predict. Nitrogen is used to estimate protein content because nitrogen is a component of the amino acids that comprise protein, but a substantial portion of the nitrogen in plants may be bound to fiber in an indigestible form. To estimate the amount of crude protein that is unavailable in the diets of mountain gorillas (Gorilla beringei) in Bwindi Impenetrable National Park, Uganda, foods routinely eaten were analyzed to determine the amount of nitrogen bound to the acid-detergent fiber residue. The amount of fiber-bound nitrogen varied among plant parts: herbaceous leaves 14.5+/-8.9% (reported as a percentage of crude protein on a dry matter (DM) basis), tree leaves (16.1+/-6.7% DM), pith/herbaceous peel (26.2+/-8.9% DM), fruit (34.7+/-17.8% DM), bark (43.8+/-15.6% DM), and decaying wood (85.2+/-14.6% DM). When crude protein and available protein intake of adult gorillas was estimated over a year, 15.1% of the dietary crude protein was indigestible. These results indicate that the proportion of fiber-bound protein in primate diets should be considered when estimating protein intake, food selection, and food/habitat quality.",
"title": ""
},
{
"docid": "60e56a59ecbdee87005407ed6a117240",
"text": "The visionary Steve Jobs said, “A lot of times, people don’t know what they want until you show it to them.” A powerful recommender system not only shows people similar items, but also helps them discover what they might like, and items that complement what they already purchased. In this paper, we attempt to instill a sense of “intention” and “style” into our recommender system, i.e., we aim to recommend items that are visually complementary with those already consumed. By identifying items that are visually coherent with a query item/image, our method facilitates exploration of the long tail items, whose existence users may be even unaware of. This task is formulated only recently by Julian et al. [1], with the input being millions of item pairs that are frequently viewed/bought together, entailing noisy style coherence. In the same work, the authors proposed a Mahalanobisbased transform to discriminate a given pair to be sharing a same style or not. Despite its success, we experimentally found that it’s only able to recommend items on the margin of different clusters, which leads to limited coverage of the items to be recommended. Another limitation is it totally ignores the existence of taxonomy information that is ubiquitous in many datasets like Amazon the authors experimented with. In this report, we propose two novel methods that make use of the hierarchical category metadata to overcome the limitations identified above. The main contributions are listed as following.",
"title": ""
},
{
"docid": "0c420c064519e15e071660c750c0b7e3",
"text": "In this paper, we consider the feature ranking problem, where, given a set of training instances, the task is to associate a score with the features in order to assess their relevance. Feature ranking is a very important tool for decision support systems, and may be used as an auxiliary step of feature selection to reduce the high dimensionality of real-world data. We focus on regression problems by assuming that the process underlying the generated data can be approximated by a continuous function (for instance, a feedforward neural network). We formally state the notion of relevance of a feature by introducing a minimum zero-norm inversion problem of a neural network, which is a nonsmooth, constrained optimization problem. We employ a concave approximation of the zero-norm function, and we define a smooth, global optimization problem to be solved in order to assess the relevance of the features. We present the new feature ranking method based on the solution of instances of the global optimization problem depending on the available training data. Computational experiments on both artificial and real data sets are performed, and point out that the proposed feature ranking method is a valid alternative to existing methods in terms of effectiveness. The obtained results also show that the method is costly in terms of CPU time, and this may be a limitation in the solution of large-dimensional problems.",
"title": ""
},
{
"docid": "4ca7e1893c0ab71d46af4954f7daf58e",
"text": "Identifying coordinate transformations that make strongly nonlinear dynamics approximately linear has the potential to enable nonlinear prediction, estimation, and control using linear theory. The Koopman operator is a leading data-driven embedding, and its eigenfunctions provide intrinsic coordinates that globally linearize the dynamics. However, identifying and representing these eigenfunctions has proven challenging. This work leverages deep learning to discover representations of Koopman eigenfunctions from data. Our network is parsimonious and interpretable by construction, embedding the dynamics on a low-dimensional manifold. We identify nonlinear coordinates on which the dynamics are globally linear using a modified auto-encoder. We also generalize Koopman representations to include a ubiquitous class of systems with continuous spectra. Our framework parametrizes the continuous frequency using an auxiliary network, enabling a compact and efficient embedding, while connecting our models to decades of asymptotics. Thus, we benefit from the power of deep learning, while retaining the physical interpretability of Koopman embeddings. It is often advantageous to transform a strongly nonlinear system into a linear one in order to simplify its analysis for prediction and control. Here the authors combine dynamical systems with deep learning to identify these hard-to-find transformations.",
"title": ""
},
{
"docid": "eeff1f2e12e5fc5403be8c2d7ca4d10c",
"text": "Optical Character Recognition (OCR) systems have been effectively developed for the recognition of printed script. The accuracy of OCR system mainly depends on the text preprocessing and segmentation algorithm being used. When the document is scanned it can be placed in any arbitrary angle which would appear on the computer monitor at the same angle. This paper addresses the algorithm for correction of skew angle generated in scanning of the text document and a novel profile based method for segmentation of printed text which separates the text in document image into lines, words and characters. Keywords—Skew correction, Segmentation, Text preprocessing, Horizontal Profile, Vertical Profile.",
"title": ""
},
{
"docid": "ce8914e02eeed8fb228b5b2950cf87de",
"text": "Different alternatives to detect and diagnose faults in induction machines have been proposed and implemented in the last years. The technology of artificial neural networks has been successfully used to solve the motor incipient fault detection problem. The characteristics, obtained by this technique, distinguish them from the traditional ones, which, in most cases, need that the machine which is being analyzed is not working to do the diagnosis. This paper reviews an artificial neural network (ANN) based technique to identify rotor faults in a three-phase induction motor. The main types of faults considered are broken bar and dynamic eccentricity. At light load, it is difficult to distinguish between healthy and faulty rotors because the characteristic broken rotor bar fault frequencies are very close to the fundamental component and their amplitudes are small in comparison. As a result, detection of the fault and classification of the fault severity under light load is almost impossible. In order to overcome this problem, the detection of rotor faults in induction machines is done by analysing the starting current using a newly developed quantification technique based on artificial neural networks.",
"title": ""
},
{
"docid": "33b4ba89053ed849d23758f6e3b06b09",
"text": "We develop a deep architecture to learn to find good correspondences for wide-baseline stereo. Given a set of putative sparse matches and the camera intrinsics, we train our network in an end-to-end fashion to label the correspondences as inliers or outliers, while simultaneously using them to recover the relative pose, as encoded by the essential matrix. Our architecture is based on a multi-layer perceptron operating on pixel coordinates rather than directly on the image, and is thus simple and small. We introduce a novel normalization technique, called Context Normalization, which allows us to process each data point separately while embedding global information in it, and also makes the network invariant to the order of the correspondences. Our experiments on multiple challenging datasets demonstrate that our method is able to drastically improve the state of the art with little training data.",
"title": ""
},
{
"docid": "2aae53713324b297f0e145ef8d808ce9",
"text": "In this paper some theoretical and (potentially) practical aspects of quantum computing are considered. Using the tools of transcendental number theory it is demonstrated that quantum Turing machines (QTM) with rational amplitudes are sufficient to define the class of bounded error quantum polynomial time (BQP) introduced by Bernstein and Vazirani [Proc. 25th ACM Symposium on Theory of Computation, 1993, pp. 11–20, SIAM J. Comput., 26 (1997), pp. 1411–1473]. On the other hand, if quantum Turing machines are allowed unrestricted amplitudes (i.e., arbitrary complex amplitudes), then the corresponding BQP class has uncountable cardinality and contains sets of all Turing degrees. In contrast, allowing unrestricted amplitudes does not increase the power of computation for error-free quantum polynomial time (EQP). Moreover, with unrestricted amplitudes, BQP is not equal to EQP. The relationship between quantum complexity classes and classical complexity classes is also investigated. It is shown that when quantum Turing machines are restricted to have transition amplitudes which are algebraic numbers, BQP, EQP, and nondeterministic quantum polynomial time (NQP) are all contained in PP, hence in P#P and PSPACE. A potentially practical issue of designing “machine independent” quantum programs is also addressed. A single (“almost universal”) quantum algorithm based on Shor’s method for factoring integers is developed which would run correctly on almost all quantum computers, even if the underlying unitary transformations are unknown to the programmer and the device builder.",
"title": ""
},
{
"docid": "f617b8b5c2c5fc7829cbcd0b2e64ed2d",
"text": "This paper proposes a novel lifelong learning (LL) approach to sentiment classification. LL mimics the human continuous learning process, i.e., retaining the knowledge learned from past tasks and use it to help future learning. In this paper, we first discuss LL in general and then LL for sentiment classification in particular. The proposed LL approach adopts a Bayesian optimization framework based on stochastic gradient descent. Our experimental results show that the proposed method outperforms baseline methods significantly, which demonstrates that lifelong learning is a promising research direction.",
"title": ""
},
{
"docid": "925709dfe0d0946ca06d05b290f2b9bd",
"text": "Mentalization, operationalized as reflective functioning (RF), can play a crucial role in the psychological mechanisms underlying personality functioning. This study aimed to: (a) study the association between RF, personality disorders (cluster level) and functioning; (b) investigate whether RF and personality functioning are influenced by (secure vs. insecure) attachment; and (c) explore the potential mediating effect of RF on the relationship between attachment and personality functioning. The Shedler-Westen Assessment Procedure (SWAP-200) was used to assess personality disorders and levels of psychological functioning in a clinical sample (N = 88). Attachment and RF were evaluated with the Adult Attachment Interview (AAI) and Reflective Functioning Scale (RFS). Findings showed that RF had significant negative associations with cluster A and B personality disorders, and a significant positive association with psychological functioning. Moreover, levels of RF and personality functioning were influenced by attachment patterns. Finally, RF completely mediated the relationship between (secure/insecure) attachment and adaptive psychological features, and thus accounted for differences in overall personality functioning. Lack of mentalization seemed strongly associated with vulnerabilities in personality functioning, especially in patients with cluster A and B personality disorders. These findings provide support for the development of therapeutic interventions to improve patients' RF.",
"title": ""
},
{
"docid": "9a1d6be6fbce508e887ee4e06a932cd2",
"text": "For ranked search in encrypted cloud data, order preserving encryption (OPE) is an efficient tool to encrypt relevance scores of the inverted index. When using deterministic OPE, the ciphertexts will reveal the distribution of relevance scores. Therefore, Wang et al. proposed a probabilistic OPE, called one-to-many OPE, for applications of searchable encryption, which can flatten the distribution of the plaintexts. In this paper, we proposed a differential attack on one-to-many OPE by exploiting the differences of the ordered ciphertexts. The experimental results show that the cloud server can get a good estimate of the distribution of relevance scores by a differential attack. Furthermore, when having some background information on the outsourced documents, the cloud server can accurately infer the encrypted keywords using the estimated distributions.",
"title": ""
},
{
"docid": "460e8daf5dfc9e45c3ade5860aa9cc57",
"text": "Combining deep model-free reinforcement learning with on-line planning is a promising approach to building on the successes of deep RL. On-line planning with look-ahead trees has proven successful in environments where transition models are known a priori. However, in complex environments where transition models need to be learned from data, the deficiencies of learned models have limited their utility for planning. To address these challenges, we propose TreeQN, a differentiable, recursive, tree-structured model that serves as a drop-in replacement for any value function network in deep RL with discrete actions. TreeQN dynamically constructs a tree by recursively applying a transition model in a learned abstract state space and then aggregating predicted rewards and state-values using a tree backup to estimate Q-values. We also propose ATreeC, an actor-critic variant that augments TreeQN with a softmax layer to form a stochastic policy network. Both approaches are trained end-to-end, such that the learned model is optimised for its actual use in the planner. We show that TreeQN and ATreeC outperform n-step DQN and A2C on a box-pushing task, as well as n-step DQN and value prediction networks (Oh et al., 2017) on multiple Atari games, with deeper trees often outperforming shallower ones. We also present a qualitative analysis that sheds light on the trees learned by TreeQN.",
"title": ""
}
] | scidocsrr |
34105146cfbde5353c1ec63e2112fcfb | Multi-Label Learning with Posterior Regularization | [
{
"docid": "b796a957545aa046bad14d44c4578700",
"text": "Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at k of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method both outperforms several baseline methods and, in comparison to them, is faster and consumes less memory. We also demonstrate how our method learns an interpretable model, where annotations with alternate spellings or even languages are close in the embedding space. Hence, even when our model does not predict the exact annotation given by a human labeler, it often predicts similar annotations, a fact that we try to quantify by measuring the newly introduced “sibling” precision metric, where our method also obtains excellent results.",
"title": ""
},
{
"docid": "f59a7b518f5941cd42086dc2fe58fcea",
"text": "This paper contributes a novel algorithm for effective and computationally efficient multilabel classification in domains with large label sets L. The HOMER algorithm constructs a Hierarchy Of Multilabel classifiERs, each one dealing with a much smaller set of labels compared to L and a more balanced example distribution. This leads to improved predictive performance along with linear training and logarithmic testing complexities with respect to |L|. Label distribution from parent to children nodes is achieved via a new balanced clustering algorithm, called balanced k means.",
"title": ""
}
] | [
{
"docid": "4d7c0222317fbd866113e1a244a342f3",
"text": "A simple method of \"tuning up\" a multiple-resonant-circuit filter quickly and exactly is demonstrated. The method may be summarized as follows: Very loosely couple a detector to the first resonator of the filter; then, proceeding in consecutive order, tune all odd-numbered resonators for maximum detector output, and all even-numbered resonators for minimum detector output (always making sure that the resonator immediately following the one to be resonated is completely detuned). Also considered is the correct adjustment of the two other types of constants in a filter. Filter constants can always be reduced to only three fundamental types: f0, dr(1/Qr), and Kr(r+1). This is true whether a lumped-element 100-kc filter or a distributed-element 5,000-mc unit is being considered. dr is adjusted by considering the rth resonator as a single-tuned circuit (all other resonators completely detuned) and setting the bandwidth between the 3-db-down-points to the required value. Kr(r+1) is adjusted by considering the rth and (r+1)th adjacent resonators as a double-tuned circuit (all other resonators completely detuned) and setting the bandwidth between the resulting response peaks to the required value. Finally, all the required values for K and Q are given for an n-resonant-circuit filter that will produce the response (Vp/V)2=1 +(Δf/Δf3db)2n.",
"title": ""
},
{
"docid": "36ed684e39877873407efb809f3cd1dc",
"text": "A methodology to obtain wideband scattering diffusion based on periodic artificial surfaces is presented. The proposed surfaces provide scattering towards multiple propagation directions across an extremely wide frequency band. They comprise unit cells with an optimized geometry and arranged in a periodic lattice characterized by a repetition period larger than one wavelength which induces the excitation of multiple Floquet harmonics. The geometry of the elementary unit cell is optimized in order to minimize the reflection coefficient of the fundamental Floquet harmonic over a wide frequency band. The optimization of FSS geometry is performed through a genetic algorithm in conjunction with periodic Method of Moments. The design method is verified through full-wave simulations and measurements. The proposed solution guarantees very good performance in terms of bandwidth-thickness ratio and removes the need of a high-resolution printing process.",
"title": ""
},
{
"docid": "4dcdb2520ec5f9fc9c32f2cbb343808c",
"text": "Shannon’s mathematical theory of communication defines fundamental limits on how much information can be transmitted between the different components of any man-made or biological system. This paper is an informal but rigorous introduction to the main ideas implicit in Shannon’s theory. An annotated reading list is provided for further reading.",
"title": ""
},
{
"docid": "d395193924613f6818511650d24cf9ae",
"text": "Assortment planning of substitutable products is a major operational issue that arises in many industries, such as retailing, airlines and consumer electronics. We consider a single-period joint assortment and inventory planning problem under dynamic substitution with stochastic demands, and provide complexity and algorithmic results as well as insightful structural characterizations of near-optimal solutions for important variants of the problem. First, we show that the assortment planning problem is NP-hard even for a very simple consumer choice model, where each customer is willing to buy only two products. In fact, we show that the problem is hard to approximate within a factor better than 1− 1/e. Secondly, we show that for several interesting and practical choice models, one can devise a polynomial-time approximation scheme (PTAS), i.e., the problem can be solved efficiently to within any level of accuracy. To the best of our knowledge, this is the first efficient algorithm with provably near-optimal performance guarantees for assortment planning problems under dynamic substitution. Quite surprisingly, the algorithm we propose stocks only a constant number of different product types; this constant depends only on the desired accuracy level. This provides an important managerial insight that assortments with a relatively small number of product types can obtain almost all of the potential revenue. Furthermore, we show that our algorithm can be easily adapted for more general choice models, and present numerical experiments to show that it performs significantly better than other known approaches.",
"title": ""
},
{
"docid": "2e5981a41d13ee2d588ee0e9fe04e1ec",
"text": "Malicious software (malware) has been extensively employed for illegal purposes and thousands of new samples are discovered every day. The ability to classify samples with similar characteristics into families makes possible to create mitigation strategies that work for a whole class of programs. In this paper, we present a malware family classification approach using VGG16 deep neural network’s bottleneck features. Malware samples are represented as byteplot grayscale images and the convolutional layers of a VGG16 deep neural network pre-trained on the ImageNet dataset is used for bottleneck features extraction. These features are used to train a SVM classifier for the malware family classification task. The experimental results on a dataset comprising 10,136 samples from 20 different families showed that our approach can effectively be used to classify malware families with an accuracy of 92.97%, outperforming similar approaches proposed in the literature which require feature engineering and considerable domain expertise.",
"title": ""
},
{
"docid": "a5ee673c895bac1a616bb51439461f5f",
"text": "OBJECTIVES\nTo summarise logistical aspects of recently completed systematic reviews that were registered in the International Prospective Register of Systematic Reviews (PROSPERO) registry to quantify the time and resources required to complete such projects.\n\n\nDESIGN\nMeta-analysis.\n\n\nDATA SOURCES AND STUDY SELECTION\nAll of the 195 registered and completed reviews (status from the PROSPERO registry) with associated publications at the time of our search (1 July 2014).\n\n\nDATA EXTRACTION\nAll authors extracted data using registry entries and publication information related to the data sources used, the number of initially retrieved citations, the final number of included studies, the time between registration date to publication date and number of authors involved for completion of each publication. Information related to funding and geographical location was also recorded when reported.\n\n\nRESULTS\nThe mean estimated time to complete the project and publish the review was 67.3 weeks (IQR=42). The number of studies found in the literature searches ranged from 27 to 92 020; the mean yield rate of included studies was 2.94% (IQR=2.5); and the mean number of authors per review was 5, SD=3. Funded reviews took significantly longer to complete and publish (mean=42 vs 26 weeks) and involved more authors and team members (mean=6.8 vs 4.8 people) than those that did not report funding (both p<0.001).\n\n\nCONCLUSIONS\nSystematic reviews presently take much time and require large amounts of human resources. In the light of the ever-increasing volume of published studies, application of existing computing and informatics technology should be applied to decrease this time and resource burden. We discuss recently published guidelines that provide a framework to make finding and accessing relevant literature less burdensome.",
"title": ""
},
{
"docid": "3613ae9cfcadee0053a270fe73c6e069",
"text": "Depth-map merging approaches have become more and more popular in multi-view stereo (MVS) because of their flexibility and superior performance. The quality of depth map used for merging is vital for accurate 3D reconstruction. While traditional depth map estimation has been performed in a discrete manner, we suggest the use of a continuous counterpart. In this paper, we first integrate silhouette information and epipolar constraint into the variational method for continuous depth map estimation. Then, several depth candidates are generated based on a multiple starting scales (MSS) framework. From these candidates, refined depth maps for each view are synthesized according to path-based NCC (normalized cross correlation) metric. Finally, the multiview depth maps are merged to produce 3D models. Our algorithm excels at detail capture and produces one of the most accurate results among the current algorithms for sparse MVS datasets according to the Middlebury benchmark. Additionally, our approach shows its outstanding robustness and accuracy in free-viewpoint video scenario.",
"title": ""
},
{
"docid": "eb9459d0eb18f0e49b3843a6036289f9",
"text": "Experimental research has had a long tradition in psychology and education. When psychology emerged as an infant science during the 1900s, it modeled its research methods on the established paradigms of the physical sciences, which for centuries relied on experimentation to derive principals and laws. Subsequent reliance on experimental approaches was strengthened by behavioral approaches to psychology and education that predominated during the first half of this century. Thus, usage of experimentation in educational technology over the past 40 years has been influenced by developments in theory and research practices within its parent disciplines. In this chapter, we examine practices, issues, and trends related to the application of experimental research methods in educational technology. The purpose is to provide readers with sufficient background to understand and evaluate experimental designs encountered in the literature and to identify designs that will effectively address questions of interest in their own research. In an introductory section, we define experimental research, differentiate it from alternative approaches, and identify important concepts in its use (e.g., internal vs. external validity). We also suggest procedures for conducting experimental studies and publishing them in educational technology research journals. Next, we analyze uses of experimental methods by instructional researchers, extending the analyses of three decades ago by Clark and Snow (1975). In the concluding section, we turn to issues in using experimental research in educational technology, to include balancing internal and external validity, using multiple outcome measures to assess learning processes and products, using item responses vs. aggregate scores as dependent variables, reporting effect size as a complement to statistical significance, and media replications vs. media comparisons.",
"title": ""
},
{
"docid": "d2c4693856ae88c3c49b5fc7c4a7baf7",
"text": "In Jesuit universities, laypersons, who come from the same or different faith backgrounds or traditions, are considered as collaborators in mission. The Jesuits themselves support the contributions of the lay partners in realizing the mission of the Society of Jesus and recognize the important role that they play in education. This study aims to investigate and generate particular notions and understandings of lived experiences of being a lay partner in Jesuit universities in the Philippines, particularly those involved in higher education. Using the qualitative approach as introduced by grounded theorist Barney Glaser, the lay partners’ concept of being a partner, as lived in higher education, is generated systematically from the data collected in the field primarily through in-depth interviews, field notes and observations. Glaser’s constant comparative method of analysis of data is used going through the phases of open coding, theoretical coding, and selective coding from memoing to theoretical sampling to sorting and then writing. In this study, Glaser’s grounded theory as a methodology will provide a substantial insight into and articulation of the layperson’s actual experience of being a partner of the Jesuits in education. Such articulation provides a phenomenological approach or framework to an understanding of the meaning and core characteristics of JesuitLay partnership in Jesuit educational institution of higher learning in the country. This study is expected to provide a framework or model for lay partnership in academic institutions that have the same practice of having lay partners in mission. Keywords—Grounded theory, Jesuit mission in higher education, lay partner, lived experience. I. BACKGROUND AND INTRODUCTION HE Second Vatican Council document of the Roman Catholic Church establishes and defines the vocation and mission of lay members of the Church. It says that regardless of status, “all laypersons are called and obliged to engage in the apostolate of being laborers in the vineyard of the Lord, the world, to serve the Kingdom of God” [1, par.16]. Christifideles Laici, a post-synodal apostolic exhortation of Pope John Paul II, renews and reaffirms this same apostolic role of lay people in the Catholic Church saying that “[t]he call is a concern not only of Pastors, clergy, and men and women religious. The call is addressed to everyone: lay people as well are personally called by the Lord, from whom they receive a mission on behalf of the Church and the world” [2, par.2]. Catholic universities, “being born from the heart of the Church” [2, p.1] follow the same orientation and mission in affirming the apostolic roles that lay men and women could exercise in sharing with the works of the church on deepening faith and spirituality [3, par.25]. Janet Badong-Badilla is with the De La Salle University, Philippines (email: janet.badilla@yahoo.com). In Jesuit Catholic universities, the laypersons’ sense of mission and passion is recognized. The Jesuits say that “the call they have received is a call shared by them all together, Jesuits and lay” [4, par. 3]. Lay-Jesuit collaboration is in fact among the 28 distinctive characteristics of Jesuit education (CJE) and a positive goal that a Jesuit school tries to achieve in response to the Second Vatican Council and to recent General Congregations of the Society of Jesus [5]. In the Philippines, there are five Jesuit and Catholic universities that operate under the charism and educational principles of St. 
Ignatius of Loyola, the founder of the Society of Jesus. In a Jesuit university, the work in education is linked with Ignatian spirituality that inspires it [6, par. 13]. In managing human resources in a Jesuit school, the CJE document says that as much as the administration is able, “people chosen to join the educational community will be men and women capable of understanding its distinctive nature and of contributing to the implementation of characteristics that result from the Ignatian vision” [6, par. 122]. Laypersons in Jesuit universities, then, are expected to be able to share and carry on the kind of education that is based on the Ignatian tradition and spirituality. Fr. Pedro Arrupe, S.J., the former superior general of the Society of Jesus, in his closing session to the committee working on the document on the Characteristics of Jesuit Education, said that a Jesuit school, “if it is an authentic Jesuit school,” should manifest “Ignacianidad”: “...if our operation of the school flows out of the strengths drawn from our own specific charisma, if we emphasize our essential characteristics and our basic options then the education which our students receive should give them a certain \"Ignacianidad” [5, par. 3]. For Arrupe, Ignacianidad or the spirituality inspired by St. Ignatius is “a logical consequence of the fact that Jesuit schools live and operate out of its own charism” [5, par. 3]. Not only do the Jesuits support the contributions of lay partners in realizing the Society’s mission, but more importantly, they also recognize the powerful role that the lay partners in higher education play in the growth and revitalization of the congregation itself in the present time [7]. In an article in Conversations on Jesuit Higher Education, Fr. Howell writes: In a span of 50 years the Society of Jesus has been refounded. It is thriving. But it is thriving in a totally new and creative way. Its commitment to scholarship, for instance, is one of the strongest it has ever been, but carried out primarily through lay colleagues within the Jesuit university setting. Being a Lay Partner in Jesuit Higher Education in the Philippines: A Grounded Theory Application Janet B. Badong-Badilla T World Academy of Science, Engineering and Technology International Journal of Educational and Pedagogical Sciences",
"title": ""
},
{
"docid": "e04dda55d05d15e6a2fb3680a603bd43",
"text": "Multilayer perceptrons (MLPs) or neural networks are popular models used for nonlinear regression and classification tasks. As regressors, MLPs model the conditional distribution of the predictor variables Y given the input variables X . However, this predictive distribution is assumed to be unimodal (e.g. Gaussian). For tasks involving structured prediction, the conditional distribution should be multi-modal, resulting in one-to-many mappings. By using stochastic hidden variables rather than deterministic ones, Sigmoid Belief Nets (SBNs) can induce a rich multimodal distribution in the output space. However, previously proposed learning algorithms for SBNs are not efficient and unsuitable for modeling real-valued data. In this paper, we propose a stochastic feedforward network with hidden layers composed of both deterministic and stochastic variables. A new Generalized EM training procedure using importance sampling allows us to efficiently learn complicated conditional distributions. Our model achieves superior performance on synthetic and facial expressions datasets compared to conditional Restricted Boltzmann Machines and Mixture Density Networks. In addition, the latent features of our model improves classification and can learn to generate colorful textures of objects.",
"title": ""
},
{
"docid": "8452091115566adaad8a67154128dff8",
"text": "© The Ecological Society of America www.frontiersinecology.org T Millennium Ecosystem Assessment (MA) advanced a powerful vision for the future (MA 2005), and now it is time to deliver. The vision of the MA – and of the prescient ecologists and economists whose work formed its foundation – is a world in which people and institutions appreciate natural systems as vital assets, recognize the central roles these assets play in supporting human well-being, and routinely incorporate their material and intangible values into decision making. This vision is now beginning to take hold, fueled by innovations from around the world – from pioneering local leaders to government bureaucracies, and from traditional cultures to major corporations (eg a new experimental wing of Goldman Sachs; Daily and Ellison 2002; Bhagwat and Rutte 2006; Kareiva and Marvier 2007; Ostrom et al. 2007; Goldman et al. 2008). China, for instance, is investing over 700 billion yuan (about US$102.6 billion) in ecosystem service payments, in the current decade (Liu et al. 2008). The goal of the Natural Capital Project – a partnership between Stanford University, The Nature Conservancy, and World Wildlife Fund (www.naturalcapitalproject.org) – is to help integrate ecosystem services into everyday decision making around the world. This requires turning the valuation of ecosystem services into effective policy and finance mechanisms – a problem that, as yet, no one has solved on a large scale. A key challenge remains: relative to other forms of capital, assets embodied in ecosystems are often poorly understood, rarely monitored, and are undergoing rapid degradation (Heal 2000a; MA 2005; Mäler et al. 2008). The importance of ecosystem services is often recognized only after they have been lost, as was the case following Hurricane Katrina (Chambers et al. 2007). Natural capital, and the ecosystem services that flow from it, are usually undervalued – by governments, businesses, and the public – if indeed they are considered at all (Daily et al. 2000; Balmford et al. 2002; NRC 2005). Two fundamental changes need to occur in order to replicate, scale up, and sustain the pioneering efforts that are currently underway, to give ecosystem services weight in decision making. First, the science of ecosystem services needs to advance rapidly. In promising a return (of services) on investments in nature, the scientific community needs to deliver the knowledge and tools necessary to forecast and quantify this return. To help address this challenge, the Natural Capital Project has developed InVEST (a system for Integrated Valuation of Ecosystem ECOSYSTEM SERVICES ECOSYSTEM SERVICES ECOSYSTEM SERVICES",
"title": ""
},
{
"docid": "bcfc8566cf73ec7c002dcca671e3a0bd",
"text": "of the thoracic spine revealed a 1.1 cm intradural extramedullary mass at the level of the T2 vertebral body (Figure 1a). Spinal neurosurgery was planned due to exacerbation of her chronic back pain and progressive weakness of the lower limbs at 28 weeks ’ gestation. Emergent spinal decompression surgery was performed with gross total excision of the tumour. Doppler fl ow of the umbilical artery was used preoperatively and postoperatively to monitor fetal wellbeing. Th e histological examination revealed HPC, World Health Organization (WHO) grade 2 (Figure 1b). Complete recovery was seen within 1 week of surgery. Follow-up MRI demonstrated complete removal of the tumour. We recommended adjuvant external radiotherapy to the patient in the 3rd trimester of pregnancy due to HPC ’ s high risk of recurrence. However, the patient declined radiotherapy. Routine weekly obstetric assessments were performed following surgery. At the 37th gestational week, a 2,850 g, Apgar score 7 – 8, healthy infant was delivered by caesarean section, without need of admission to the neonatal intensive care unit. Adjuvant radiotherapy was administered to the patient in the postpartum period.",
"title": ""
},
{
"docid": "cd67a650969aa547cad8e825511c45c2",
"text": "We present DAPIP, a Programming-By-Example system that learns to program with APIs to perform data transformation tasks. We design a domainspecific language (DSL) that allows for arbitrary concatenations of API outputs and constant strings. The DSL consists of three family of APIs: regular expression-based APIs, lookup APIs, and transformation APIs. We then present a novel neural synthesis algorithm to search for programs in the DSL that are consistent with a given set of examples. The search algorithm uses recently introduced neural architectures to encode input-output examples and to model the program search in the DSL. We show that synthesis algorithm outperforms baseline methods for synthesizing programs on both synthetic and real-world benchmarks.",
"title": ""
},
{
"docid": "c0e4aa45a961aa69bc5c52e7cf7c889d",
"text": "CRM gains increasing importance due to intensive competition and saturated markets. With the purpose of retaining customers, academics as well as practitioners find it crucial to build a churn prediction model that is as accurate as possible. This study applies support vector machines in a newspaper subscription context in order to construct a churn model with a higher predictive performance. Moreover, a comparison is made between two parameter-selection techniques, needed to implement support vector machines. Both techniques are based on grid search and cross-validation. Afterwards, the predictive performance of both kinds of support vector machine models is benchmarked to logistic regression and random forests. Our study shows that support vector machines show good generalization performance when applied to noisy marketing data. Nevertheless, the parameter optimization procedure plays an important role in the predictive performance. We show that only when the optimal parameter selection procedure is applied, support vector machines outperform traditional logistic regression, whereas random forests outperform both kinds of support vector machines. As a substantive contribution, an overview of the most important churn drivers is given. Unlike ample research, monetary value and frequency do not play an important role in explaining churn in this subscription-services application. Even though most important churn predictors belong to the category of variables describing the subscription, the influence of several client/company-interaction variables can not be neglected.",
"title": ""
},
{
"docid": "864c2987092ca266b97ed11faec42aa3",
"text": "BACKGROUND\nAnxiety is the most common emotional response in women during delivery, which can be accompanied with adverse effects on fetus and mother.\n\n\nOBJECTIVES\nThis study was conducted to compare the effects of aromatherapy with rose oil and warm foot bath on anxiety in the active phase of labor in nulliparous women in Tehran, Iran.\n\n\nPATIENTS AND METHODS\nThis clinical trial study was performed after obtaining informed written consent on 120 primigravida women randomly assigned into three groups. The experimental group 1 received a 10-minute inhalation and footbath with oil rose. The experimental group 2 received a 10-minute warm water footbath. Both interventions were applied at the onset of active and transitional phases. Control group, received routine care in labor. Anxiety was assessed using visual analogous scale (VASA) at onset of active and transitional phases before and after the intervention. Statistical comparison was performed using SPSS software version 16 and P < 0.05 was considered significant.\n\n\nRESULTS\nAnxiety scores in the intervention groups in active phase after intervention were significantly lower than the control group (P < 0.001). Anxiety scores before and after intervention in intervention groups in transitional phase was significantly lower than the control group (P < 0.001).\n\n\nCONCLUSIONS\nUsing aromatherapy and footbath reduces anxiety in active phase in nulliparous women.",
"title": ""
},
{
"docid": "6a763e49cdfd41b28922eb536d9404ed",
"text": "With recent advances in computer vision and graphics, it is now possible to generate videos with extremely realistic synthetic faces, even in real time. Countless applications are possible, some of which raise a legitimate alarm, calling for reliable detectors of fake videos. In fact, distinguishing between original and manipulated video can be a challenge for humans and computers alike, especially when the videos are compressed or have low resolution, as it often happens on social networks. Research on the detection of face manipulations has been seriously hampered by the lack of adequate datasets. To this end, we introduce a novel face manipulation dataset of about half a million edited images (from over 1000 videos). The manipulations have been generated with a state-of-the-art face editing approach. It exceeds all existing video manipulation datasets by at least an order of magnitude. Using our new dataset, we introduce benchmarks for classical image forensic tasks, including classification and segmentation, considering videos compressed at various quality levels. In addition, we introduce a benchmark evaluation for creating indistinguishable forgeries with known ground truth; for instance with generative refinement models.",
"title": ""
},
{
"docid": "785ca963ea1f9715cdea9baede4c6081",
"text": "In this paper, factor analysis is applied on a set of data that was collected to study the effectiveness of 58 different agile practices. The analysis extracted 15 factors, each was associated with a list of practices. These factors with the associated practices can be used as a guide for agile process improvement. Correlations between the extracted factors were calculated, and the significant correlation findings suggested that people who applied iterative and incremental development and quality assurance practices had a high success rate, that communication with the customer was not very popular as it had negative correlations with governance and iterative and incremental development. Also, people who applied governance practices also applied quality assurance practices. Interestingly success rate related negatively with traditional analysis methods such as Gantt chart and detailed requirements specification.",
"title": ""
},
{
"docid": "555f06011d03cbe8dedb2fcd198540e9",
"text": "We focus on the challenging task of real-time semantic segmentation in this paper. It finds many practical applications and yet is with fundamental difficulty of reducing a large portion of computation for pixel-wise label inference. We propose an image cascade network (ICNet) that incorporates multi-resolution branches under proper label guidance to address this challenge. We provide in-depth analysis of our framework and introduce the cascade feature fusion unit to quickly achieve highquality segmentation. Our system yields real-time inference on a single GPU card with decent quality results evaluated on challenging datasets like Cityscapes, CamVid and COCO-Stuff.",
"title": ""
},
{
"docid": "ba89a62ac2d1b36738e521d4c5664de2",
"text": "Currently, the network traffic control systems are mainly composed of the Internet core and wired/wireless heterogeneous backbone networks. Recently, these packet-switched systems are experiencing an explosive network traffic growth due to the rapid development of communication technologies. The existing network policies are not sophisticated enough to cope with the continually varying network conditions arising from the tremendous traffic growth. Deep learning, with the recent breakthrough in the machine learning/intelligence area, appears to be a viable approach for the network operators to configure and manage their networks in a more intelligent and autonomous fashion. While deep learning has received a significant research attention in a number of other domains such as computer vision, speech recognition, robotics, and so forth, its applications in network traffic control systems are relatively recent and garnered rather little attention. In this paper, we address this point and indicate the necessity of surveying the scattered works on deep learning applications for various network traffic control aspects. In this vein, we provide an overview of the state-of-the-art deep learning architectures and algorithms relevant to the network traffic control systems. Also, we discuss the deep learning enablers for network systems. In addition, we discuss, in detail, a new use case, i.e., deep learning based intelligent routing. We demonstrate the effectiveness of the deep learning-based routing approach in contrast with the conventional routing strategy. Furthermore, we discuss a number of open research issues, which researchers may find useful in the future.",
"title": ""
},
{
"docid": "c460660e6ea1cc38f4864fe4696d3a07",
"text": "Background. The effective development of healthcare competencies poses great educational challenges. A possible approach to provide learning opportunities is the use of augmented reality (AR) where virtual learning experiences can be embedded in a real physical context. The aim of this study was to provide a comprehensive overview of the current state of the art in terms of user acceptance, the AR applications developed and the effect of AR on the development of competencies in healthcare. Methods. We conducted an integrative review. Integrative reviews are the broadest type of research review methods allowing for the inclusion of various research designs to more fully understand a phenomenon of concern. Our review included multi-disciplinary research publications in English reported until 2012. Results. 2529 research papers were found from ERIC, CINAHL, Medline, PubMed, Web of Science and Springer-link. Three qualitative, 20 quantitative and 2 mixed studies were included. Using a thematic analysis, we've described three aspects related to the research, technology and education. This study showed that AR was applied in a wide range of topics in healthcare education. Furthermore acceptance for AR as a learning technology was reported among the learners and its potential for improving different types of competencies. Discussion. AR is still considered as a novelty in the literature. Most of the studies reported early prototypes. Also the designed AR applications lacked an explicit pedagogical theoretical framework. Finally the learning strategies adopted were of the traditional style 'see one, do one and teach one' and do not integrate clinical competencies to ensure patients' safety.",
"title": ""
}
] | scidocsrr |
0b590d5f3bc41286db3de0ab3bf48308 | Neural Models for Key Phrase Extraction and Question Generation | [
{
"docid": "8f916f7be3048ae2a367096f4f82207d",
"text": "Existing methods for single document keyphrase extraction usually make use of only the information contained in the specified document. This paper proposes to use a small number of nearest neighbor documents to provide more knowledge to improve single document keyphrase extraction. A specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results demonstrate the good effectiveness and robustness of our proposed approach.",
"title": ""
},
{
"docid": "86d58f4196ceb48e29cb143e6a157c22",
"text": "In this paper, we challenge a form of paragraph-to-question generation task. We propose a question generation system which can generate a set of comprehensive questions from a body of text. Besides the tree kernel functions to assess the grammatically of the generated questions, our goal is to rank them by using community-based question answering systems to calculate the importance of the generated questions. The main assumption behind our work is that each body of text is related to a topic of interest and it has a comprehensive information about the topic.",
"title": ""
}
] | [
{
"docid": "cdb937def5a92e3843a761f57278783e",
"text": "We design a novel, communication-efficient, failure-robust protocol for secure aggregation of high-dimensional data. Our protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner (i.e. without learning each user's individual contribution), and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network. We prove the security of our protocol in the honest-but-curious and active adversary settings, and show that security is maintained even if an arbitrarily chosen subset of users drop out at any time. We evaluate the efficiency of our protocol and show, by complexity analysis and a concrete implementation, that its runtime and communication overhead remain low even on large data sets and client pools. For 16-bit input values, our protocol offers $1.73 x communication expansion for 210 users and 220-dimensional vectors, and 1.98 x expansion for 214 users and 224-dimensional vectors over sending data in the clear.",
"title": ""
},
{
"docid": "5cd3abebf4d990bb9196b7019b29c568",
"text": "Wearing comfort of clothing is dependent on air permeability, moisture absorbency and wicking properties of fabric, which are related to the porosity of fabric. In this work, a plug-in is developed using Python script and incorporated in Abaqus/CAE for the prediction of porosity of plain weft knitted fabrics. The Plug-in is able to automatically generate 3D solid and multifilament weft knitted fabric models and accurately determine the porosity of fabrics in two steps. In this work, plain weft knitted fabrics made of monofilament, multifilament and spun yarn made of staple fibers were used to evaluate the effectiveness of the developed plug-in. In the case of staple fiber yarn, intra yarn porosity was considered in the calculation of porosity. The first step is to develop a 3D geometrical model of plain weft knitted fabric and the second step is to calculate the porosity of the fabric by using the geometrical parameter of 3D weft knitted fabric model generated in step one. The predicted porosity of plain weft knitted fabric is extracted in the second step and is displayed in the message area. The predicted results obtained from the plug-in have been compared with the experimental results obtained from previously developed models; they agreed well.",
"title": ""
},
{
"docid": "3f96a3cd2e3f795072567a3f3c8ccc46",
"text": "Good corporate reputations are critical because of their potential for value creation, but also because their intangible character makes replication by competing firms considerably more difficult. Existing empirical research confirms that there is a positive relationship between reputation and financial performance. This paper complements these findings by showing that firms with relatively good reputations are better able to sustain superior profit outcomes over time. In particular, we undertake an analysis of the relationship between corporate reputation and the dynamics of financial performance using two complementary dynamic models. We also decompose overall reputation into a component that is predicted by previous financial performance, and that which is ‘left over’, and find that each (orthogonal) element supports the persistence of above-average profits over time. Copyright 2002 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "e9b5dc63f981cc101521d8bbda1847d5",
"text": "The unsupervised image-to-image translation aims at finding a mapping between the source (A) and target (B) image domains, where in many applications aligned image pairs are not available at training. This is an ill-posed learning problem since it requires inferring the joint probability distribution from marginals. Joint learning of coupled mappings FAB : A → B and FBA : B → A is commonly used by the state-of-the-art methods, like CycleGAN (Zhu et al., 2017), to learn this translation by introducing cycle consistency requirement to the learning problem, i.e. FAB(FBA(B)) ≈ B and FBA(FAB(A)) ≈ A. Cycle consistency enforces the preservation of the mutual information between input and translated images. However, it does not explicitly enforce FBA to be an inverse operation to FAB. We propose a new deep architecture that we call invertible autoencoder (InvAuto) to explicitly enforce this relation. This is done by forcing an encoder to be an inverted version of the decoder, where corresponding layers perform opposite mappings and share parameters. The mappings are constrained to be orthonormal. The resulting architecture leads to the reduction of the number of trainable parameters (up to 2 times). We present image translation results on benchmark data sets and demonstrate state-of-the art performance of our approach. Finally, we test the proposed domain adaptation method on the task of road video conversion. We demonstrate that the videos converted with InvAuto have high quality and show that the NVIDIA neural-network-based end-toend learning system for autonomous driving, known as PilotNet, trained on real road videos performs well when tested on the converted ones.",
"title": ""
},
{
"docid": "288845120cdf96a20850b3806be3d89a",
"text": "DNA replicases are multicomponent machines that have evolved clever strategies to perform their function. Although the structure of DNA is elegant in its simplicity, the job of duplicating it is far from simple. At the heart of the replicase machinery is a heteropentameric AAA+ clamp-loading machine that couples ATP hydrolysis to load circular clamp proteins onto DNA. The clamps encircle DNA and hold polymerases to the template for processive action. Clamp-loader and sliding clamp structures have been solved in both prokaryotic and eukaryotic systems. The heteropentameric clamp loaders are circular oligomers, reflecting the circular shape of their respective clamp substrates. Clamps and clamp loaders also function in other DNA metabolic processes, including repair, checkpoint mechanisms, and cell cycle progression. Twin polymerases and clamps coordinate their actions with a clamp loader and yet other proteins to form a replisome machine that advances the replication fork.",
"title": ""
},
{
"docid": "46ac5e994ca0bf0c3ea5dd110810b682",
"text": "The Geosciences and Geography are not just yet another application area for semantic technologies. The vast heterogeneity of the involved disciplines ranging from the natural sciences to the social sciences introduces new challenges in terms of interoperability. Moreover, the inherent spatial and temporal information components also require distinct semantic approaches. For these reasons, geospatial semantics, geo-ontologies, and semantic interoperability have been active research areas over the last 20 years. The geospatial semantics community has been among the early adopters of the Semantic Web, contributing methods, ontologies, use cases, and datasets. Today, geographic information is a crucial part of many central hubs on the Linked Data Web. In this editorial, we outline the research field of geospatial semantics, highlight major research directions and trends, and glance at future challenges. We hope that this text will be valuable for geoscientists interested in semantics research as well as knowledge engineers interested in spatiotemporal data. Introduction and Motivation While the Web has changed with the advent of the Social Web from mostly authoritative content towards increasing amounts of user generated information, it is essentially still about linked documents. These documents provide structure and context for the described data and easy their interpretation. In contrast, the evolving Data Web is about linking data, not documents. Such datasets are not bound to a specific document but can be easily combined and used outside of their original creation context. With a growth rate of millions of new facts encoded as RDF-triples per month, the Linked Data cloud allows users to answer complex queries spanning multiple, heterogeneous data sources from different scientific domains. However, this uncoupling of data from its creation context makes the interpretation of data challenging. Thus, research on semantic interoperability and ontologies is crucial to ensure consistency and meaningful results. Space and time are fundamental ordering principles to structure such data and provide an implicit context for their interpretation. Hence, it is not surprising that many linked datasets either contain spatiotemporal identifiers themselves or link out to such datasets, making them central hubs of the Linked Data cloud. Prominent examples include Geonames.org as well as the Linked Geo Data project, which provides a RDF serialization of Points Of Interest from Open Street Map [103]. Besides such Voluntary Geographic Information (VGI), governments 1570-0844/12/$27.50 c © 2012 – IOS Press and the authors. All rights reserved",
"title": ""
},
{
"docid": "2aefddf5e19601c8338f852811cebdee",
"text": "This paper presents a system that allows online building of 3D wireframe models through a combination of user interaction and automated methods from a handheld camera-mouse. Crucially, the model being built is used to concurrently compute camera pose, permitting extendable tracking while enabling the user to edit the model interactively. In contrast to other model building methods that are either off-line and/or automated but computationally intensive, the aim here is to have a system that has low computational requirements and that enables the user to define what is relevant (and what is not) at the time the model is being built. OutlinAR hardware is also developed which simply consists of the combination of a camera with a wide field of view lens and a wheeled computer mouse.",
"title": ""
},
{
"docid": "37572963400c8a78cef3cd4a565b328e",
"text": "The impressive performance of utilizing deep learning or neural network has attracted much attention in both the industry and research communities, especially towards computer vision aspect related applications. Despite its superior capability of learning, generalization and interpretation on various form of input, micro-expression analysis field is yet remains new in applying this kind of computing system in automated expression recognition system. A new feature extractor, BiVACNN is presented in this paper, where it first estimates the optical flow fields from the apex frame, then encode the flow fields features using CNN. Concretely, the proposed method consists of three stages: apex frame acquisition, multivariate features formation and feature learning using CNN. In the multivariate features formation stage, we attempt to derive six distinct features from the apex details, which include: the apex itself, difference between the apex and onset frames, horizontal optical flow, vertical optical flow, magnitude and orientation. It is demonstrated that utilizing the horizontal and vertical optical flow capable to achieve 80% recognition accuracy in CASME II and SMIC-HS databases.",
"title": ""
},
{
"docid": "9d37260c493c40523c268f6e54c8b4ea",
"text": "Social collaborative filtering recommender systems extend the traditional user-to-item interaction with explicit user-to-user relationships, thereby allowing for a wider exploration of correlations among users and items, that potentially lead to better recommendations. A number of methods have been proposed in the direction of exploring the social network, either locally (i.e. the vicinity of each user) or globally. In this paper, we propose a novel methodology for collaborative filtering social recommendation that tries to combine the merits of both the aforementioned approaches, based on the soft-clustering of the Friend-of-a-Friend (FoaF) network of each user. This task is accomplished by the non-negative factorization of the adjacency matrix of the FoaF graph, while the edge-centric logic of the factorization algorithm is ameliorated by incorporating more general structural properties of the graph, such as the number of edges and stars, through the introduction of the exponential random graph models. The preliminary results obtained reveal the potential of this idea.",
"title": ""
},
{
"docid": "6604a90f21796895300d37cefed5b6fa",
"text": "Distributed power system network is going to be complex, and it will require high-speed, reliable and secure communication systems for managing intermittent generation with coordination of centralised power generation, including load control. Cognitive Radio (CR) is highly favourable for providing communications in Smart Grid by using spectrum resources opportunistically. The IEEE 802.22 Wireless Regional Area Network (WRAN) having the capabilities of CR use vacant channels opportunistically in the frequency range of 54 MHz to 862 MHz occupied by TV band. A comprehensive review of using IEEE 802.22 for Field Area Network in power system network using spectrum sensing (CR based communication) is provided in this paper. The spectrum sensing technique(s) at Base Station (BS) and Customer Premises Equipment (CPE) for detecting the presence of incumbent in order to mitigate interferences is also studied. The availability of backup and candidate channels are updated during “Quite Period” for further use (spectrum switching and management) with geolocation capabilities. The use of IEEE 802.22 for (a) radio-scene analysis, (b) channel identification, and (c) dynamic spectrum management are examined for applications in power management.",
"title": ""
},
{
"docid": "e8403145a3d4a8a75348075410683e28",
"text": "This paper presents a current-reuse complementary-input (CRCI) telescopic-cascode chopper stabilized amplifier with low-noise low-power operation. The current-reuse complementary-input strategy doubles the amplifier's effective transconductance by full current-reuse between complementary inputs, which significantly improves the noise-power efficiency. A pseudo-resistor based integrator is used in the DC servo loop to generate a high-pass cutoff below 1 Hz. The proposed amplifier features a mid-band gain of 39.25 dB, bandwidth from 0.12 Hz to 7.6 kHz, and draws 2.57 μA from a 1.2-V supply and exhibits an input-referred noise of 3.57 μVrms integrated from 100 mHz to 100 kHz, corresponding to a noise efficiency factor (NEF) of 2.5. The amplifier is designed in 0.13 μm 8-metal CMOS process.",
"title": ""
},
{
"docid": "6c92652aa5bab1b25910d16cca697d48",
"text": "Intrusion detection has attracted a considerable interest from researchers and industries. The community, after many years of research, still faces the problem of building reliable and efficient IDS that are capable of handling large quantities of data, with changing patterns in real time situations. The work presented in this manuscript classifies intrusion detection systems (IDS). Moreover, a taxonomy and survey of shallow and deep networks intrusion detection systems is presented based on previous and current works. This taxonomy and survey reviews machine learning techniques and their performance in detecting anomalies. Feature selection which influences the effectiveness of machine learning (ML) IDS is discussed to explain the role of feature selection in the classification and training phase of ML IDS. Finally, a discussion of the false and true positive alarm rates is presented to help researchers model reliable and efficient machine learning based intrusion detection systems. Keywords— Shallow network, Deep networks, Intrusion detection, False positive alarm rates and True positive alarm rates 1.0 INTRODUCTION Computer networks have developed rapidly over the years contributing significantly to social and economic development. International trade, healthcare systems and military capabilities are examples of human activity that increasingly rely on networks. This has led to an increasing interest in the security of networks by industry and researchers. The importance of Intrusion Detection Systems (IDS) is critical as networks can become vulnerable to attacks from both internal and external intruders [1], [2]. An IDS is a detection system put in place to monitor computer networks. These have been in use since the 1980’s [3]. By analysing patterns of captured data from a network, IDS help to detect threats [4]. These threats can be devastating, for example, Denial of service (DoS) denies or prevents legitimate users resource on a network by introducing unwanted traffic [5]. Malware is another example, where attackers use malicious software to disrupt systems [6].",
"title": ""
},
{
"docid": "27401a6fe6a1edb5ba116db4bbdc7bcc",
"text": "Robot warehouse automation has attracted significant interest in recent years, perhaps most visibly in the Amazon Picking Challenge (APC) [1]. A fully autonomous warehouse pick-and-place system requires robust vision that reliably recognizes and locates objects amid cluttered environments, self-occlusions, sensor noise, and a large variety of objects. In this paper we present an approach that leverages multiview RGB-D data and self-supervised, data-driven learning to overcome those difficulties. The approach was part of the MIT-Princeton Team system that took 3rd- and 4th-place in the stowing and picking tasks, respectively at APC 2016. In the proposed approach, we segment and label multiple views of a scene with a fully convolutional neural network, and then fit pre-scanned 3D object models to the resulting segmentation to get the 6D object pose. Training a deep neural network for segmentation typically requires a large amount of training data. We propose a self-supervised method to generate a large labeled dataset without tedious manual segmentation. We demonstrate that our system can reliably estimate the 6D pose of objects under a variety of scenarios. All code, data, and benchmarks are available at http://apc.cs.princeton.edu/",
"title": ""
},
{
"docid": "8e64738b0d21db1ec5ef0220507f3130",
"text": "Automatic clothes search in consumer photos is not a trivial problem as photos are usually taken under completely uncontrolled realistic imaging conditions. In this paper, a novel framework is presented to tackle this issue by leveraging low-level features (e.g., color) and high-level features (attributes) of clothes. First, a content-based image retrieval(CBIR) approach based on the bag-of-visual-words (BOW) model is developed as our baseline system, in which a codebook is constructed from extracted dominant color patches. A reranking approach is then proposed to improve search quality by exploiting clothes attributes, including the type of clothes, sleeves, patterns, etc. The experiments on photo collections show that our approach is robust to large variations of images taken in unconstrained environment, and the reranking algorithm based on attribute learning significantly improves retrieval performance in combination with the proposed baseline.",
"title": ""
},
{
"docid": "e82e44e851486b557948a63366486fef",
"text": "v Combinatorial and algorithmic aspects of identifying codes in graphs Abstract: An identifying code is a set of vertices of a graph such that, on the one hand, each vertex out of the code has a neighbour in the code (the domination property), and, on the other hand, all vertices have a distinct neighbourhood within the code (the separation property). In this thesis, we investigate combinatorial and algorithmic aspects of identifying codes. For the combinatorial part, we rst study extremal questions by giving a complete characterization of all nite undirected graphs having their order minus one as the minimum size of an identifying code. We also characterize nite directed graphs, in nite undirected graphs and in nite oriented graphs having their whole vertex set as the unique identifying code. These results answer open questions that were previously studied in the literature. We then study the relationship between the minimum size of an identifying code and the maximum degree of a graph. In particular, we give several upper bounds for this parameter as a function of the order and the maximum degree. These bounds are obtained using two techniques. The rst one consists in the construction of independent sets satisfying certain properties, and the second one is the combination of two tools from the probabilistic method: the Lovász local lemma and a Cherno bound. We also provide constructions of graph families related to this type of upper bounds, and we conjecture that they are optimal up to an additive constant. We also present new lower and upper bounds for the minimum cardinality of an identifying code in speci c graph classes. We study graphs of girth at least 5 and of given minimum degree by showing that the combination of these two parameters has a strong in uence on the minimum size of an identifying code. We apply these results to random regular graphs. Then, we give lower bounds on the size of a minimum identifying code of interval and unit interval graphs. Finally, we prove several lower and upper bounds for this parameter when considering line graphs. The latter question is tackled using the new notion of an edge-identifying code. For the algorithmic part, it is known that the decision problem associated with the notion of an identifying code is NP-complete, even for restricted graph classes. We extend the known results to other classes such as split graphs, co-bipartite graphs, line graphs or interval graphs. To this end, we propose polynomial-time reductions from several classical hard algorithmic problems. These results show that in many graph classes, the identifying code problem is computationally more di cult than related problems (such as the dominating set problem). Furthermore, we extend the knowledge of the approximability of the optimization problem associated to identifying codes. We extend the known result of NP-hardness of approximating this problem within a sub-logarithmic factor (as a function of the instance graph) to bipartite, split and co-bipartite graphs, respectively. We also extend the known result of its APX-hardness for graphs of given maximum degree to a subclass of split graphs, bipartite graphs of maximum degree 4 and line graphs. Finally, we show the existence of a PTAS algorithm for unit interval graphs. 
"title": ""
},
{
"docid": "bef317c450503a7f2c2147168b3dd51e",
"text": "With the development of the Internet of Things (IoT) and the usage of low-powered devices (sensors and effectors), a large number of people are using IoT systems in their homes and businesses to have more control over their technology. However, a key challenge of IoT systems is data protection in case the IoT device is lost, stolen, or used by one of the owner's friends or family members. The problem studied here is how to protect the access to data of an IoT system. To solve the problem, an attribute-based access control (ABAC) mechanism is applied to give the system the ability to apply policies to detect any unauthorized entry. Finally, a prototype was built to test the proposed solution. The evaluation plan was applied on the proposed solution to test the performance of the system.",
"title": ""
},
{
"docid": "3d2e82a0353d0b2803a579c413403338",
"text": "In 1994, nutritional facts panels became mandatory for processed foods to improve consumer access to nutritional information and to promote healthy food choices. Recent applied work is reviewed here in terms of how consumers value and respond to nutritional labels. We first summarize the health and nutritional links found in the literature and frame this discussion in terms of the obesity policy debate. Second, we discuss several approaches that have been used to empirically investigate consumer responses to nutritional labels: (a) surveys, (b) nonexperimental approaches utilizing revealed preferences, and (c) experimentbased approaches. We conclude with a discussion and suggest avenues of future research. INTRODUCTION How the provision of nutritional information affects consumers’ food choices and whether consumers value nutritional information are particularly pertinent questions in a country where obesity is pervasive. Firms typically have more information about the quality of their products than do consumers, creating a situation of asymmetric information. It is prohibitively costly for most consumers to acquire nutritional information independently of firms. Firms can use this Publisher: ANNUALREVIEWS; Journal: ARRE: Annual Review of Resource Economics; Copyright: Volume: 3; Issue: 0; Manuscript: 3_McCluskey; Month: ; Year: 2011 DOI: ; TOC Head: ; Section Head: ; Article Type: REVIEW ARTICLE Page 1 of 30 information to signal their quality and to receive quality premiums. However, firms that sell less nutritious products prefer to omit nutritional information. In this market setting, firms may not have an incentive to fully reveal their product quality, may try to highlight certain attributes in their advertising claims while shrouding others (Gabaix & Laibson 2006), or may provide information in a less salient fashion (Chetty et al. 2007). Mandatory nutritional labeling can fill this void of information provision by correcting asymmetric information and transforming an experience-good or a credence-good characteristic into search-good characteristics (Caswell & Mojduszka 1996). Golan et al. (2000) argue that the effectiveness of food labeling depends on firms’ incentives for information provision, government information requirements, and the role of third-party entities in standardizing and certifying the accuracy of the information. Yet nutritional information is valuable only if consumers use it in some fashion. Early advances in consumer choice theory, such as market goods possessing desirable characteristics (Lancaster 1966) or market goods used in conjunction with time to produce desirable commodities (Becker 1965), set the theoretical foundation for studying how market prices, household characteristics, incomes, nutrient content, and taste considerations interact with and influence consumer choice. LaFrance (1983) develops a theoretical framework and estimates the marginal value of nutrient versus taste parameters in an analytical approach that imposes a sufficient degree of restrictions to generality to be empirically feasible. Real or perceived tradeoffs between nutritional and taste or pleasure considerations imply that consumers will not necessarily make healthier choices. Reduced search costs mean that consumers can more easily make choices that maximize their utility. Foster & Just (1989) provide a framework in which to analyze the effect of information on consumer choice and welfare in this context. 
They argue that Publisher: ANNUALREVIEWS; Journal: ARRE: Annual Review of Resource Economics; Copyright: Volume: 3; Issue: 0; Manuscript: 3_McCluskey; Month: ; Year: 2011 DOI: ; TOC Head: ; Section Head: ; Article Type: REVIEW ARTICLE Page 2 of 30 when consumers are uncertain about product quality, the provision of information can help to better align choices with consumer preferences. However, consumers may not use nutritional labels because consumers still require time and effort to process the information. Reading a nutritional facts panel (NFP), for instance, necessitates that the consumer remove the product from the shelf and turn the product to read the nutritional information on the back or side. In addition, consumers often have difficulty evaluating the information provided on the NFP or how to relate it to a healthy diet. Berning et al. (2008) present a simple model of demand for nutritional information. The consumer chooses to consume goods and information to maximize utility subject to budget and time constraints, which include time to acquire and to process nutritional information. Consumers who have strong preferences for nutritional content will acquire more nutritional information. Alternatively, other consumers may derive more utility from appearance or taste. Following Becker & Murphy (1993), Berning et al. show that nutritional information may act as a complement to the consumption of products with unknown nutritional quality, similar to the way advertisements complement advertised goods. From a policy perspective, the rise in the U.S. obesity rate coupled with the asymmetry of information have resulted in changes in the regulatory environment. The U.S. Food and Drug Administration (FDA) is currently considering a change to the format and content of nutritional labels, originally implemented in 1994 to promote increased label use. Consumers’ general understanding of the link between food consumption and health, and widespread interest in the provision of nutritional information on food labels, is documented in the existing literature (e.g., Williams 2005, Grunert & Wills 2007). Yet only approximately half Publisher: ANNUALREVIEWS; Journal: ARRE: Annual Review of Resource Economics; Copyright: Volume: 3; Issue: 0; Manuscript: 3_McCluskey; Month: ; Year: 2011 DOI: ; TOC Head: ; Section Head: ; Article Type: REVIEW ARTICLE Page 3 of 30 of consumers claim to use NFPs when making food purchasing decisions (Blitstein & Evans 2006). Moreover, self-reported consumer use of nutritional labels has declined from 1995 to 2006, with the largest decline for younger age groups (20–29 years) and less educated consumers (Todd & Variyam 2008). This decline supports research findings that consumers prefer for short front label claims over the NFP’s lengthy back label explanations (e.g., Levy & Fein 1998, Wansink et al. 2004, Williams 2005, Grunert & Wills 2007). Furthermore, regulatory rules and enforcement policies may have induced firms to move away from reinforcing nutritional claims through advertising (e.g., Ippolito & Pappalardo 2002). Finally, critical media coverage of regulatory challenges (e.g., Nestle 2000) may have contributed to decreased labeling usage over time. Excellent review papers on this topic preceded and inspired this present review (e.g., Baltas 2001, Williams 2005, Drichoutis et al. 2006). In particular, Drichoutis et al. 
(2006) reviews the nutritional labeling literature and addresses specific issues regarding the determinants of label use, the debate on mandatory labeling, label formats preferred by consumers, and the effect of nutritional label use on purchase and dietary behavior. The current review article updates and complements these earlier reviews by focusing on recent work and highlighting major contributions in applied analyses on how consumers value, utilize, and respond to nutritional labels. We first cover the health and nutritional aspects of consumer food choices found in the literature to frame the discussion on nutritional labels in the context of the recent debate on obesity prevention policies. Second, we discuss the different empirical approaches that are utilized to investigate consumers’ response to and valuation of nutritional labels, classifying existing work into three categories according to the empirical strategy and data sources. First, we present findings based on consumer surveys and stated consumer responses to Publisher: ANNUALREVIEWS; Journal: ARRE: Annual Review of Resource Economics; Copyright: Volume: 3; Issue: 0; Manuscript: 3_McCluskey; Month: ; Year: 2011 DOI: ; TOC Head: ; Section Head: ; Article Type: REVIEW ARTICLE Page 4 of 30 labels. The second set of articles reviewed utilizes nonexperimental data and focuses on estimating consumer valuation of labels on the basis of revealed preferences. Here, the empirical strategy is structural, using hedonic methods, structural demand analyses, or discrete choice models and allowing for estimation of consumers’ willingness to pay (WTP) for nutritional information. The last set of empirical contributions discussed is based on experimental data, differentiating market-level and natural experiments from laboratory evidence. These studies employ mainly reduced-form approaches. Finally, we conclude with a discussion of avenues for future research. CONSUMER FOOD DEMAND, NUTRITIONAL LABELS, AND OBESITY PREVENTION The U.S. Department of Health and Public Services declared the reduction of obesity rates to less than 15% to be one of the national health objectives for 2010, yet in 2009 no state met these targets, with only two states reporting obesity rates less than 20% (CDC 2010). Researchers have studied and identified many contributing factors, such as the decreasing relative price of caloriedense food (Chou et al. 2004) and marketing practices that took advantage of behavioral reactions to food (Smith 2004). Other researchers argue that an increased prevalence of fast food (Cutler et al. 2003) and increased portion sizes in restaurants and at home (Wansink & van Ittersum 2007) may be the driving factors of increased food consumption. In addition, food psychologists have focused on changes in the eating environment, pointing to distractions such as television, books, conversation with others, or preoccupation with work as leading to increased food intake (Wansink 2004). Although each of these factors potentially contributes to the obesity epidemic, they do not necessarily mean that consumers wi",
"title": ""
},
{
"docid": "c3e2ceebd3868dd9fff2a87fdd339dce",
"text": "Augmented Reality (AR) holds unique and promising potential to bridge between real-world activities and digital experiences, allowing users to engage their imagination and boost their creativity. We propose the concept of Augmented Creativity as employing ar on modern mobile devices to enhance real-world creative activities, support education, and open new interaction possibilities. We present six prototype applications that explore and develop Augmented Creativity in different ways, cultivating creativity through ar interactivity. Our coloring book app bridges coloring and computer-generated animation by allowing children to create their own character design in an ar setting. Our music apps provide a tangible way for children to explore different music styles and instruments in order to arrange their own version of popular songs. In the gaming domain, we show how to transform passive game interaction into active real-world movement that requires coordination and cooperation between players, and how ar can be applied to city-wide gaming concepts. We employ the concept of Augmented Creativity to authoring interactive narratives with an interactive storytelling framework. Finally, we examine how Augmented Creativity can provide a more compelling way to understand complex concepts, such as computer programming.",
"title": ""
},
{
"docid": "583d2f754a399e8446855b165407f6ee",
"text": "In this work, classification of cellular structures in the high resolutional histopathological images and the discrimination of cellular and non-cellular structures have been investigated. The cell classification is a very exhaustive and time-consuming process for pathologists in medicine. The development of digital imaging in histopathology has enabled the generation of reasonable and effective solutions to this problem. Morever, the classification of digital data provides easier analysis of cell structures in histopathological data. Convolutional neural network (CNN), constituting the main theme of this study, has been proposed with different spatial window sizes in RGB color spaces. Hence, to improve the accuracies of classification results obtained by supervised learning methods, spatial information must also be considered. So, spatial dependencies of cell and non-cell pixels can be evaluated within different pixel neighborhoods in this study. In the experiments, the CNN performs superior than other pixel classification methods including SVM and k-Nearest Neighbour (k-NN). At the end of this paper, several possible directions for future research are also proposed.",
"title": ""
},
{
"docid": "20e5855c2bab00b7f91cca5d7bd07245",
"text": "The increase in the number and complexity of biological databases has raised the need for modern and powerful data analysis tools and techniques. In order to fulfill these requirements, the machine learning discipline has become an everyday tool in bio-laboratories. The use of machine learning techniques has been extended to a wide spectrum of bioinformatics applications. It is broadly used to investigate the underlying mechanisms and interactions between biological molecules in many diseases, and it is an essential tool in any biomarker discovery process. In this chapter, we provide a basic taxonomy of machine learning algorithms, and the characteristics of main data preprocessing, supervised classification, and clustering techniques are shown. Feature selection, classifier evaluation, and two supervised classification topics that have a deep impact on current bioinformatics are presented. We make the interested reader aware of a set of popular web resources, open source software tools, and benchmarking data repositories that are frequently used by the machine",
"title": ""
}
] | scidocsrr |
c1c84ea618835e7592aedf1fdf0bb1c2 | Improving the Reproducibility of PAN's Shared Tasks: - Plagiarism Detection, Author Identification, and Author Profiling | [
{
"docid": "c43785187ce3c4e7d1895b628f4a2df3",
"text": "In this paper we focus on the connection between age and language use, exploring age prediction of Twitter users based on their tweets. We discuss the construction of a fine-grained annotation effort to assign ages and life stages to Twitter users. Using this dataset, we explore age prediction in three different ways: classifying users into age categories, by life stages, and predicting their exact age. We find that an automatic system achieves better performance than humans on these tasks and that both humans and the automatic systems have difficulties predicting the age of older people. Moreover, we present a detailed analysis of variables that change with age. We find strong patterns of change, and that most changes occur at young ages.",
"title": ""
},
{
"docid": "515e4ae8fabe93495d8072fe984d8bb6",
"text": "Most studies in statistical or machine learning based authorship attribution focus on two or a few authors. This leads to an overestimation of the importance of the features extracted from the training data and found to be discriminating for these small sets of authors. Most studies also use sizes of training data that are unrealistic for situations in which stylometry is applied (e.g., forensics), and thereby overestimate the accuracy of their approach in these situations. A more realistic interpretation of the task is as an authorship verification problem that we approximate by pooling data from many different authors as negative examples. In this paper, we show, on the basis of a new corpus with 145 authors, what the effect is of many authors on feature selection and learning, and show robustness of a memory-based learning approach in doing authorship attribution and verification with many authors and limited training data when compared to eager learning methods such as SVMs and maximum entropy learning.",
"title": ""
}
] | [
{
"docid": "503277b20b3fd087df5c91c1a7c7a173",
"text": "Among vertebrates, only microchiropteran bats, cetaceans and some rodents are known to produce and detect ultrasounds (frequencies greater than 20 kHz) for the purpose of communication and/or echolocation, suggesting that this capacity might be restricted to mammals. Amphibians, reptiles and most birds generally have limited hearing capacity, with the ability to detect and produce sounds below ∼12 kHz. Here we report evidence of ultrasonic communication in an amphibian, the concave-eared torrent frog (Amolops tormotus) from Huangshan Hot Springs, China. Males of A. tormotus produce diverse bird-like melodic calls with pronounced frequency modulations that often contain spectral energy in the ultrasonic range. To determine whether A. tormotus communicates using ultrasound to avoid masking by the wideband background noise of local fast-flowing streams, or whether the ultrasound is simply a by-product of the sound-production mechanism, we conducted acoustic playback experiments in the frogs' natural habitat. We found that the audible as well as the ultrasonic components of an A. tormotus call can evoke male vocal responses. Electrophysiological recordings from the auditory midbrain confirmed the ultrasonic hearing capacity of these frogs and that of a sympatric species facing similar environmental constraints. This extraordinary upward extension into the ultrasonic range of both the harmonic content of the advertisement calls and the frog's hearing sensitivity is likely to have co-evolved in response to the intense, predominantly low-frequency ambient noise from local streams. Because amphibians are a distinct evolutionary lineage from microchiropterans and cetaceans (which have evolved ultrasonic hearing to minimize congestion in the frequency bands used for sound communication and to increase hunting efficacy in darkness), ultrasonic perception in these animals represents a new example of independent evolution.",
"title": ""
},
{
"docid": "7458ca6334cf5f02c6a30466cd8de2ce",
"text": "BACKGROUND\nFecal incontinence (FI) in children is frequently encountered in pediatric practice, and often occurs in combination with urinary incontinence. In most cases, FI is constipation-associated, but in 20% of children presenting with FI, no constipation or other underlying cause can be found - these children suffer from functional nonretentive fecal incontinence (FNRFI).\n\n\nOBJECTIVE\nTo summarize the evidence-based recommendations of the International Children's Continence Society for the evaluation and management of children with FNRFI.\n\n\nRECOMMENDATIONS\nFunctional nonretentive fecal incontinence is a clinical diagnosis based on medical history and physical examination. Except for determining colonic transit time, additional investigations are seldom indicated in the workup of FNRFI. Treatment should consist of education, a nonaccusatory approach, and a toileting program encompassing a daily bowel diary and a reward system. Special attention should be paid to psychosocial or behavioral problems, since these frequently occur in affected children. Functional nonretentive fecal incontinence is often difficult to treat, requiring prolonged therapies with incremental improvement on treatment and frequent relapses.",
"title": ""
},
{
"docid": "7087355045b28921ebc63296780415d9",
"text": "The Indian regional navigational satellite system (IRNSS) developed by the Indian Space Research Organization (ISRO) is an autonomous regional satellite navigation system which is under the complete control of Government of India. The requirement of indigenous regional navigational satellite system is driven by the fact that access to Global Navigation Satellite System, like GPS is not guaranteed in hostile situations. Design of IRNSS antenna at user segment is mandatory for Indian region. The IRNSS satellites will be placed at a higher geostationary orbit to have a larger signal footprint and minimum satellites for regional mapping. IRNSS signals will consist of a Special Positioning Service and a Precision Service. Both will be carried on L5 band (1176.45 MHz) and S band (2492.08 MHz). As it is be long range communication system needs high frequency signals and high gain receiving antennas. So, different antennas can be designed to enhance the gain and directivity. Based on this the rectangular Microstrip patch antenna, planar array of patch antennas and planar, wideband feed slot spiral antenna are designed by using various software simulations. Use of array of spiral antennas will increase the gain position. Spiral antennas are comparatively small size and these antennas with its windings making it an extremely small structure. The performance of the designed antennas was compared in terms of return loss, bandwidth, directivity, radiation pattern and gain. In this paper, Review results of all antennas designed for IRNSS have presented.",
"title": ""
},
{
"docid": "f6d87c501bae68fe1b788e5b01bd17cc",
"text": "The matrix completion problem consists of finding or approximating a low-rank matrix based on a few samples of this matrix. We propose a novel algorithm for matrix completion that minimizes the least square distance on the sampling set over the Riemannian manifold of fixed-rank matrices. The algorithm is an adaptation of classical non-linear conjugate gradients, developed within the framework of retraction-based optimization on manifolds. We describe all the necessary objects from differential geometry necessary to perform optimization over this lowrank matrix manifold, seen as a submanifold embedded in the space of matrices. In particular, we describe how metric projection can be used as retraction and how vector transport lets us obtain the conjugate search directions. Additionally, we derive second-order models that can be used in Newton’s method based on approximating the exponential map on this manifold to second order. Finally, we prove convergence of a regularized version of our algorithm under the assumption that the restricted isometry property holds for incoherent matrices throughout the iterations. The numerical experiments indicate that our approach scales very well for large-scale problems and compares favorable with the state-of-the-art, while outperforming most existing solvers.",
"title": ""
},
{
"docid": "f5360ff8d8cc5d0a852cebeb09a29a98",
"text": "In this paper, we propose a collaborative deep reinforcement learning (C-DRL) method for multi-object tracking. Most existing multiobject tracking methods employ the tracking-by-detection strategy which first detects objects in each frame and then associates them across different frames. However, the performance of these methods rely heavily on the detection results, which are usually unsatisfied in many real applications, especially in crowded scenes. To address this, we develop a deep prediction-decision network in our C-DRL, which simultaneously detects and predicts objects under a unified network via deep reinforcement learning. Specifically, we consider each object as an agent and track it via the prediction network, and seek the optimal tracked results by exploiting the collaborative interactions of different agents and environments via the decision network.Experimental results on the challenging MOT15 and MOT16 benchmarks are presented to show the effectiveness of our approach.",
"title": ""
},
{
"docid": "a7e3338d682278643fdd7eefa795f3f3",
"text": "State of the art models using deep neural networks have become very good in learning an accurate mapping from inputs to outputs. However, they still lack generalization capabilities in conditions that differ from the ones encountered during training. This is even more challenging in specialized, and knowledge intensive domains, where training data is limited. To address this gap, we introduce MedNLI1 – a dataset annotated by doctors, performing a natural language inference task (NLI), grounded in the medical history of patients. We present strategies to: 1) leverage transfer learning using datasets from the open domain, (e.g. SNLI) and 2) incorporate domain knowledge from external data and lexical sources (e.g. medical terminologies). Our results demonstrate performance gains using both strategies.",
"title": ""
},
{
"docid": "e584e7e0c96bc78bc2b2166d1af272a6",
"text": "In this paper we investigate the problem of inducing a distribution over three-dimensional structures given two-dimensional views of multiple objects taken from unknown viewpoints. Our approach called \"projective generative adversarial networks\" (PrGANs) trains a deep generative model of 3D shapes whose projections match the distributions of the input 2D views. The addition of a projection module allows us to infer the underlying 3D shape distribution without using any 3D, viewpoint information, or annotation during the learning phase. We show that our approach produces 3D shapes of comparable quality to GANs trained on 3D data for a number of shape categories including chairs, airplanes, and cars. Experiments also show that the disentangled representation of 2D shapes into geometry and viewpoint leads to a good generative model of 2D shapes. The key advantage is that our model allows us to predict 3D, viewpoint, and generate novel views from an input image in a completely unsupervised manner.",
"title": ""
},
{
"docid": "fff6c1ca2fde7f50c3654f1953eb97e6",
"text": "This paper concerns new techniques for making requirements specifications precise, concise, unambiguous, and easy to check for completeness and consistency. The techniques are well-suited for complex real-time software systems; they were developed to document the requirements of existing flight software for the Navy's A-7 aircraft. The paper outlines the information that belongs in a requirements document and discusses the objectives behind the techniques. Each technique is described and illustrated with examples from the A-7 document. The purpose of the paper is to introduce the A-7 document as a model of a disciplined approach to requirements specification; the document is available to anyone who wishes to see a fully worked-out example of the approach.",
"title": ""
},
{
"docid": "1bc285b8bd63e701a55cf956179abbac",
"text": "A new anode/cathode design and process concept for thin wafer based silicon devices is proposed to achieve the goal of providing improved control for activating the injecting layer and forming a good ohmic contact. The concept is based on laser annealing in a melting regime of a p-type anode layer covered with a thin titanium layer with high melting temperature and high laser light absorption. The improved activation control of a boron anode layer is demonstrated on the Soft Punch Through IGBT with a nominal breakdown voltage of 1700 V. Furthermore, the silicidation of the titanium absorbing layer, which is necessary for achieving a low VCE ON, is discussed in terms of optimization of the device electrical parameters.",
"title": ""
},
{
"docid": "8877d6753d6b7cd39ba36c074ca56b00",
"text": "Perhaps the most fundamental application of affective computing will be Human-Computer Interaction (HCI) in which the computer should have the ability to detect and track the user's affective states, and make corresponding feedback. The human multi-sensor affect system defines the expectation of multimodal affect analyzer. In this paper, we present our efforts toward audio-visual HCI-related affect recognition. With HCI applications in mind, we take into account some special affective states which indicate users' cognitive/motivational states. Facing the fact that a facial expression is influenced by both an affective state and speech content, we apply a smoothing method to extract the information of the affective state from facial features. In our fusion stage, a voting method is applied to combine audio and visual modalities so that the final affect recognition accuracy is greatly improved. We test our bimodal affect recognition approach on 38 subjects with 11 HCI-related affect states. The extensive experimental results show that the average person-dependent affect recognition accuracy is almost 90% for our bimodal fusion.",
"title": ""
},
{
"docid": "c182be9222690ffe1c94729b2b79d8ed",
"text": "A balanced level of muscle strength between the different parts of the scapular muscles is important in optimizing performance and preventing injuries in athletes. Emerging evidence suggests that many athletes lack balanced strength in the scapular muscles. Evidence-based recommendations are important for proper exercise prescription. This study determines scapular muscle activity during strengthening exercises for scapular muscles performed at low and high intensities (Borg CR10 levels 3 and 8). Surface electromyography (EMG) from selected scapular muscles was recorded during 7 strengthening exercises and expressed as a percentage of the maximal EMG. Seventeen women (aged 24-55 years) without serious disorders participated. Several of the investigated exercises-press-up, prone flexion, one-arm row, and prone abduction at Borg 3 and press-up, push-up plus, and one-arm row at Borg 8-predominantly activated the lower trapezius over the upper trapezius (activation difference [Δ] 13-30%). Likewise, several of the exercises-push-up plus, shoulder press, and press-up at Borg 3 and 8-predominantly activated the serratus anterior over the upper trapezius (Δ18-45%). The middle trapezius was activated over the upper trapezius by one-arm row and prone abduction (Δ21-30%). Although shoulder press and push-up plus activated the serratus anterior over the lower trapezius (Δ22-33%), the opposite was true for prone flexion, one-arm row, and prone abduction (Δ16-54%). Only the press-up and push-up plus activated both the lower trapezius and the serratus anterior over the upper trapezius. In conclusion, several of the investigated exercises both at low and high intensities predominantly activated the serratus anterior and lower and middle trapezius, respectively, over the upper trapezius. These findings have important practical implications for exercise prescription for optimal shoulder function. For example, both workers with neck pain and athletes at risk of shoulder impingement (e.g., overhead sports) should perform push-up plus and press-ups to specifically strengthen the serratus anterior and lower trapezius.",
"title": ""
},
{
"docid": "a01abbced99f14ae198c6abef6454126",
"text": "Coreference Resolution September 2014 Present Kevin Clark, Christopher Manning Stanford University Developed coreference systems that build up coreference chains with agglomerative clustering. These models are more accurate than the mention-pair systems commonly used in prior work. Developed neural coreference systems that do not require the large number of complex hand-engineered features commonly found in statistical coreference systems. Applied imitation and reinforcement learning to directly optimize coreference systems for evaluation metrics instead of relying on hand-tuned heuristic loss functions. Made substantial advancements to the current state-of-the-art for English and Chinese coreference. Publicly released all models through Stanford’s CoreNLP.",
"title": ""
},
{
"docid": "4ec91fd15f10c1c8616a890447c2b063",
"text": "Texture is an important visual clue for various classification and segmentation tasks in the scene understanding challenge. Today, successful deployment of deep learning algorithms for texture recognition leads to tremendous precisions on standard datasets. In this paper, we propose a new learning framework to train deep neural networks in parallel and with variable depth for texture recognition. Our framework learns scales, orientations and resolutions of texture filter banks. Due to the learning of parameters not the filters themselves, computational costs are highly reduced. It is also capable of extracting very deep features through distributed computing architectures. Our experiments on publicly available texture datasets show significant improvements in the recognition performance over other deep local descriptors in recently published benchmarks.",
"title": ""
},
{
"docid": "a79f9ad24c4f047d8ace297b681ccf0a",
"text": "BACKGROUND\nLe Fort III distraction advances the Apert midface but leaves the central concavity and vertical compression untreated. The authors propose that Le Fort II distraction and simultaneous zygomatic repositioning as a combined procedure can move the central midface and lateral orbits in independent vectors in order to improve the facial deformity. The purpose of this study was to determine whether this segmental movement results in more normal facial proportions than Le Fort III distraction.\n\n\nMETHODS\nComputed tomographic scan analyses were performed before and after distraction in patients undergoing Le Fort III distraction (n = 5) and Le Fort II distraction with simultaneous zygomatic repositioning (n = 4). The calculated axial facial ratios and vertical facial ratios relative to the skull base were compared to those of unoperated Crouzon (n = 5) and normal (n = 6) controls.\n\n\nRESULTS\nWith Le Fort III distraction, facial ratios did not change with surgery and remained lower (p < 0.01; paired t test comparison) than normal and Crouzon controls. Although the face was advanced, its shape remained abnormal. With the Le Fort II segmental movement procedure, the central face advanced and lengthened more than the lateral orbit. This differential movement changed the abnormal facial ratios that were present before surgery into ratios that were not significantly different from normal controls (p > 0.05).\n\n\nCONCLUSION\nCompared with Le Fort III distraction, Le Fort II distraction with simultaneous zygomatic repositioning normalizes the position and the shape of the Apert face.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, III.",
"title": ""
},
{
"docid": "e6610d23c69a140fdf07d1ee2e58c8a1",
"text": "Purpose – The purpose of this paper is to contribute to the body of knowledge about to what extent integrated information systems, such as ERP and SEM systems, affect the ability to solve different management accounting tasks. Design/methodology/approach – The relationship between IIS and management accounting practices was investigated quantitatively. A total of 349 responses were collected using a survey, and the data were analysed using linear regression models. Findings – Analyses indicate that ERP systems support the data collection and the organisational breadth of management accounting better than SEM systems. SEM systems, on the other hand, seem to be better at supporting reporting and analysis. In addition, modern management accounting techniques involving the use of non-financial data are better supported by an SEM system. This indicates that different management accounting tasks are supported by different parts of the IIS. Research limitations/implications – The study applies the methods of quantitative research. Thus, the internal validity is threatened. Conducting in-depth studies might be able to reduce this possible shortcoming. Practical implications – On the basis of the findings, there is a need to consider the potential of closer integration of ERP and SEM systems in order to solve management accounting tasks. Originality/value – This paper adds to the limited body of knowledge about the relationship between IIS and management accounting practices.",
"title": ""
},
{
"docid": "3ce021aa52dac518e1437d397c63bf68",
"text": "Malaria is a common and sometimes fatal disease caused by infection with Plasmodium parasites. Cerebral malaria (CM) is a most severe complication of infection with Plasmodium falciparum parasites which features a complex immunopathology that includes a prominent neuroinflammation. The experimental mouse model of cerebral malaria (ECM) induced by infection with Plasmodium berghei ANKA has been used abundantly to study the role of single genes, proteins and pathways in the pathogenesis of CM, including a possible contribution to neuroinflammation. In this review, we discuss the Plasmodium berghei ANKA infection model to study human CM, and we provide a summary of all host genetic effects (mapped loci, single genes) whose role in CM pathogenesis has been assessed in this model. Taken together, the reviewed studies document the many aspects of the immune system that are required for pathological inflammation in ECM, but also identify novel avenues for potential therapeutic intervention in CM and in diseases which feature neuroinflammation.",
"title": ""
},
{
"docid": "375ab5445e81c7982802bdb8b9cbd717",
"text": "Advances in healthcare have led to longer life expectancy and an aging population. The cost of caring for the elderly is rising progressively and threatens the economic well-being of many nations around the world. Instead of professional nursing facilities, many elderly people prefer living independently in their own homes. To enable the aging to remain active, this research explores the roles of technology in improving their quality of life while reducing the cost of healthcare to the elderly population. In particular, we propose a multi-agent service framework, called Context-Aware Service Integration System (CASIS), to integrate applications and services. This paper demonstrates several context-aware service scenarios these have been developed on the proposed framework to demonstrate how context technologies and mobile web services can help enhance the quality of care for an elder’s daily",
"title": ""
},
{
"docid": "e9e620742992a6b6aa50e6e0e5894b6f",
"text": "A significant amount of information in today’s world is stored in structured and semistructured knowledge bases. Efficient and simple methods to query these databases are essential and must not be restricted to only those who have expertise in formal query languages. The field of semantic parsing deals with converting natural language utterances to logical forms that can be easily executed on a knowledge base. In this survey, we examine the various components of a semantic parsing system and discuss prominent work ranging from the initial rule based methods to the current neural approaches to program synthesis. We also discuss methods that operate using varying levels of supervision and highlight the key challenges involved in the learning of such systems.",
"title": ""
},
{
"docid": "0b973f37e2d9c3d7f427b939db233f12",
"text": "Artificial intelligence (AI) generally and machine learning (ML) specifically demonstrate impressive practical success in many different application domains, e.g. in autonomous driving, speech recognition, or recommender systems. Deep learning approaches, trained on extremely large data sets or using reinforcement learning methods have even exceeded human performance in visual tasks, particularly on playing games such as Atari, or mastering the game of Go. Even in the medical domain there are remarkable results. However, the central problem of such models is that they are regarded as black-box models and even if we understand the underlying mathematical principles of such models they lack an explicit declarative knowledge representation, hence have difficulty in generating the underlying explanatory structures. This calls for systems enabling to make decisions transparent, understandable and explainable. A huge motivation for our approach are rising legal and privacy aspects. The new European General Data Protection Regulation (GDPR and ISO/IEC 27001) entering into force on May 25th 2018, will make black-box approaches difficult to use in business. This does not imply a ban on automatic learning approaches or an obligation to explain everything all the time, however, there must be a possibility to make the results re-traceable on demand. This is beneficial, e.g. for general understanding, for teaching, for learning, for research, and it can be helpful in court. In this paper we outline some of our research topics in the context of the relatively new area of explainable-AI with a focus on the application in medicine, which is a very special domain. This is due to the fact that medical professionals are working mostly with distributed heterogeneous and complex sources of data. In this paper we concentrate on three sources: images, *omics data and text. We argue that research in explainable-AI would generally help to facilitate the implementation of AI/ML in the medical domain, and specifically help to facilitate transparency and trust.",
"title": ""
}
] | scidocsrr |
d211f8d25ed48575a3f39ca00c42ea4c | Managing Non-Volatile Memory in Database Systems | [
{
"docid": "149b1f7861d55e90b1f423ff98e765ca",
"text": "The advent of Storage Class Memory (SCM) is driving a rethink of storage systems towards a single-level architecture where memory and storage are merged. In this context, several works have investigated how to design persistent trees in SCM as a fundamental building block for these novel systems. However, these trees are significantly slower than DRAM-based counterparts since trees are latency-sensitive and SCM exhibits higher latencies than DRAM. In this paper we propose a novel hybrid SCM-DRAM persistent and concurrent B-Tree, named Fingerprinting Persistent Tree (FPTree) that achieves similar performance to DRAM-based counterparts. In this novel design, leaf nodes are persisted in SCM while inner nodes are placed in DRAM and rebuilt upon recovery. The FPTree uses Fingerprinting, a technique that limits the expected number of in-leaf probed keys to one. In addition, we propose a hybrid concurrency scheme for the FPTree that is partially based on Hardware Transactional Memory. We conduct a thorough performance evaluation and show that the FPTree outperforms state-of-the-art persistent trees with different SCM latencies by up to a factor of 8.2. Moreover, we show that the FPTree scales very well on a machine with 88 logical cores. Finally, we integrate the evaluated trees in memcached and a prototype database. We show that the FPTree incurs an almost negligible performance overhead over using fully transient data structures, while significantly outperforming other persistent trees.",
"title": ""
}
] | [
{
"docid": "20436a21b4105700d7e95a477a22d830",
"text": "We introduce a new type of Augmented Reality games: By using a simple webcam and Computer Vision techniques, we turn a standard real game board pawns into an AR game. We use these objects as a tangible interface, and augment them with visual effects. The game logic can be performed automatically by the computer. This results in a better immersion compared to the original board game alone and provides a different experience than a video game. We demonstrate our approach on Monopoly− [1], but it is very generic and could easily be adapted to any other board game.",
"title": ""
},
{
"docid": "467bb4ffb877b4e21ad4f7fc7adbd4a6",
"text": "In this paper, a 6 × 6 planar slot array based on a hollow substrate integrated waveguide (HSIW) is presented. To eliminate the tilting of the main beam, the slot array is fed from the centre at the back of the HSIW, which results in a blockage area. To reduce the impact on sidelobe levels, a slot extrusion technique is introduced. A simplified multiway power divider is demonstrated to feed the array elements and the optimisation procedure is described. To verify the antenna design, a 6 × 6 planar array is fabricated and measured in a low temperature co-fired ceramic (LTCC) technology. The HSIW has lower loss, comparable to standard WR28, and a high gain of 17.1 dBi at 35.5 GHz has been achieved in the HSIW slot array.",
"title": ""
},
{
"docid": "572453e5febc5d45be984d7adb5436c5",
"text": "An analysis of several role playing games indicates that player quests share common elements, and that these quests may be abstractly represented using a small expressive language. One benefit of this representation is that it can guide procedural content generation by allowing quests to be generated using this abstraction, and then later converting them into a concrete form within a game’s domain.",
"title": ""
},
{
"docid": "539fb99a52838d6ce6f980b9b9703a2b",
"text": "The Blinder-Oaxaca decomposition technique is widely used to identify and quantify the separate contributions of differences in measurable characteristics to group differences in an outcome of interest. The use of a linear probability model and the standard BlinderOaxaca decomposition, however, can provide misleading estimates when the dependent variable is binary, especially when group differences are very large for an influential explanatory variable. A simulation method of performing a nonlinear decomposition that uses estimates from a logit, probit or other nonlinear model was first developed in a Journal of Labor Economics article (Fairlie 1999). This nonlinear decomposition technique has been used in nearly a thousand subsequent studies published in a wide range of fields and disciplines. In this paper, I address concerns over path dependence in using the nonlinear decomposition technique. I also present a straightforward method of incorporating sample weights in the technique. I thank Eric Aldrich and Ben Jann for comments and suggestions, and Brandon Heck for research assistance.",
"title": ""
},
{
"docid": "590e0965ca61223d5fefb82e89f24fd0",
"text": "Large software projects contain significant code duplication, mainly due to copying and pasting code. Many techniques have been developed to identify duplicated code to enable applications such as refactoring, detecting bugs, and protecting intellectual property. Because source code is often unavailable, especially for third-party software, finding duplicated code in binaries becomes particularly important. However, existing techniques operate primarily on source code, and no effective tool exists for binaries.\n In this paper, we describe the first practical clone detection algorithm for binary executables. Our algorithm extends an existing tree similarity framework based on clustering of characteristic vectors of labeled trees with novel techniques to normalize assembly instructions and to accurately and compactly model their structural information. We have implemented our technique and evaluated it on Windows XP system binaries totaling over 50 million assembly instructions. Results show that it is both scalable and precise: it analyzed Windows XP system binaries in a few hours and produced few false positives. We believe our technique is a practical, enabling technology for many applications dealing with binary code.",
"title": ""
},
{
"docid": "a4a15096e116a6afc2730d1693b1c34f",
"text": "The present study reports on the construction of a dimensional measure of gender identity (gender dysphoria) for adolescents and adults. The 27-item gender identity/gender dysphoria questionnaire for adolescents and adults (GIDYQ-AA) was administered to 389 university students (heterosexual and nonheterosexual) and 73 clinic-referred patients with gender identity disorder. Principal axis factor analysis indicated that a one-factor solution, accounting for 61.3% of the total variance, best fits the data. Factor loadings were all >or= .30 (median, .82; range, .34-.96). A mean total score (Cronbach's alpha, .97) was computed, which showed strong evidence for discriminant validity in that the gender identity patients had significantly more gender dysphoria than both the heterosexual and nonheterosexual university students. Using a cut-point of 3.00, we found the sensitivity was 90.4% for the gender identity patients and specificity was 99.7% for the controls. The utility of the GIDYQ-AA is discussed.",
"title": ""
},
{
"docid": "82234158dc94216222efa5f80eee0360",
"text": "We investigate the possibility to prove security of the well-known blind signature schemes by Chaum, and by Pointcheval and Stern in the standard model, i.e., without random oracles. We subsume these schemes under a more general class of blind signature schemes and show that finding security proofs for these schemes via black-box reductions in the standard model is hard. Technically, our result deploys meta-reduction techniques showing that black-box reductions for such schemes could be turned into efficient solvers for hard non-interactive cryptographic problems like RSA or discrete-log. Our technique yields significantly stronger impossibility results than previous meta-reductions in other settings by playing off the two security requirements of the blind signatures (unforgeability and blindness).",
"title": ""
},
{
"docid": "d0985c38f3441ca0d69af8afaf67c998",
"text": "In this paper we discuss the importance of ambiguity, uncertainty and limited information on individuals’ decision making in situations that have an impact on their privacy. We present experimental evidence from a survey study that demonstrates the impact of framing a marketing offer on participants’ willingness to accept when the consequences of the offer are uncertain and highly ambiguous.",
"title": ""
},
{
"docid": "96c1f90ff04e7fd37d8b8a16bc4b9c54",
"text": "Graph triangulation, which finds all triangles in a graph, has been actively studied due to its wide range of applications in the network analysis and data mining. With the rapid growth of graph data size, disk-based triangulation methods are in demand but little researched. To handle a large-scale graph which does not fit in memory, we must iteratively load small parts of the graph. In the existing literature, achieving the ideal cost has been considered to be impossible for billion-scale graphs due to the memory size constraint. In this paper, we propose an overlapped and parallel disk-based triangulation framework for billion-scale graphs, OPT, which achieves the ideal cost by (1) full overlap of the CPU and I/O operations and (2) full parallelism of multi-core CPU and FlashSSD I/O. In OPT, triangles in memory are called the internal triangles while triangles constituting vertices in memory and vertices in external memory are called the external triangles. At the macro level, OPT overlaps the internal triangulation and the external triangulation, while it overlaps the CPU and I/O operations at the micro level. Thereby, the cost of OPT is close to the ideal cost. Moreover, OPT instantiates both vertex-iterator and edge-iterator models and benefits from multi-thread parallelism on both types of triangulation. Extensive experiments conducted on large-scale datasets showed that (1) OPT achieved the elapsed time close to that of the ideal method with less than 7% of overhead under the limited memory budget, (2) OPT achieved linear speed-up with an increasing number of CPU cores, (3) OPT outperforms the state-of-the-art parallel method by up to an order of magnitude with 6 CPU cores, and (4) for the first time in the literature, the triangulation results are reported for a billion-vertex scale real-world graph.",
"title": ""
},
{
"docid": "6a33013c19dc59d8871e217461d479e9",
"text": "Cancer tissues in histopathology images exhibit abnormal patterns; it is of great clinical importance to label a histopathology image as having cancerous regions or not and perform the corresponding image segmentation. However, the detailed annotation of cancer cells is often an ambiguous and challenging task. In this paper, we propose a new learning method, multiple clustered instance learning (MCIL), to classify, segment and cluster cancer cells in colon histopathology images. The proposed MCIL method simultaneously performs image-level classification (cancer vs. non-cancer image), pixel-level segmentation (cancer vs. non-cancer tissue), and patch-level clustering (cancer subclasses). We embed the clustering concept into the multiple instance learning (MIL) setting and derive a principled solution to perform the above three tasks in an integrated framework. Experimental results demonstrate the efficiency and effectiveness of MCIL in analyzing colon cancers.",
"title": ""
},
{
"docid": "b32286014bb7105e62fba85a9aab9019",
"text": "PURPOSE\nSystemic thrombolysis for the treatment of acute pulmonary embolism (PE) carries an estimated 20% risk of major hemorrhage, including a 3%-5% risk of hemorrhagic stroke. The authors used evidence-based methods to evaluate the safety and effectiveness of modern catheter-directed therapy (CDT) as an alternative treatment for massive PE.\n\n\nMATERIALS AND METHODS\nThe systematic review was initiated by electronic literature searches (MEDLINE, EMBASE) for studies published from January 1990 through September 2008. Inclusion criteria were applied to select patients with acute massive PE treated with modern CDT. Modern techniques were defined as the use of low-profile devices (< or =10 F), mechanical fragmentation and/or aspiration of emboli including rheolytic thrombectomy, and intraclot thrombolytic injection if a local drug was infused. Relevant non-English language articles were translated into English. Paired reviewers assessed study quality and abstracted data. Meta-analysis was performed by using random effects models to calculate pooled estimates for complications and clinical success rates across studies. Clinical success was defined as stabilization of hemodynamics, resolution of hypoxia, and survival to hospital discharge.\n\n\nRESULTS\nFive hundred ninety-four patients from 35 studies (six prospective, 29 retrospective) met the criteria for inclusion. The pooled clinical success rate from CDT was 86.5% (95% confidence interval [CI]: 82.1%, 90.2%). Pooled risks of minor and major procedural complications were 7.9% (95% CI: 5.0%, 11.3%) and 2.4% (95% CI: 1.9%, 4.3%), respectively. Data on the use of systemic thrombolysis before CDT were available in 571 patients; 546 of those patients (95%) were treated with CDT as the first adjunct to heparin without previous intravenous thrombolysis.\n\n\nCONCLUSIONS\nModern CDT is a relatively safe and effective treatment for acute massive PE. At experienced centers, CDT should be considered as a first-line treatment for patients with massive PE.",
"title": ""
},
{
"docid": "1ee444fda98b312b0462786f5420f359",
"text": "After years of banning consumer devices (e.g., iPads and iPhone) and applications (e.g., DropBox, Evernote, iTunes) organizations are allowing employees to use their consumer tools in the workplace. This IT consumerization phenomenon will have serious consequences on IT departments which have historically valued control, security, standardization and support (Harris et al. 2012). Based on case studies of three organizations in different stages of embracing IT consumerization, this study identifies the conflicts IT consumerization creates for IT departments. All three organizations experienced similar goal and behavior conflicts, while identity conflict varied depending upon the organizations’ stage implementing consumer tools (e.g., embryonic, initiating or institutionalized). Theoretically, this study advances IT consumerization research by applying a role conflict perspective to understand consumerization’s impact on the IT department.",
"title": ""
},
{
"docid": "da9432171ceba5ae76fa76a8416b1a8f",
"text": "Social tagging on online portals has become a trend now. It has emerged as one of the best ways of associating metadata with web objects. With the increase in the kinds of web objects becoming available, collaborative tagging of such objects is also developing along new dimensions. This popularity has led to a vast literature on social tagging. In this survey paper, we would like to summarize different techniques employed to study various aspects of tagging. Broadly, we would discuss about properties of tag streams, tagging models, tag semantics, generating recommendations using tags, visualizations of tags, applications of tags and problems associated with tagging usage. We would discuss topics like why people tag, what influences the choice of tags, how to model the tagging process, kinds of tags, different power laws observed in tagging domain, how tags are created, how to choose the right tags for recommendation, etc. We conclude with thoughts on future work in the area.",
"title": ""
},
{
"docid": "318aa0dab44cca5919100033aa692cd9",
"text": "Text classification is one of the important research issues in the field of text mining, where the documents are classified with supervised knowledge. In literature we can find many text representation schemes and classifiers/learning algorithms used to classify text documents to the predefined categories. In this paper, we present various text representation schemes and compare different classifiers used to classify text documents to the predefined classes. The existing methods are compared and contrasted based on qualitative parameters viz., criteria used for classification, algorithms adopted and classification time complexities.",
"title": ""
},
{
"docid": "709853992cae8d5b5fa4c3cc86d898a7",
"text": "The rise of big data age in the Internet has led to the explosive growth of data size. However, trust issue has become the biggest problem of big data, leading to the difficulty in data safe circulation and industry development. The blockchain technology provides a new solution to this problem by combining non-tampering, traceable features with smart contracts that automatically execute default instructions. In this paper, we present a credible big data sharing model based on blockchain technology and smart contract to ensure the safe circulation of data resources.",
"title": ""
},
{
"docid": "c5f521d5e5e089261914f6784e2d77da",
"text": "Generating structured query language (SQL) from natural language is an emerging research topic. This paper presents a new learning paradigm from indirect supervision of the answers to natural language questions, instead of SQL queries. This paradigm facilitates the acquisition of training data due to the abundant resources of question-answer pairs for various domains in the Internet, and expels the difficult SQL annotation job. An endto-end neural model integrating with reinforcement learning is proposed to learn SQL generation policy within the answerdriven learning paradigm. The model is evaluated on datasets of different domains, including movie and academic publication. Experimental results show that our model outperforms the baseline models.",
"title": ""
},
{
"docid": "0ccfbd8f2b8979ec049d94fa6dddf614",
"text": "Using mobile games in education combines situated and active learning with fun in a potentially excellent manner. The effects of a mobile city game called Frequency 1550, which was developed by The Waag Society to help pupils in their first year of secondary education playfully acquire historical knowledge of medieval Amsterdam, were investigated in terms of pupil engagement in the game, historical knowledge, and motivation for History in general and the topic of the Middle Ages in particular. A quasi-experimental design was used with 458 pupils from 20 classes from five schools. The pupils in 10 of the classes played the mobile history game whereas the pupils in the other 10 classes received a regular, project-based lesson series. The results showed those pupils who played the game to be engaged and to gain significantly more knowledge about medieval Amsterdam than those pupils who received regular projectbased instruction. No significant differences were found between the two groups with respect to motivation for History or the MiddleAges. The impact of location-based technology and gamebased learning on pupil knowledge and motivation are discussed along with suggestions for future research.",
"title": ""
},
{
"docid": "9415adaa3ec2f7873a23cc2017a2f1ee",
"text": "In this paper we introduce a new unsupervised reinforcement learning method for discovering the set of intrinsic options available to an agent. This set is learned by maximizing the number of different states an agent can reliably reach, as measured by the mutual information between the set of options and option termination states. To this end, we instantiate two policy gradient based algorithms, one that creates an explicit embedding space of options and one that represents options implicitly. The algorithms also provide an explicit measure of empowerment in a given state that can be used by an empowerment maximizing agent. The algorithm scales well with function approximation and we demonstrate the applicability of the algorithm on a range of tasks.",
"title": ""
},
{
"docid": "50875a63d0f3e1796148d809b5673081",
"text": "Coreference resolution seeks to find the mentions in text that refer to the same real-world entity. This task has been well-studied in NLP, but until recent years, empirical results have been disappointing. Recent research has greatly improved the state-of-the-art. In this review, we focus on five papers that represent the current state-ofthe-art and discuss how they relate to each other and how these advances will influence future work in this area.",
"title": ""
}
] | scidocsrr |
3387d0ddea6ff80834f71a31b8234ee0 | The Scyther Tool: Verification, Falsification, and Analysis of Security Protocols | [
{
"docid": "7d634a9abe92990de8cb41a78c25d2cc",
"text": "We present a new automatic cryptographic protocol verifier based on a simple representation of the protocol by Prolog rules, and on a new efficient algorithm that determines whether a fact can be proved from these rules or not. This verifier proves secrecy properties of the protocols. Thanks to its use of unification, it avoids the problem of the state space explosion. Another advantage is that we do not need to limit the number of runs of the protocol to analyze it. We have proved the correctness of our algorithm, and have implemented it. The experimental results show that many examples of protocols of the literature, including Skeme [24], can be analyzed by our tool with very small resources: the analysis takes from less than 0.1 s for simple protocols to 23 s for the main mode of Skeme. It uses less than 2 Mb of memory in our tests.",
"title": ""
}
] | [
{
"docid": "8d0221daae5933760698b8f4f7943870",
"text": "We introduce a novel, online method to predict pedestrian trajectories using agent-based velocity-space reasoning for improved human-robot interaction and collision-free navigation. Our formulation uses velocity obstacles to model the trajectory of each moving pedestrian in a robot’s environment and improves the motion model by adaptively learning relevant parameters based on sensor data. The resulting motion model for each agent is computed using statistical inferencing techniques, including a combination of Ensemble Kalman filters and a maximum-likelihood estimation algorithm. This allows a robot to learn individual motion parameters for every agent in the scene at interactive rates. We highlight the performance of our motion prediction method in real-world crowded scenarios, compare its performance with prior techniques, and demonstrate the improved accuracy of the predicted trajectories. We also adapt our approach for collision-free robot navigation among pedestrians based on noisy data and highlight the results in our simulator.",
"title": ""
},
{
"docid": "57c705e710f99accab3d9242fddc5ac8",
"text": "Although much research has been conducted in the area of organizational commitment, few studies have explicitly examined how organizations facilitate commitment among members. Using a sample of 291 respondents from 45 firms, the results of this study show that rigorous recruitment and selection procedures and a strong, clear organizational value system are associated with higher levels of employee commitment based on internalization and identification. Strong organizational career and reward systems are related to higher levels of instrumental or compliance-based commitment.",
"title": ""
},
{
"docid": "605201e9b3401149da7e0e22fdbc908b",
"text": "Roadway traffic safety is a major concern for transportation governing agencies as well as ordinary citizens. In order to give safe driving suggestions, careful analysis of roadway traffic data is critical to find out variables that are closely related to fatal accidents. In this paper we apply statistics analysis and data mining algorithms on the FARS Fatal Accident dataset as an attempt to address this problem. The relationship between fatal rate and other attributes including collision manner, weather, surface condition, light condition, and drunk driver were investigated. Association rules were discovered by Apriori algorithm, classification model was built by Naive Bayes classifier, and clusters were formed by simple K-means clustering algorithm. Certain safety driving suggestions were made based on statistics, association rules, classification model, and clusters obtained.",
"title": ""
},
{
"docid": "9d82ce8e6630a9432054ed97752c7ec6",
"text": "Development is the powerful process involving a genome in the transformation from one egg cell to a multicellular organism with many cell types. The dividing cells manage to organize and assign themselves special, differentiated roles in a reliable manner, creating a spatio-temporal pattern and division of labor. This despite the fact that little positional information may be available to them initially to guide this patterning. Inspired by a model of developmental biologist L. Wolpert, we simulate this situation in an evolutionary setting where individuals have to grow into “French flag” patterns. The cells in our model exist in a 2-layer Potts model physical environment. Controlled by continuous genetic regulatory networks, identical for all cells of one individual, the cells can individually differ in parameters including target volume, shape, orientation, and diffusion. Intercellular communication is possible via secretion and sensing of diffusing morphogens. Evolved individuals growing from a single cell can develop the French flag pattern by setting up and maintaining asymmetric morphogen gradients – a behavior predicted by several theoretical models.",
"title": ""
},
{
"docid": "d639f6b922e24aca7229ce561e852b31",
"text": "As digital video becomes more pervasive, e cient ways of searching and annotating video according to content will be increasingly important. Such tasks arise, for example, in the management of digital video libraries for content-based retrieval and browsing. In this paper, we develop tools based on camera motion for analyzing and annotating a class of structured video using the low-level information available directly from MPEG compressed video. In particular, we show that in certain structured settings it is possible to obtain reliable estimates of camera motion by directly processing data easily obtained from the MPEG format. Working directly with the compressed video greatly reduces the processing time and enhances storage e ciency. As an illustration of this idea, we have developed a simple basketball annotation system which combines the low-level information extracted from an MPEG stream with the prior knowledge of basketball structure to provide high level content analysis, annotation and browsing for events such as wide-angle and close-up views, fast breaks, probable shots at the basket, etc. The methods used in this example should also be useful in the analysis of high-level content of structured video in other domains.",
"title": ""
},
{
"docid": "60697a4e8dd7d13147482a0992ee1862",
"text": "Static analysis of JavaScript has proven useful for a variety of purposes, including optimization, error checking, security auditing, program refactoring, and more. We propose a technique called type refinement that can improve the precision of such static analyses for JavaScript without any discernible performance impact. Refinement is a known technique that uses the conditions in branch guards to refine the analysis information propagated along each branch path. The key insight of this paper is to recognize that JavaScript semantics include many implicit conditional checks on types, and that performing type refinement on these implicit checks provides significant benefit for analysis precision.\n To demonstrate the effectiveness of type refinement, we implement a static analysis tool for reporting potential type-errors in JavaScript programs. We provide an extensive empirical evaluation of type refinement using a benchmark suite containing a variety of JavaScript application domains, ranging from the standard performance benchmark suites (Sunspider and Octane), to open-source JavaScript applications, to machine-generated JavaScript via Emscripten. We show that type refinement can significantly improve analysis precision by up to 86% without affecting the performance of the analysis.",
"title": ""
},
{
"docid": "9489210bfc8884d8290f772996629095",
"text": "Semantic interaction techniques in visual data analytics allow users to indirectly adjust model parameters by directly manipulating the visual output of the models. Many existing tools that support semantic interaction do so with a number of similar features, including using an underlying bidirectional pipeline, using a series of statistical models, and performing inverse computations to transform user interactions into model updates. We propose a visual analytics pipeline that captures these necessary features of semantic interactions. Our flexible, multi-model, bidirectional pipeline has modular functionality to enable rapid prototyping. This enables quick alterations to the type of data being visualized, models for transforming the data, semantic interaction methods, and visual encodings. To demonstrate how this pipeline can be used, we developed a series of applications that employ semantic interactions. We also discuss how the pipeline can be used or extended for future research on semantic interactions in visual analytics.",
"title": ""
},
{
"docid": "ac86e950866646a0b86d76bb3c087d0a",
"text": "In this paper, an SVM-based approach is proposed for stock market trend prediction. The proposed approach consists of two parts: feature selection and prediction model. In the feature selection part, a correlation-based SVM filter is applied to rank and select a good subset of financial indexes. And the stock indicators are evaluated based on the ranking. In the prediction model part, a so called quasi-linear SVM is applied to predict stock market movement direction in term of historical data series by using the selected subset of financial indexes as the weighted inputs. The quasi-linear SVM is an SVM with a composite quasi-linear kernel function, which approximates a nonlinear separating boundary by multi-local linear classifiers with interpolation. Experimental results on Taiwan stock market datasets demonstrate that the proposed SVM-based stock market trend prediction method produces better generalization performance over the conventional methods in terms of the hit ratio. Moreover, the experimental results also show that the proposed SVM-based stock market trend prediction system can find out a good subset and evaluate stock indicators which provide useful information for investors.",
"title": ""
},
{
"docid": "22e559b9536b375ded6516ceb93652ef",
"text": "In this paper we explore the linguistic components of toxic behavior by using crowdsourced data from over 590 thousand cases of accused toxic players in a popular match-based competition game, League of Legends. We perform a series of linguistic analyses to gain a deeper understanding of the role communication plays in the expression of toxic behavior. We characterize linguistic behavior of toxic players and compare it with that of typical players in an online competition game. We also find empirical support describing how a player transitions from typical to toxic behavior. Our findings can be helpful to automatically detect and warn players who may become toxic and thus insulate potential victims from toxic playing in advance.",
"title": ""
},
{
"docid": "5679a329a132125d697369ca4d39b93e",
"text": "This paper proposes a method to explore the design space of FinFETs with double fin heights. Our study shows that if one fin height is sufficiently larger than the other and the greatest common divisor of their equivalent transistor widths is small, the fin height pair will incur less width quantization effect and lead to better area efficiency. We design a standard cell library based on this technology using a tailored FreePDK15. With respect to a standard cell library designed with FreePDK15, about 86% of the cells designed with FinFETs of double fin heights have a smaller delay and 54% of the cells take a smaller area. We also demonstrate the advantages of FinFETs with double fin heights through chip designs using our cell library.",
"title": ""
},
{
"docid": "dca6d14c168f0836411df562444e71c5",
"text": "Obesity is a growing global health concern, with a rapid increase being observed in morbid obesity. Obesity is associated with an increased cardiovascular risk and earlier onset of cardiovascular morbidity. The growing obesity epidemic is a major source of unsustainable health costs and morbidity and mortality because of hypertension, type 2 diabetes mellitus, dyslipidemia, certain cancers and major cardiovascular diseases. Similar to obesity, hypertension is a key unfavorable health metric that has disastrous health implications: currently, hypertension is the leading contributor to global disease burden, and the direct and indirect costs of treating hypertension are exponentially higher. Poor lifestyle characteristics and health metrics often cluster together to create complex and difficult-to-treat phenotypes: excess body mass is such an example, facilitating a cascade of pathophysiological sequelae that create such as a direct obesity–hypertension link, which consequently increases cardiovascular risk. Although some significant issues regarding assessment/management of obesity remain to be addressed and the underlying mechanisms governing these disparate effects of obesity on cardiovascular disease are complex and not completely understood, a variety of factors could have a critical role. Consequently, a comprehensive and exhaustive investigation of this relationship should analyze the pathogenetic factors and pathophysiological mechanisms linking obesity to hypertension as they provide the basis for a rational therapeutic strategy in the aim to fully describe and understand the obesity–hypertension link and discuss strategies to address the potential negative consequences from the perspective of both primordial prevention and treatment for those already impacted by this condition.",
"title": ""
},
{
"docid": "be76c7f877ad43668fe411741478c43b",
"text": "With the surging of smartphone sensing, wireless networking, and mobile social networking techniques, Mobile Crowd Sensing and Computing (MCSC) has become a promising paradigm for cross-space and large-scale sensing. MCSC extends the vision of participatory sensing by leveraging both participatory sensory data from mobile devices (offline) and user-contributed data from mobile social networking services (online). Further, it explores the complementary roles and presents the fusion/collaboration of machine and human intelligence in the crowd sensing and computing processes. This article characterizes the unique features and novel application areas of MCSC and proposes a reference framework for building human-in-the-loop MCSC systems. We further clarify the complementary nature of human and machine intelligence and envision the potential of deep-fused human--machine systems. We conclude by discussing the limitations, open issues, and research opportunities of MCSC.",
"title": ""
},
{
"docid": "bd5e127cc3454bbf8a89c3f7d66fd624",
"text": "Mobile ad hoc networking (MANET) has become an exciting and important technology in recent years because of the rapid proliferation of wireless devices. MANETs are highly vulnerable to attacks due to the open medium, dynamically changing network topology, cooperative algorithms, lack of centralized monitoring and management point, and lack of a clear line of defense. In this paper, we report our progress in developing intrusion detection (ID) capabilities for MANET. Building on our prior work on anomaly detection, we investigate how to improve the anomaly detection approach to provide more details on attack types and sources. For several well-known attacks, we can apply a simple rule to identify the attack type when an anomaly is reported. In some cases, these rules can also help identify the attackers. We address the run-time resource constraint problem using a cluster-based detection scheme where periodically a node is elected as the ID agent for a cluster. Compared with the scheme where each node is its own ID agent, this scheme is much more efficient while maintaining the same level of effectiveness. We have conducted extensive experiments using the ns-2 and MobiEmu environments to validate our research.",
"title": ""
},
{
"docid": "1e8acf321f7ff3a1a496e4820364e2a8",
"text": "The liver is a central regulator of metabolism, and liver failure thus constitutes a major health burden. Understanding how this complex organ develops during embryogenesis will yield insights into how liver regeneration can be promoted and how functional liver replacement tissue can be engineered. Recent studies of animal models have identified key signaling pathways and complex tissue interactions that progressively generate liver progenitor cells, differentiated lineages and functional tissues. In addition, progress in understanding how these cells interact, and how transcriptional and signaling programs precisely coordinate liver development, has begun to elucidate the molecular mechanisms underlying this complexity. Here, we review the lineage relationships, signaling pathways and transcriptional programs that orchestrate hepatogenesis.",
"title": ""
},
{
"docid": "147b207125fcda1dece25a6c5cd17318",
"text": "In this paper we present a neural network based system for automated e-mail filing into folders and antispam filtering. The experiments show that it is more accurate than several other techniques. We also investigate the effects of various feature selection, weighting and normalization methods, and also the portability of the anti-spam filter across different users.",
"title": ""
},
{
"docid": "d2c0e71db2957621eca42bdc221ffb8f",
"text": "Financial time sequence analysis has been a popular research topic in the field of finance, data science and machine learning. It is a highly challenging due to the extreme complexity within the sequences. Mostly existing models are failed to capture its intrinsic information, factor and tendency. To improve the previous approaches, in this paper, we propose a Hidden Markov Model (HMMs) based approach to analyze the financial time sequence. The fluctuation of financial time sequence was predicted through introducing a dual-state HMMs. Dual-state HMMs models the sequence and produces the features which will be delivered to SVMs for prediction. Note that we cast a financial time sequence prediction problem to a classification problem. To evaluate the proposed approach, we use Shanghai Composite Index as the dataset for empirically experiments. The dataset was collected from 550 consecutive trading days, and is randomly split to the training set and test set. The extensively experimental results show that: when analyzing financial time sequence, the mean-square error calculated with HMMs was obviously smaller error than the compared GARCH approach. Therefore, when using HMM to predict the fluctuation of financial time sequence, it achieves higher accuracy and exhibits several attractive advantageous over GARCH approach.",
"title": ""
},
{
"docid": "1c0eaeea7e1bfc777bb6e391eb190b59",
"text": "We review machine learning (ML)-based optical performance monitoring (OPM) techniques in optical communications. Recent applications of ML-assisted OPM in different aspects of fiber-optic networking including cognitive fault detection and management, network equipment failure prediction, and dynamic planning and optimization of software-defined networks are also discussed.",
"title": ""
},
{
"docid": "6c62e51d723d523fa286e94d3037a76f",
"text": "Stochastic programming can effectively describe many deci sion making problems in uncertain environments. Unfortunately, such programs are often computationally demanding to solve. In addition, their solution can be misleading when there is ambiguity in the choice of a distribution for the ran dom parameters. In this paper, we propose a model that describes uncertainty in both the distribution form (discr ete, Gaussian, exponential, etc.) and moments (mean and cov ariance matrix). We demonstrate that for a wide range of cost fun ctio s the associated distributionally robust (or min-max ) stochastic program can be solved efficiently. Furthermore, by deriving a new confidence region for the mean and the covariance matrix of a random vector, we provide probabilis tic arguments for using our model in problems that rely heavily on historical data. These arguments are confirmed in a pra ctical example of portfolio selection, where our framework leads to better performing policies on the “true” distribut on underlying the daily returns of financial assets.",
"title": ""
},
{
"docid": "2da214ec8cd7e2380c0ee17adc3ad9fb",
"text": "Machine intelligence is an important problem to be solved for artificial intelligence to be truly impactful in our lives. While many question answering models have been explored for existing machine comprehension datasets, there has been little work with the newly released MS Marco dataset, which poses many unique challenges. We explore an end-to-end neural architecture with attention mechanisms capable of comprehending relevant information and generating text answers for MS Marco.",
"title": ""
},
{
"docid": "10fd41c0ff246545ceab663b9d9b3853",
"text": "Because structural equation modeling (SEM) has become a very popular data-analytic technique, it is important for clinical scientists to have a balanced perception of its strengths and limitations. We review several strengths of SEM, with a particular focus on recent innovations (e.g., latent growth modeling, multilevel SEM models, and approaches for dealing with missing data and with violations of normality assumptions) that underscore how SEM has become a broad data-analytic framework with flexible and unique capabilities. We also consider several limitations of SEM and some misconceptions that it tends to elicit. Major themes emphasized are the problem of omitted variables, the importance of lower-order model components, potential limitations of models judged to be well fitting, the inaccuracy of some commonly used rules of thumb, and the importance of study design. Throughout, we offer recommendations for the conduct of SEM analyses and the reporting of results.",
"title": ""
}
] | scidocsrr |
16db2a19ce63b6b189aa6980cdbb1208 | Generating Informative and Diverse Conversational Responses via Adversarial Information Maximization | [
{
"docid": "b14a77c6e663af1445e466a3e90d4e5f",
"text": "This paper proposes an approach for applying GANs to NMT. We build a conditional sequence generative adversarial net which comprises of two adversarial sub models, a generator and a discriminator. The generator aims to generate sentences which are hard to be discriminated from human-translated sentences ( i.e., the golden target sentences); And the discriminator makes efforts to discriminate the machine-generated sentences from humantranslated ones. The two sub models play a mini-max game and achieve the win-win situation when they reach a Nash Equilibrium. Additionally, the static sentence-level BLEU is utilized as the reinforced objective for the generator, which biases the generation towards high BLEU points. During training, both the dynamic discriminator and the static BLEU objective are employed to evaluate the generated sentences and feedback the evaluations to guide the learning of the generator. Experimental results show that the proposed model consistently outperforms the traditional RNNSearch and the newly emerged state-ofthe-art Transformer on English-German and Chinese-English translation tasks.",
"title": ""
}
] | [
{
"docid": "09beeeaf2d92087da10c5725bda10d2f",
"text": "We report a quantitative investigation of the visual identification and auditory comprehension deficits of 4 patients who had made a partial recovery from herpes simplex encephalitis. Clinical observations had suggested the selective impairment and selective preservation of certain categories of visual stimuli. In all 4 patients a significant discrepancy between their ability to identify inanimate objects and inability to identify living things and foods was demonstrated. In 2 patients it was possible to compare visual and verbal modalities and the same pattern of dissociation was observed in both. For 1 patient, comprehension of abstract words was significantly superior to comprehension of concrete words. Consistency of responses was recorded within a modality in contrast to a much lesser degree of consistency between modalities. We interpret our findings in terms of category specificity in the organization of meaning systems that are also modality specific semantic systems.",
"title": ""
},
{
"docid": "66fc8ff7073579314c50832a6f06c10d",
"text": "Endodontic management of the permanent immature tooth continues to be a challenge for both clinicians and researchers. Clinical concerns are primarily related to achieving adequate levels of disinfection as 'aggressive' instrumentation is contraindicated and hence there exists a much greater reliance on endodontic irrigants and medicaments. The open apex has also presented obturation difficulties, notably in controlling length. Long-term apexification procedures with calcium hydroxide have proven to be successful in retaining many of these immature infected teeth but due to their thin dentinal walls and perceived problems associated with long-term placement of calcium hydroxide, they have been found to be prone to cervical fracture and subsequent tooth loss. In recent years there has developed an increasing interest in the possibility of 'regenerating' pulp tissue in an infected immature tooth. It is apparent that although the philosophy and hope of 'regeneration' is commendable, recent histologic studies appear to suggest that the calcified material deposited on the canal wall is bone/cementum rather than dentine, hence the absence of pulp tissue with or without an odontoblast layer.",
"title": ""
},
{
"docid": "eb83f7367ba11bb5582864a08bb746ff",
"text": "Probabilistic inference algorithms for find ing the most probable explanation, the max imum aposteriori hypothesis, and the maxi mum expected utility and for updating belief are reformulated as an elimination-type al gorithm called bucket elimination. This em phasizes the principle common to many of the algorithms appearing in that literature and clarifies their relationship to nonserial dynamic programming algorithms. We also present a general way of combining condition ing and elimination within this framework. Bounds on complexity are given for all the al gorithms as a function of the problem's struc ture.",
"title": ""
},
{
"docid": "48fc7aabdd36ada053ebc2d2a1c795ae",
"text": "The Value-Based Software Engineering (VBSE) agenda described in the preceding article has the objectives of integrating value considerations into current and emerging software engineering principles and practices, and of developing an overall framework in which they compatibly reinforce each other. In this paper, we provide a case study illustrating some of the key VBSE practices, and focusing on a particular anomaly in the monitoring and control area: the \"Earned Value Management System.\" This is a most useful technique for monitoring and controlling the cost, schedule, and progress of a complex project. But it has absolutely nothing to say about the stakeholder value of the system being developed. The paper introduces an example order-processing software project, and shows how the use of Benefits Realization Analysis, stake-holder value proposition elicitation and reconciliation, and business case analysis provides a framework for stakeholder-earned-value monitoring and control.",
"title": ""
},
{
"docid": "8cb3aed5fab2f5d54195b0e4c2a9a4c6",
"text": "This paper describes a tri-modal asymmetric bidirectional differential memory interface that supports data rates of up to 20 Gbps over 3\" FR4 PCB channels while achieving power efficiency of 6.1 mW/Gbps at full speed. The interface also accommodates single-ended standard DDR3 and GDDR5 signaling at 1.6-Gbps and 6.4-Gbps operations, respectively, without package change. The compact, low-power and high-speed tri-modal interface is enabled by substantial reuse of the circuit elements among various signaling modes, particularly in the wide-band clock generation and distribution system and the multi-modal driver output stage, as well as the use of fast equalization for post-cursor intersymbol interference (ISI) mitigation. In the high-speed differential mode, the system utilizes a 1-tap transmit equalizer during a WRITE operation to the memory. In contrast, during a memory READ operation, it employs a linear equalizer (LEQ) with 3 dB of peaking as well as a calibrated high-speed 1-tap predictive decision feedback equalizer (prDFE), while no transmitter equalization is assumed for the memory. The prototype tri-modal interface implemented in a 40-nm CMOS process, consists of 16 data links and achieves more than 2.5 × energy-efficient memory transactions at 16 Gbps compared to a previous single-mode generation.",
"title": ""
},
{
"docid": "9464f2e308b5c8ab1f2fac1c008042c0",
"text": "Data governance has become a significant approach that drives decision making in public organisations. Thus, the loss of data governance is a concern to decision makers, acting as a barrier to achieving their business plans in many countries and also influencing both operational and strategic decisions. The adoption of cloud computing is a recent trend in public sector organisations, that are looking to move their data into the cloud environment. The literature shows that data governance is one of the main concerns of decision makers who are considering adopting cloud computing; it also shows that data governance in general and for cloud computing in particular is still being researched and requires more attention from researchers. However, in the absence of a cloud data governance framework, this paper seeks to develop a conceptual framework for cloud data governance-driven decision making in the public sector.",
"title": ""
},
{
"docid": "dd0562e604e6db2c31132f1ffcd94d4f",
"text": "a r t i c l e i n f o Keywords: Data quality Utility Cost–benefit analysis Data warehouse CRM Managing data resources at high quality is usually viewed as axiomatic. However, we suggest that, since the process of improving data quality should attempt to maximize economic benefits as well, high data quality is not necessarily economically-optimal. We demonstrate this argument by evaluating a microeconomic model that links the handling of data quality defects, such as outdated data and missing values, to economic outcomes: utility, cost, and net-benefit. The evaluation is set in the context of Customer Relationship Management (CRM) and uses large samples from a real-world data resource used for managing alumni relations. Within this context, our evaluation shows that all model parameters can be measured, and that all model-related assumptions are, largely, well supported. The evaluation confirms the assumption that the optimal quality level, in terms of maximizing net-benefits, is not necessarily the highest possible. Further, the evaluation process contributes some important insights for revising current data acquisition and maintenance policies. Maintaining data resources at a high quality level is a critical task in managing organizational information systems (IS). Data quality (DQ) significantly affects IS adoption and the success of data utilization [10,26]. Data quality management (DQM) has been examined from a variety of technical, functional, and organizational perspectives [22]. Achieving high quality is the primary objective of DQM efforts, and much research in DQM focuses on methodologies, tools and techniques for improving quality. Recent studies (e.g., [14,19]) have suggested that high DQ, although having clear merits, should not necessarily be the only objective to consider when assessing DQM alternatives, particularly in an IS that manages large datasets. As shown in these studies, maximizing economic benefits, based on the value gained from improving quality, and the costs involved in improving quality, may conflict with the target of achieving a high data quality level. Such findings inspire the need to link DQM decisions to economic outcomes and tradeoffs, with the goal of identifying more cost-effective DQM solutions. The quality of organizational data is rarely perfect as data, when captured and stored, may suffer from such defects as inaccuracies and missing values [22]. Its quality may further deteriorate as the real-world items that the data describes may change over time (e.g., a customer changing address, profession, and/or marital status). A plethora of studies have underscored the negative effect of low …",
"title": ""
},
{
"docid": "bdae3fb85df9de789a9faa2c08a5c0fb",
"text": "The rapid, exponential growth of modern electronics has brought about profound changes to our daily lives. However, maintaining the growth trend now faces significant challenges at both the fundamental and practical levels [1]. Possible solutions include More Moore?developing new, alternative device structures and materials while maintaining the same basic computer architecture, and More Than Moore?enabling alternative computing architectures and hybrid integration to achieve increased system functionality without trying to push the devices beyond limits. In particular, an increasing number of computing tasks today are related to handling large amounts of data, e.g. image processing as an example. Conventional von Neumann digital computers, with separate memory and processer units, become less and less efficient when large amount of data have to be moved around and processed quickly. Alternative approaches such as bio-inspired neuromorphic circuits, with distributed computing and localized storage in networks, become attractive options [2]?[6].",
"title": ""
},
{
"docid": "7f54157faf8041436174fa865d0f54a8",
"text": "The goal of robot learning from demonstra tion is to have a robot learn from watching a demonstration of the task to be performed In our approach to learning from demon stration the robot learns a reward function from the demonstration and a task model from repeated attempts to perform the task A policy is computed based on the learned reward function and task model Lessons learned from an implementation on an an thropomorphic robot arm using a pendulum swing up task include simply mimicking demonstrated motions is not adequate to per form this task a task planner can use a learned model and reward function to com pute an appropriate policy this model based planning process supports rapid learn ing both parametric and nonparametric models can be learned and used and in corporating a task level direct learning com ponent which is non model based in addi tion to the model based planner is useful in compensating for structural modeling errors and slow model learning",
"title": ""
},
{
"docid": "7882d2d18bc8a30a63e9fdb726c48ff1",
"text": "Flying ad-hoc networks (FANETs) are a very vibrant research area nowadays. They have many military and civil applications. Limited battery energy and the high mobility of micro unmanned aerial vehicles (UAVs) represent their two main problems, i.e., short flight time and inefficient routing. In this paper, we try to address both of these problems by means of efficient clustering. First, we adjust the transmission power of the UAVs by anticipating their operational requirements. Optimal transmission range will have minimum packet loss ratio (PLR) and better link quality, which ultimately save the energy consumed during communication. Second, we use a variant of the K-Means Density clustering algorithm for selection of cluster heads. Optimal cluster heads enhance the cluster lifetime and reduce the routing overhead. The proposed model outperforms the state of the art artificial intelligence techniques such as Ant Colony Optimization-based clustering algorithm and Grey Wolf Optimization-based clustering algorithm. The performance of the proposed algorithm is evaluated in term of number of clusters, cluster building time, cluster lifetime and energy consumption.",
"title": ""
},
{
"docid": "f7a2f86526209860d7ea89d3e7f2b576",
"text": "Natural Language Processing continues to grow in popularity in a range of research and commercial applications, yet managing the wide array of potential NLP components remains a difficult problem. This paper describes CURATOR, an NLP management framework designed to address some common problems and inefficiencies associated with building NLP process pipelines; and EDISON, an NLP data structure library in Java that provides streamlined interactions with CURATOR and offers a range of useful supporting functionality.",
"title": ""
},
{
"docid": "c1fa2b5da311edb241dca83edcf327a4",
"text": "The growing amount of web-based attacks poses a severe threat to the security of web applications. Signature-based detection techniques increasingly fail to cope with the variety and complexity of novel attack instances. As a remedy, we introduce a protocol-aware reverse HTTP proxy TokDoc (the token doctor), which intercepts requests and decides on a per-token basis whether a token requires automatic \"healing\". In particular, we propose an intelligent mangling technique, which, based on the decision of previously trained anomaly detectors, replaces suspicious parts in requests by benign data the system has seen in the past. Evaluation of our system in terms of accuracy is performed on two real-world data sets and a large variety of recent attacks. In comparison to state-of-the-art anomaly detectors, TokDoc is not only capable of detecting most attacks, but also significantly outperforms the other methods in terms of false positives. Runtime measurements show that our implementation can be deployed as an inline intrusion prevention system.",
"title": ""
},
{
"docid": "0cdf08bd9c2e63f0c9bb1dd7472a23a8",
"text": "Under natural viewing conditions, human observers shift their gaze to allocate processing resources to subsets of the visual input. Many computational models try to predict such voluntary eye and attentional shifts. Although the important role of high level stimulus properties (e.g., semantic information) in search stands undisputed, most models are based on low-level image properties. We here demonstrate that a combined model of face detection and low-level saliency significantly outperforms a low-level model in predicting locations humans fixate on, based on eye-movement recordings of humans observing photographs of natural scenes, most of which contained at least one person. Observers, even when not instructed to look for anything particular, fixate on a face with a probability of over 80% within their first two fixations; furthermore, they exhibit more similar scanpaths when faces are present. Remarkably, our model’s predictive performance in images that do not contain faces is not impaired, and is even improved in some cases by spurious face detector responses.",
"title": ""
},
{
"docid": "11d130f2b757bab08c4d41169c29b3d5",
"text": "We present an approach to training a joint syntactic and semantic parser that combines syntactic training information from CCGbank with semantic training information from a knowledge base via distant supervision. The trained parser produces a full syntactic parse of any sentence, while simultaneously producing logical forms for portions of the sentence that have a semantic representation within the parser’s predicate vocabulary. We demonstrate our approach by training a parser whose semantic representation contains 130 predicates from the NELL ontology. A semantic evaluation demonstrates that this parser produces logical forms better than both comparable prior work and a pipelined syntax-then-semantics approach. A syntactic evaluation on CCGbank demonstrates that the parser’s dependency Fscore is within 2.5% of state-of-the-art.",
"title": ""
},
{
"docid": "729b29b5ab44102541f3ebf8d24efec3",
"text": "In the cognitive neuroscience literature on the distinction between categorical and coordinate spatial relations, it has often been observed that categorical spatial relations are referred to linguistically by words like English prepositions, many of which specify binary oppositions-e.g., above/below, left/right, on/off, in/out. However, the actual semantic content of English prepositions, and of comparable word classes in other languages, has not been carefully considered. This paper has three aims. The first and most important aim is to inform cognitive neuroscientists interested in spatial representation about relevant research on the kinds of categorical spatial relations that are encoded in the 6000+ languages of the world. Emphasis is placed on cross-linguistic similarities and differences involving deictic relations, topological relations, and projective relations, the last of which are organized around three distinct frames of reference--intrinsic, relative, and absolute. The second aim is to review what is currently known about the neuroanatomical correlates of linguistically encoded categorical spatial relations, with special focus on the left supramarginal and angular gyri, and to suggest ways in which cross-linguistic data can help guide future research in this area of inquiry. The third aim is to explore the interface between language and other mental systems, specifically by summarizing studies which suggest that although linguistic and perceptual/cognitive representations of space are at least partially distinct, language nevertheless has the power to bring about not only modifications of perceptual sensitivities but also adjustments of cognitive styles.",
"title": ""
},
{
"docid": "0acf9ef6e025805a76279d1c6c6c55e7",
"text": "Android mobile devices are enjoying a lion's market share in smartphones and mobile devices. This also attracts malware writers to target the Android platform. Recently, we have discovered a new Android malware distribution channel: releasing malicious firmwares with pre-installed malware to the wild. This poses significant risk since users of mobile devices cannot change the content of the malicious firmwares. Furthermore, pre-installed applications have \" more permissions\" (i.e., silent installation) than other legitimate mobile apps, so they can download more malware or access users' confidential information. To understand and address this new form of malware distribution channel, we design and implement \"DroidRay\": a security evaluation system for customized Android firmwares. DroidRay uses both static and dynamic analyses to evaluate the firmware security on both the application and system levels. To understand the impact of this new malware distribution channel, we analyze 250 Android firmwares and 24,009 pre-installed applications. We reveal how the malicious firmware and pre-installed malware are injected, and discovered 1,947 (8.1%) pre-installed applications have signature vulnerability and 19 (7.6%) firmwares contain pre-installed malware. In addition, 142 (56.8%) firmwares have the default signature vulnerability, five (2.0%) firmwares contain malicious hosts file, at most 40 (16.0%) firmwares have the native level privilege escalation vulnerability and at least 249 (99.6%) firmwares have the Java level privilege escalation vulnerability. Lastly, we investigate a real-world case of a pre-installed zero-day malware known as CEPlugnew, which involves 348,018 infected Android smartphones, and we show its degree and geographical penetration. This shows the significance of this new malware distribution channel, and DroidRay is an effective tool to combat this new form of malware spreading.",
"title": ""
},
{
"docid": "00eaa437ad2821482644ee75cfe6d7b3",
"text": "A 65nm digitally-modulated polar transmitter incorporates a fully-integrated 2.4GHz efficient switching Inverse Class D power amplifier. Low power digital filtering on the amplitude path helps remove spectral images for coexistence. The transmitter integrates the complete LO distribution network and digital drivers. Operating from a 1-V supply, the PA has 21.8dBm peak output power with 44% efficiency. Simple static predistortion helps the transmitter meet EVM and mask requirements of 802.11g 54Mbps WLAN standard with 18% average efficiency.",
"title": ""
},
{
"docid": "8756441420669a6845254242030e0a79",
"text": "We propose a recurrent neural network (RNN) based model for image multi-label classification. Our model uniquely integrates and learning of visual attention and Long Short Term Memory (LSTM) layers, which jointly learns the labels of interest and their co-occurrences, while the associated image regions are visually attended. Different from existing approaches utilize either model in their network architectures, training of our model does not require pre-defined label orders. Moreover, a robust inference process is introduced so that prediction errors would not propagate and thus affect the performance. Our experiments on NUS-WISE and MS-COCO datasets confirm the design of our network and its effectiveness in solving multi-label classification problems.",
"title": ""
},
{
"docid": "6987cb6d888d439220938d805cae29b0",
"text": "Entity Linking aims to link entity mentions in texts to knowledge bases, and neural models have achieved recent success in this task. However, most existing methods rely on local contexts to resolve entities independently, which may usually fail due to the data sparsity of local information. To address this issue, we propose a novel neural model for collective entity linking, named as NCEL. NCEL applies Graph Convolutional Network to integrate both local contextual features and global coherence information for entity linking. To improve the computation efficiency, we approximately perform graph convolution on a subgraph of adjacent entity mentions instead of those in the entire text. We further introduce an attention scheme to improve the robustness of NCEL to data noise and train the model on Wikipedia hyperlinks to avoid overfitting and domain bias. In experiments, we evaluate NCEL on five publicly available datasets to verify the linking performance as well as generalization ability. We also conduct an extensive analysis of time complexity, the impact of key modules, and qualitative results, which demonstrate the effectiveness and efficiency of our proposed method.",
"title": ""
},
{
"docid": "3840b8c709a8b2780b3d4a1b56bd986b",
"text": "A new scheme to resolve the intra-cell pilot collision for machine-to-machine (M2M) communication in crowded massive multiple-input multiple-output (MIMO) systems is proposed. The proposed scheme permits those failed user equipments (UEs), judged by a strongest-user collision resolution (SUCR) protocol, to contend for the idle pilots, i.e., the pilots that are not selected by any UE in the initial step. This scheme is called as SUCR combined idle pilots access (SUCR-IPA). To analyze the performance of the SUCR-IPA scheme, we develop a simple method to compute the access success probability of the UEs in each random access slot. The simulation results coincide well with the analysis. It is also shown that, compared with the SUCR protocol, the proposed SUCR-IPA scheme increases the throughput of the system significantly, and thus decreases the number of access attempts dramatically.",
"title": ""
}
] | scidocsrr |
cc5c0ab4f614ed9d050a47dfa842d177 | Supervised topic models for multi-label classification | [
{
"docid": "c44f060f18e55ccb1b31846e618f3282",
"text": "In multi-label classification, each sample can be associated with a set of class labels. When the number of labels grows to the hundreds or even thousands, existing multi-label classification methods often become computationally inefficient. In recent years, a number of remedies have been proposed. However, they are based either on simple dimension reduction techniques or involve expensive optimization problems. In this paper, we address this problem by selecting a small subset of class labels that can approximately span the original label space. This is performed by an efficient randomized sampling procedure where the sampling probability of each class label reflects its importance among all the labels. Experiments on a number of realworld multi-label data sets with many labels demonstrate the appealing performance and efficiency of the proposed algorithm.",
"title": ""
}
] | [
{
"docid": "d437e700df5c3a4d824b177c95def4ac",
"text": "In this paper, we introduce a system called GamePad that can be used to explore the application of machine learning methods to theorem proving in the Coq proof assistant. Interactive theorem provers such as Coq enable users to construct machine-checkable proofs in a step-by-step manner. Hence, they provide an opportunity to explore theorem proving at a human level of abstraction. We use GamePad to synthesize proofs for a simple algebraic rewrite problem and train baseline models for a formalization of the Feit-Thompson theorem. We address position evaluation (i.e., predict the number of proof steps left) and tactic prediction (i.e., predict the next proof step) tasks, which arise naturally in human-level theorem proving.",
"title": ""
},
{
"docid": "d7acbf20753e2c9c50b2ab0683d7f03a",
"text": "In this paper, we propose a very deep fully convolutional encoding-decoding framework for image restoration such as denoising and super-resolution. The network is composed of multiple layers of convolution and de-convolution operators, learning end-to-end mappings from corrupted images to the original ones. The convolutional layers act as the feature extractor, which capture the abstraction of image contents while eliminating noises/corruptions. De-convolutional layers are then used to recover the image details. We propose to symmetrically link convolutional and de-convolutional layers with skip-layer connections, with which the training converges much faster and attains a higher-quality local optimum. First, The skip connections allow the signal to be back-propagated to bottom layers directly, and thus tackles the problem of gradient vanishing, making training deep networks easier and achieving restoration performance gains consequently. Second, these skip connections pass image details from convolutional layers to de-convolutional layers, which is beneficial in recovering the original image. Significantly, with the large capacity, we can handle different levels of noises using a single model. Experimental results show that our network achieves better performance than all previously reported state-of-the-art methods.",
"title": ""
},
{
"docid": "3e7a9fa9f575270a5cdf8f869d4a75dd",
"text": "The recently proposed semi-supervised learning methods exploit consistency loss between different predictions under random perturbations. Typically, a student model is trained to predict consistently with the targets generated by a noisy teacher. However, they ignore the fact that not all training data provide meaningful and reliable information in terms of consistency. For misclassified data, blindly minimizing the consistency loss around them can hinder learning. In this paper, we propose a novel certaintydriven consistency loss (CCL) to dynamically select data samples that have relatively low uncertainty. Specifically, we measure the variance or entropy of multiple predictions under random augmentations and dropout as an estimation of uncertainty. Then, we introduce two approaches, i.e. Filtering CCL and Temperature CCL to guide the student learn more meaningful and certain/reliable targets, and hence improve the quality of the gradients backpropagated to the student. Experiments demonstrate the advantages of the proposed method over the state-of-the-art semi-supervised deep learning methods on three benchmark datasets: SVHN, CIFAR10, and CIFAR100. Our method also shows robustness to noisy labels.",
"title": ""
},
{
"docid": "4f6ce186679f9ab4f0aaada92ccf5a84",
"text": "Sensor networks have a significant potential in diverse applications some of which are already beginning to be deployed in areas such as environmental monitoring. As the application logic becomes more complex, programming difficulties are becoming a barrier to adoption of these networks. The difficulty in programming sensor networks is not only due to their inherently distributed nature but also the need for mechanisms to address their harsh operating conditions such as unreliable communications, faulty nodes, and extremely constrained resources. Researchers have proposed different programming models to overcome these difficulties with the ultimate goal of making programming easy while making full use of available resources. In this article, we first explore the requirements for programming models for sensor networks. Then we present a taxonomy of the programming models, classified according to the level of abstractions they provide. We present an evaluation of various programming models for their responsiveness to the requirements. Our results point to promising efforts in the area and a discussion of the future directions of research in this area.",
"title": ""
},
{
"docid": "993d7ee2498f7b19ae70850026c0a0c4",
"text": "We present ALL-IN-1, a simple model for multilingual text classification that does not require any parallel data. It is based on a traditional Support Vector Machine classifier exploiting multilingual word embeddings and character n-grams. Our model is simple, easily extendable yet very effective, overall ranking 1st (out of 12 teams) in the IJCNLP 2017 shared task on customer feedback analysis in four languages: English, French, Japanese and Spanish.",
"title": ""
},
{
"docid": "65bf805e87a02c4e733c7e6cefbf8c7d",
"text": "We describe a nonlinear observer-based design for control of vehicle traction that is important in providing safety and obtaining desired longitudinal vehicle motion. First, a robust sliding mode controller is designed to maintain the wheel slip at any given value. Simulations show that longitudinal traction controller is capable of controlling the vehicle with parameter deviations and disturbances. The direct state feedback is then replaced with nonlinear observers to estimate the vehicle velocity from the output of the system (i.e., wheel velocity). The nonlinear model of the system is shown locally observable. The effects and drawbacks of the extended Kalman filters and sliding observers are shown via simulations. The sliding observer is found promising while the extended Kalman filter is unsatisfactory due to unpredictable changes in the road conditions.",
"title": ""
},
{
"docid": "3d3bc851a71f7caf96343004f1d584fe",
"text": "Next generation sequencing (NGS) has been leading the genetic study of human disease into an era of unprecedented productivity. Many bioinformatics pipelines have been developed to call variants from NGS data. The performance of these pipelines depends crucially on the variant caller used and on the calling strategies implemented. We studied the performance of four prevailing callers, SAMtools, GATK, glftools and Atlas2, using single-sample and multiple-sample variant-calling strategies. Using the same aligner, BWA, we built four single-sample and three multiple-sample calling pipelines and applied the pipelines to whole exome sequencing data taken from 20 individuals. We obtained genotypes generated by Illumina Infinium HumanExome v1.1 Beadchip for validation analysis and then used Sanger sequencing as a \"gold-standard\" method to resolve discrepancies for selected regions of high discordance. Finally, we compared the sensitivity of three of the single-sample calling pipelines using known simulated whole genome sequence data as a gold standard. Overall, for single-sample calling, the called variants were highly consistent across callers and the pairwise overlapping rate was about 0.9. Compared with other callers, GATK had the highest rediscovery rate (0.9969) and specificity (0.99996), and the Ti/Tv ratio out of GATK was closest to the expected value of 3.02. Multiple-sample calling increased the sensitivity. Results from the simulated data suggested that GATK outperformed SAMtools and glfSingle in sensitivity, especially for low coverage data. Further, for the selected discrepant regions evaluated by Sanger sequencing, variant genotypes called by exome sequencing versus the exome array were more accurate, although the average variant sensitivity and overall genotype consistency rate were as high as 95.87% and 99.82%, respectively. In conclusion, GATK showed several advantages over other variant callers for general purpose NGS analyses. The GATK pipelines we developed perform very well.",
"title": ""
},
{
"docid": "ce6041954779f1f5141cee0548ea8491",
"text": "In vivo exposure is the recommended treatment of choice for specific phobias; however, it demonstrates a high attrition rate and is not effective in all instances. The use of virtual reality (VR) has improved the acceptance of exposure treatments to some individuals. Augmented reality (AR) is a variation of VR wherein the user sees the real world augmented by virtual elements. The present study tests an AR system in the short (posttreatment) and long term (3, 6, and 12 months) for the treatment of cockroach phobia using a multiple baseline design across individuals (with 6 participants). The AR exposure therapy was applied using the \"one-session treatment\" guidelines developed by Ost, Salkovskis, and Hellström (1991). Results showed that AR was effective at treating cockroach phobia. All participants improved significantly in all outcome measures after treatment; furthermore, the treatment gains were maintained at 3, 6, and 12-month follow-up periods. This study discusses the advantages of AR as well as its potential applications.",
"title": ""
},
{
"docid": "4029bbbff0c115c8bf8c787cafc72ae0",
"text": "In recent times, data is growing rapidly in every domain such as news, social media, banking, education, etc. Due to the excessiveness of data, there is a need of automatic summarizer which will be capable to summarize the data especially textual data in original document without losing any critical purposes. Text summarization is emerged as an important research area in recent past. In this regard, review of existing work on text summarization process is useful for carrying out further research. In this paper, recent literature on automatic keyword extraction and text summarization are presented since text summarization process is highly depend on keyword extraction. This literature includes the discussion about different methodology used for keyword extraction and text summarization. It also discusses about different databases used for text summarization in several domains along with evaluation matrices. Finally, it discusses briefly about issues and research challenges faced by researchers along with future direction.",
"title": ""
},
{
"docid": "688ff3348e2d5af9b0f388fd9a99f1bf",
"text": "The core issue in this article is the empirical tracing of the connection between a variety of value orientations and the life course choices concerning living arrangements and family formation. The existence of such a connection is a crucial element in the socalled theory of the Second Demographic Transition (SDT). The underlying model is of a recursive nature and based on two effects: firstly, values-based self-selection of individuals into alternative living arrangement or household types, and secondly, event-based adaptation of values to the newly chosen household situation. Any testing of such a recursive model requires the use of panel data. Failing these, only “footprints” of the two effects can be derived and traced in cross-sectional data. Here, use is made of the latest round of the European Values Surveys of 1999-2000, mainly because no other source has such a large selection of value items. The comparison involves two Iberian countries, three western European ones, and two Scandinavian samples. The profiles of the value orientations are based on 80 items which cover a variety of dimensions (e.g. religiosity, ethics, civil morality, family values, social cohesion, expressive values, gender role orientations, trust in institutions, protest proneness and post-materialism, tolerance for minorities etc.). These are analysed according to eight different household positions based on the transitions to independent living, cohabitation and marriage, parenthood and union dissolution. Multiple Classification Analysis (MCA) is used to control for confounding effects of other relevant covariates (age, gender, education, economic activity and stratification, urbanity). Subsequently, 1 Interface Demography, Vrije Universiteit Brussel. E-mail: jrsurkyn@vub.ac.be 2 Interface Demography, Vrije Universiteit Brussel. E-mail: rlestha@vub.ac.be Demographic Research – Special Collection 3: Article 3 -Contemporary Research on European Fertility: Perspectives and Developments -46 http://www.demographic-research.org Correspondence Analysis is used to picture the proximities between the 80 value items and the eight household positions. Very similar value profiles according to household position are found for the three sets of countries, despite the fact that the onset of the SDT in Scandinavia precedes that in the Iberian countries by roughly twenty years. Moreover, the profile similarity remains intact when the comparison is extended to an extra group of seven formerly communist countries in central and Eastern Europe. Such pattern robustness is supportive of the contention that the ideational or “cultural” factor is indeed a nonredundant and necessary (but not a sufficient) element in the explanation of the demographic changes of the SDT. Moreover, the profile similarity also points in the direction of the operation of comparable mechanisms of selection and adaptation in the contrasting European settings. Demographic Research – Special Collection 3: Article 3 -Contemporary Research on European Fertility: Perspectives and Developments -http://www.demographic-research.org 47",
"title": ""
},
{
"docid": "bb49674d0a1f36e318d27525b693e51d",
"text": "prevent attackers from gaining control of the system using well established techniques such as; perimeter-based fire walls, redundancy and replications, and encryption. However, given sufficient time and resources, all these methods can be defeated. Moving Target Defense (MTD), is a defensive strategy that aims to reduce the need to continuously fight against attacks by disrupting attackers gain-loss balance. We present Mayflies, a bio-inspired generic MTD framework for distributed systems on virtualized cloud platforms. The framework enables systems designed to defend against attacks for their entire runtime to systems that avoid attacks in time intervals. We discuss the design, algorithms and the implementation of the framework prototype. We illustrate the prototype with a quorum-based Byzantime Fault Tolerant system and report the preliminary results.",
"title": ""
},
{
"docid": "6e05f588374b57f95524b04fe5600917",
"text": "Matrix factorization (MF) models and their extensions are standard in modern recommender systems. MF models decompose the observed user-item interaction matrix into user and item latent factors. In this paper, we propose a co-factorization model, CoFactor, which jointly decomposes the user-item interaction matrix and the item-item co-occurrence matrix with shared item latent factors. For each pair of items, the co-occurrence matrix encodes the number of users that have consumed both items. CoFactor is inspired by the recent success of word embedding models (e.g., word2vec) which can be interpreted as factorizing the word co-occurrence matrix. We show that this model significantly improves the performance over MF models on several datasets with little additional computational overhead. We provide qualitative results that explain how CoFactor improves the quality of the inferred factors and characterize the circumstances where it provides the most significant improvements.",
"title": ""
},
{
"docid": "058db5e1a8c58a9dc4b68f6f16847abc",
"text": "Insurance companies must manage millions of claims per year. While most of these claims are non-fraudulent, fraud detection is core for insurance companies. The ultimate goal is a predictive model to single out the fraudulent claims and pay out the non-fraudulent ones immediately. Modern machine learning methods are well suited for this kind of problem. Health care claims often have a data structure that is hierarchical and of variable length. We propose one model based on piecewise feed forward neural networks (deep learning) and another model based on self-attention neural networks for the task of claim management. We show that the proposed methods outperform bagof-words based models, hand designed features, and models based on convolutional neural networks, on a data set of two million health care claims. The proposed self-attention method performs the best.",
"title": ""
},
{
"docid": "f33134ec67d1237a39e91c0fd5bfb25a",
"text": "This research is driven by the assumption made in several user resistance studies that employees are generally resistant to change. It investigates the extent to which employees’ resistance to IT-induced change is caused by individuals’ predisposition to resist change. We develop a model of user resistance that assumes the influence of dispositional resistance to change on perceptual resistance to change, perceived ease of use, and usefulness, which in turn influence user resistance behavior. Using an empirical study of 106 HR employees forced to use a new human resources information system, the analysis reveals that 17.0–22.1 percent of the variance in perceived ease of use, usefulness, and perceptual resistance to change can be explained by the dispositional inclination to change initiatives. The four dimensions of dispositional resistance to change – routine seeking, emotional reaction, short-term focus and cognitive rigidity – have an even stronger effect than other common individual variables, such as age, gender, or working experiences. We conclude that dispositional resistance to change is an example of an individual difference that is instrumental in explaining a large proportion of the variance in beliefs about and user resistance to mandatory IS in organizations, which has implications for theory, practice, and future research. Journal of Information Technology advance online publication, 16 June 2015; doi:10.1057/jit.2015.17",
"title": ""
},
{
"docid": "e7e1fd16be5186474dc9e1690347716a",
"text": "One-stage object detectors such as SSD or YOLO already have shown promising accuracy with small memory footprint and fast speed. However, it is widely recognized that one-stage detectors have difficulty in detecting small objects while they are competitive with two-stage methods on large objects. In this paper, we investigate how to alleviate this problem starting from the SSD framework. Due to their pyramidal design, the lower layer that is responsible for small objects lacks strong semantics(e.g contextual information). We address this problem by introducing a feature combining module that spreads out the strong semantics in a top-down manner. Our final model StairNet detector unifies the multi-scale representations and semantic distribution effectively. Experiments on PASCAL VOC 2007 and PASCAL VOC 2012 datasets demonstrate that Stair-Net significantly improves the weakness of SSD and outperforms the other state-of-the-art one-stage detectors.",
"title": ""
},
{
"docid": "4d2bfda62140962af079817fc7dbd43e",
"text": "Online health communities and support groups are a valuable source of information for users suffering from a physical or mental illness. Users turn to these forums for moral support or advice on specific conditions, symptoms, or side effects of medications. This paper describes and studies the linguistic patterns of a community of support forum users over time focused on the used of anxious related words. We introduce a methodology to identify groups of individuals exhibiting linguistic patterns associated with anxiety and the correlations between this linguistic pattern and other word usage. We find some evidence that participation in these groups does yield positive effects on their users by reducing the frequency of anxious related word used over time.",
"title": ""
},
{
"docid": "0b01870332dd93897fbcecb9254c40b9",
"text": "Computer-aided detection or decision support systems aim to improve breast cancer screening programs by helping radiologists to evaluate digital mammography (DM) exams. Commonly such methods proceed in two steps: selection of candidate regions for malignancy, and later classification as either malignant or not. In this study, we present a candidate detection method based on deep learning to automatically detect and additionally segment soft tissue lesions in DM. A database of DM exams (mostly bilateral and two views) was collected from our institutional archive. In total, 7196 DM exams (28294 DM images) acquired with systems from three different vendors (General Electric, Siemens, Hologic) were collected, of which 2883 contained malignant lesions verified with histopathology. Data was randomly split on an exam level into training (50%), validation (10%) and testing (40%) of deep neural network with u-net architecture. The u-net classifies the image but also provides lesion segmentation. Free receiver operating characteristic (FROC) analysis was used to evaluate the model, on an image and on an exam level. On an image level, a maximum sensitivity of 0.94 at 7.93 false positives (FP) per image was achieved. Similarly, per exam a maximum sensitivity of 0.98 at 7.81 FP per image was achieved. In conclusion, the method could be used as a candidate selection model with high accuracy and with the additional information of lesion segmentation.",
"title": ""
},
{
"docid": "bf239cb017be0b2137b0b4fd1f1d4247",
"text": "Network function virtualization was recently proposed to improve the flexibility of network service provisioning and reduce the time to market of new services. By leveraging virtualization technologies and commercial off-the-shelf programmable hardware, such as general-purpose servers, storage, and switches, NFV decouples the software implementation of network functions from the underlying hardware. As an emerging technology, NFV brings several challenges to network operators, such as the guarantee of network performance for virtual appliances, their dynamic instantiation and migration, and their efficient placement. In this article, we provide a brief overview of NFV, explain its requirements and architectural framework, present several use cases, and discuss the challenges and future directions in this burgeoning research area.",
"title": ""
},
{
"docid": "3e7adbc4ea0bb5183792efd19d3c23a5",
"text": "a Faculty of Science and Information Technology, Al-Zaytoona University of Jordan, Amman, Jordan b School of Informatics, University of Bradford, Bradford BD7 1DP, United Kingdom c Information & Computer Science Department, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia d Centre for excellence in Signal and Image Processing, Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow, G1 1XW, United Kingdom",
"title": ""
},
{
"docid": "532f3aee6b67f1e521ccda7f77116f7a",
"text": "Status of this Memo By submitting this Internet-Draft, each author represents that any applicable patent or other IPR claims of which he or she is aware have been or will be disclosed, and any of which he or she becomes aware will be disclosed, in accordance with Section 6 of BCP 79. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as \"work in progress.\" The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1idabstracts.txt. The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html. This Internet-Draft will expire on May 2008.",
"title": ""
}
] | scidocsrr |
4da8e5ddac2a648e63d7d5661a25ee65 | Ethical Artificial Intelligence - An Open Question | [
{
"docid": "f76808350f95de294c2164feb634465a",
"text": "By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it. Of course this problem is not limited to the field of AI. Jacques Monod wrote: \"A curious aspect of the theory of evolution is that everybody thinks he understands it.\" (Monod 1974.) My father, a physicist, complained about people making up their own theories of physics; he wanted to know why people did not make up their own theories of chemistry. (Answer: They do.) Nonetheless the problem seems to be unusually acute in Artificial Intelligence. The field of AI has a reputation for making huge promises and then failing to deliver on them. Most observers conclude that AI is hard; as indeed it is. But the embarrassment does not stem from the difficulty. It is difficult to build a star from hydrogen, but the field of stellar astronomy does not have a terrible reputation for promising to build stars and then failing. The critical inference is not that AI is hard, but that, for some reason, it is very easy for people to think they know far more about Artificial Intelligence than they actually do.",
"title": ""
}
] | [
{
"docid": "9747be055df9acedfdfe817eb7e1e06e",
"text": "Text summarization solves the problem of extracting important information from huge amount of text data. There are various methods in the literature that aim to find out well-formed summaries. One of the most commonly used methods is the Latent Semantic Analysis (LSA). In this paper, different LSA based summarization algorithms are explained and two new LSA based summarization algorithms are proposed. The algorithms are evaluated on Turkish documents, and their performances are compared using their ROUGE-L scores. One of our algorithms produces the best scores.",
"title": ""
},
{
"docid": "c0e1be5859be1fc5871993193a709f2d",
"text": "This paper reviews the possible causes and effects for no-fault-found observations and intermittent failures in electronic products and summarizes them into cause and effect diagrams. Several types of intermittent hardware failures of electronic assemblies are investigated, and their characteristics and mechanisms are explored. One solder joint intermittent failure case study is presented. The paper then discusses when no-fault-found observations should be considered as failures. Guidelines for assessment of intermittent failures are then provided in the discussion and conclusions. Ó 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f4380a5acaba5b534d13e1a4f09afe4f",
"text": "Several approaches to automatic speech summarization are discussed below, using the ICSI Meetings corpus. We contrast feature-based approaches using prosodic and lexical features with maximal marginal relevance and latent semantic analysis approaches to summarization. While the latter two techniques are borrowed directly from the field of text summarization, feature-based approaches using prosodic information are able to utilize characteristics unique to speech data. We also investigate how the summarization results might deteriorate when carried out on ASR output as opposed to manual transcripts. All of the summaries are of an extractive variety, and are compared using the software ROUGE.",
"title": ""
},
{
"docid": "01638567bf915e26bf9398132ca27264",
"text": "Uncontrolled bleeding from the cystic artery and its branches is a serious problem that may increase the risk of intraoperative lesions to vital vascular and biliary structures. On laparoscopic visualization anatomic relations are seen differently than during conventional surgery, so proper knowledge of the hepatobiliary triangle anatomic structures under the conditions of laparoscopic visualization is required. We present an original classification of the anatomic variations of the cystic artery into two main groups based on our experience with 200 laparoscopic cholecystectomies, with due consideration of the known anatomicotopographic relations. Group I designates a cystic artery situated within the hepatobiliary triangle on laparoscopic visualization. This group included three types: (1) normally lying cystic artery, found in 147 (73.5%) patients; (2) most common cystic artery variation, manifesting as its doubling, present in 31 (15.5%) patients; and (3) the cystic artery originating from the aberrant right hepatic artery, observed in 11 (5.5%) patients. Group II designates a cystic artery that could not be found within the hepatobiliary triangle on laparoscopic dissection. This group included two types of variation: (1) cystic artery originating from the gastroduodenal artery, found in nine (4.5%) patients; and (2) cystic artery originating from the left hepatic artery, recorded in two (1%) patients.",
"title": ""
},
{
"docid": "2663800ed92ce1cd44ab1b7760c43e0f",
"text": "Synchronous reluctance motor (SynRM) have rather poor power factor. This paper investigates possible methods to improve the power factor (pf) without impacting its torque density. The study found two possible aspects to improve the power factor with either refining rotor dimensions and followed by current control techniques. Although it is a non-linear mathematical field, it is analysed by analytical equations and FEM simulation is utilized to validate the design progression. Finally, an analytical method is proposed to enhance pf without compromising machine torque density. There are many models examined in this study to verify the design process. The best design with high performance is used for final current control optimization simulation.",
"title": ""
},
{
"docid": "c9750e95b3bd422f0f5e73cf6c465b35",
"text": "Lingual nerve damage complicating oral surgery would sometimes require electrographic exploration. Nevertheless, direct recording of conduction in lingual nerve requires its puncture at the foramen ovale. This method is too dangerous to be practiced routinely in these diagnostic indications. The aim of our study was to assess spatial relationships between lingual nerve and mandibular ramus in the infratemporal fossa using an original technique. Therefore, ten lingual nerves were dissected on five fresh cadavers. All the nerves were catheterized with a 3/0 wire. After meticulous repositioning of the nerve and medial pterygoid muscle reinsertion, CT-scan examinations were performed with planar acquisitions and three-dimensional reconstructions. Localization of lingual nerve in the infratemporal fossa was assessed successively at the level of the sigmoid notch of the mandible, lingula and third molar. At the level of the lingula, lingual nerve was far from the maxillary vessels; mean distance between the nerve and the anterior border of the ramus was 19.6 mm. The posteriorly opened angle between the medial side of the ramus and the line joining the lingual nerve and the anterior border of the ramus measured 17°. According to these findings, we suggest that the lingual nerve might be reached through the intra-oral puncture at the intermaxillary commissure; therefore, we modify the inferior alveolar nerve block technique to propose a safe and reproducible protocol likely to be performed routinely as electrographic exploration of the lingual nerve. What is more, this original study protocol provided interesting educational materials and could be developed for the conception of realistic 3D virtual anatomy supports.",
"title": ""
},
{
"docid": "3d56f88bf8053258a12e609129237b19",
"text": "Thepresentstudyfocusesontherelationships between entrepreneurial characteristics (achievement orientation, risk taking propensity, locus of control, and networking), e-service business factors (reliability, responsiveness, ease of use, and self-service), governmental support, and the success of e-commerce entrepreneurs. Results confirm that the achievement orientation and locus of control of founders and business emphasis on reliability and ease of use functions of e-service quality are positively related to the success of e-commerce entrepreneurial ventures in Thailand. Founder risk taking and networking, e-service responsiveness and self-service, and governmental support are found to be non-significant.",
"title": ""
},
{
"docid": "dbde47a4142bffc2bcbda988781e5229",
"text": "Grasping individual objects from an unordered pile in a box has been investigated in static scenarios so far. In this paper, we demonstrate bin picking with an anthropomorphic mobile robot. To this end, we extend global navigation techniques by precise local alignment with a transport box. Objects are detected in range images using a shape primitive-based approach. Our approach learns object models from single scans and employs active perception to cope with severe occlusions. Grasps and arm motions are planned in an efficient local multiresolution height map. All components are integrated and evaluated in a bin picking and part delivery task.",
"title": ""
},
{
"docid": "730d25d97f4ad67838a541f206cfcec2",
"text": "Semantic segmentation of 3D point clouds is a challenging problem with numerous real-world applications. While deep learning has revolutionized the field of image semantic segmentation, its impact on point cloud data has been limited so far. Recent attempts, based on 3D deep learning approaches (3DCNNs), have achieved below-expected results. Such methods require voxelizations of the underlying point cloud data, leading to decreased spatial resolution and increased memory consumption. Additionally, 3D-CNNs greatly suffer from the limited availability of annotated datasets. In this paper, we propose an alternative framework that avoids the limitations of 3D-CNNs. Instead of directly solving the problem in 3D, we first project the point cloud onto a set of synthetic 2D-images. These images are then used as input to a 2D-CNN, designed for semantic segmentation. Finally, the obtained prediction scores are re-projected to the point cloud to obtain the segmentation results. We further investigate the impact of multiple modalities, such as color, depth and surface normals, in a multi-stream network architecture. Experiments are performed on the recent Semantic3D dataset. Our approach sets a new stateof-the-art by achieving a relative gain of 7.9%, compared to the previous best approach.",
"title": ""
},
{
"docid": "a3b18ade3e983d91b7a8fc8d4cb6a75d",
"text": "The IC stripline method is one of those suggested in IEC-62132 to evaluate the susceptibility of ICs to radiated electromagnetic interference. In practice, it allows the multiple injection of the interference through the capacitive and inductive coupling of the IC package with the guiding structure (the stripline) in which the device under test is inserted. The pros and cons of this method are discussed and a variant of it is proposed with the aim to address the main problems that arise when evaluating the susceptibility of ICs encapsulated in small packages.",
"title": ""
},
{
"docid": "385fc1f02645d4d636869317cde6d35e",
"text": "Events and their coreference offer useful semantic and discourse resources. We show that the semantic and discourse aspects of events interact with each other. However, traditional approaches addressed event extraction and event coreference resolution either separately or sequentially, which limits their interactions. This paper proposes a document-level structured learning model that simultaneously identifies event triggers and resolves event coreference. We demonstrate that the joint model outperforms a pipelined model by 6.9 BLANC F1 and 1.8 CoNLL F1 points in event coreference resolution using a corpus in the biology domain.",
"title": ""
},
{
"docid": "c0dbb410ebd6c84bd97b5f5e767186b3",
"text": "A new hypothesis about the role of focused attention is proposed. The feature-integration theory of attention suggests that attention must be directed serially to each stimulus in a display whenever conjunctions of more than one separable feature are needed to characterize or distinguish the possible objects presented. A number of predictions were tested in a variety of paradigms including visual search, texture segregation, identification and localization, and using both separable dimensions (shape and color) and local elements or parts of figures (lines, curves, etc. in letters) as the features to be integrated into complex wholes. The results were in general consistent with the hypothesis. They offer a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not.",
"title": ""
},
{
"docid": "ac65c09468cd88765009abe49d9114cf",
"text": "It is known that head gesture and brain activity can reflect some human behaviors related to a risk of accident when using machine-tools. The research presented in this paper aims at reducing the risk of injury and thus increase worker safety. Instead of using camera, this paper presents a Smart Safety Helmet (SSH) in order to track the head gestures and the brain activity of the worker to recognize anomalous behavior. Information extracted from SSH is used for computing risk of an accident (a safety level) for preventing and reducing injuries or accidents. The SSH system is an inexpensive, non-intrusive, non-invasive, and non-vision-based system, which consists of an Inertial Measurement Unit (IMU) and dry EEG electrodes. A haptic device, such as vibrotactile motor, is integrated to the helmet in order to alert the operator when computed risk level (fatigue, high stress or error) reaches a threshold. Once the risk level of accident breaks the threshold, a signal will be sent wirelessly to stop the relevant machine tool or process.",
"title": ""
},
{
"docid": "500eca6c6fb88958662fd0210927d782",
"text": "Purpose – Force output is extremely important for electromagnetic linear machines. The purpose of this study is to explore new permanent magnet (PM) array and winding patterns to increase the magnetic flux density and thus to improve the force output of electromagnetic tubular linear machines. Design/methodology/approach – Based on investigations on various PM patterns, a novel dual Halbach PM array is proposed in this paper to increase the radial component of flux density in three-dimensional machine space, which in turn can increase the force output of tubular linear machine significantly. The force outputs and force ripples for different winding patterns are formulated and analyzed, to select optimized structure parameters. Findings – The proposed dual Halbach array can increase the radial component of flux density and force output of tubular linear machines effectively. It also helps to decrease the axial component of flux density and thus to reduce the deformation and vibration of machines. By using analytical force models, the influence of winding patterns and structure parameters on the machine force output and force ripples can be analyzed. As a result, one set of optimized structure parameters are selected for the design of electromagnetic tubular linear machines. Originality/value – The proposed dual Halbach array and winding patterns are effective ways to improve the linear machine performance. It can also be implemented into rotary machines. The analyzing and design methods could be extended into the development of other electromagnetic machines.",
"title": ""
},
{
"docid": "91e9f3b1ebd57ff472ab8848370c366f",
"text": "Time series prediction problems are becoming increasingly high-dimensional in modern applications, such as climatology and demand forecasting. For example, in the latter problem, the number of items for which demand needs to be forecast might be as large as 50,000. In addition, the data is generally noisy and full of missing values. Thus, modern applications require methods that are highly scalable, and can deal with noisy data in terms of corruptions or missing values. However, classical time series methods usually fall short of handling these issues. In this paper, we present a temporal regularized matrix factorization (TRMF) framework which supports data-driven temporal learning and forecasting. We develop novel regularization schemes and use scalable matrix factorization methods that are eminently suited for high-dimensional time series data that has many missing values. Our proposed TRMF is highly general, and subsumes many existing approaches for time series analysis. We make interesting connections to graph regularization methods in the context of learning the dependencies in an autoregressive framework. Experimental results show the superiority of TRMF in terms of scalability and prediction quality. In particular, TRMF is two orders of magnitude faster than other methods on a problem of dimension 50,000, and generates better forecasts on real-world datasets such as Wal-mart E-commerce datasets.",
"title": ""
},
{
"docid": "19f9e643decc8047d73a20d664eb458d",
"text": "There is considerable federal interest in disaster resilience as a mechanism for mitigating the impacts to local communities, yet the identification of metrics and standards for measuring resilience remain a challenge. This paper provides a methodology and a set of indicators for measuring baseline characteristics of communities that foster resilience. By establishing baseline conditions, it becomes possible to monitor changes in resilience over time in particular places and to compare one place to another. We apply our methodology to counties within the Southeastern United States as a proof of concept. The results show that spatial variations in disaster resilience exist and are especially evident in the rural/urban divide, where metropolitan areas have higher levels of resilience than rural counties. However, the individual drivers of the disaster resilience (or lack thereof)—social, economic, institutional, infrastructure, and community capacities—vary",
"title": ""
},
{
"docid": "6751bfa8495065db8f6f5b396bbbc2cd",
"text": "This paper proposes a new balanced realization and model reduction method for possibly unstable systems by introducing some new controllability and observability Gramians. These Gramians can be related to minimum control energy and minimum estimation error. In contrast to Gramians defined in the literature for unstable systems, these Gramians can always be computed for systems without imaginary axis poles and they reduce to the standard controllability and observability Gramians when the systems are stable. The proposed balanced model reduction method enjoys the similar error bounds as does for the standard balanced model reduction. Furthermore, the new error bounds and the actual approximation errors seem to be much smaller than the ones using the methods given in the literature for unstable systems. Copyright ( 1999 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "5b9a08e4edd7e44ed261d304bc8f78c3",
"text": "Cone beam computed tomography (CBCT) has been specifically designed to produce undistorted three-dimensional information of the maxillofacial skeleton, including the teeth and their surrounding tissues with a significantly lower effective radiation dose compared with conventional computed tomography (CT). Periapical disease may be detected sooner using CBCT compared with periapical views and the true size, extent, nature and position of periapical and resorptive lesions can be assessed. Root fractures, root canal anatomy and the nature of the alveolar bone topography around teeth may be assessed. The aim of this paper is to review current literature on the applications and limitations of CBCT in the management of endodontic problems.",
"title": ""
},
{
"docid": "0332be71a529382e82094239db31ea25",
"text": "Nguyen and Shparlinski recently presented a polynomial-time algorithm that provably recovers the signer’s secret DSA key when a few bits of the random nonces k (used at each signature generation) are known for a number of DSA signatures at most linear in log q (q denoting as usual the small prime of DSA), under a reasonable assumption on the hash function used in DSA. The number of required bits is about log q, and can be further decreased to 2 if one assumes access to ideal lattice basis reduction, namely an oracle for the lattice closest vector problem for the infinity norm. All previously known results were only heuristic, including those of Howgrave-Graham and Smart who introduced the topic. Here, we obtain similar results for the elliptic curve variant of DSA (ECDSA).",
"title": ""
},
{
"docid": "d6da3d9b1357c16bb2d9ea46e56fa60f",
"text": "The Supervisory Control and Data Acquisition System (SCADA) monitor and control real-time systems. SCADA systems are the backbone of the critical infrastructure, and any compromise in their security can have grave consequences. Therefore, there is a need to have a SCADA testbed for checking vulnerabilities and validating security solutions. In this paper we develop such a SCADA testbed.",
"title": ""
}
] | scidocsrr |
2100642ab81be76885180790c4aaaa95 | Interactive Dimensionality Reduction Through User-defined Combinations of Quality Metrics | [
{
"docid": "ed7a114d02244b7278c8872c567f1ba6",
"text": "We present a new visualization, called the Table Lens, for visualizing and making sense of large tables. The visualization uses a focus+context (fisheye) technique that works effectively on tabular information because it allows display of crucial label information and multiple distal focal areas. In addition, a graphical mapping scheme for depicting table contents has been developed for the most widespread kind of tables, the cases-by-variables table. The Table Lens fuses symbolic and graphical representations into a single coherent view that can be fluidly adjusted by the user. This fusion and interactivity enables an extremely rich and natural style of direct manipulation exploratory data analysis.",
"title": ""
}
] | [
{
"docid": "8f2b9981d15b8839547f56f5f1152882",
"text": "In this paper we study how to discover the evolution of topics over time in a time-stamped document collection. Our approach is uniquely designed to capture the rich topology of topic evolution inherent in the corpus. Instead of characterizing the evolving topics at fixed time points, we conceptually define a topic as a quantized unit of evolutionary change in content and discover topics with the time of their appearance in the corpus. Discovered topics are then connected to form a topic evolution graph using a measure derived from the underlying document network. Our approach allows inhomogeneous distribution of topics over time and does not impose any topological restriction in topic evolution graphs. We evaluate our algorithm on the ACM corpus.\n The topic evolution graphs obtained from the ACM corpus provide an effective and concrete summary of the corpus with remarkably rich topology that are congruent to our background knowledge. In a finer resolution, the graphs reveal concrete information about the corpus that were previously unknown to us, suggesting the utility of our approach as a navigational tool for the corpus.",
"title": ""
},
{
"docid": "673ce42f089d555d8457f35bf7dcb733",
"text": "Visual relationship detection aims to capture interactions between pairs of objects in images. Relationships between objects and humans represent a particularly important subset of this problem, with implications for challenges such as understanding human behaviour, and identifying affordances, amongst others. In addressing this problem we first construct a large-scale human-centric visual relationship detection dataset (HCVRD), which provides many more types of relationship annotation (nearly 10K categories) than the previous released datasets. This large label space better reflects the reality of human-object interactions, but gives rise to a long-tail distribution problem, which in turn demands a zero-shot approach to labels appearing only in the test set. This is the first time this issue has been addressed. We propose a webly-supervised approach to these problems and demonstrate that the proposed model provides a strong baseline on our HCVRD dataset.",
"title": ""
},
{
"docid": "f7a6cc4ebc1d2657175301dc05c86a7b",
"text": "Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results.",
"title": ""
},
{
"docid": "8bcb5b946b9f5e07807ec9a44884cf4e",
"text": "Using data from two waves of a panel study of families who currently or recently received cash welfare benefits, we test hypotheses about the relationship between food hardships and behavior problems among two different age groups (458 children ages 3–5-and 747 children ages 6–12). Results show that food hardships are positively associated with externalizing behavior problems for older children, even after controlling for potential mediators such as parental stress, warmth, and depression. Food hardships are positively associated with internalizing behavior problems for older children, and with both externalizing and internalizing behavior problems for younger children, but these effects are mediated by parental characteristics. The implications of these findings for child and family interventions and food assistance programs are discussed. Food Hardships and Child Behavior Problems among Low-Income Children INTRODUCTION In the wake of the 1996 federal welfare reforms, several large-scale, longitudinal studies of welfare recipients and low-income families were launched with the intent of assessing direct benchmarks, such as work and welfare activity, over time, as well as indirect and unintended outcomes related to material hardship and mental health. One area of special concern to many researchers and policymakers alike is child well-being in the context of welfare reforms. As family welfare use and parental work activities change under new welfare policies, family income and material resources may also fluctuate. To the extent that family resources are compromised by changes in welfare assistance and earnings, children may experience direct hardships, such as instability in food consumption, which in turn may affect other areas of functioning. It is also possible that changes in parental work and family welfare receipt influence children indirectly through their caregivers. As parents themselves experience hardships or new stresses, their mental health and interactions with their children may change, which in turn could affect their children’s functioning. This research assesses whether one particular form of hardship, food hardship, is associated with adverse behaviors among low-income children. Specifically, analyses assess whether food hardships have relationships with externalizing (e.g., aggressive or hyperactive) and internalizing (e.g., anxietyand depression-related) child behavior problems, and whether associations between food hardships and behavior problems are mediated by parental stress, warmth, and depression. The study involves a panel survey of individuals in one state who were receiving Temporary Assistance for Needy Families (TANF) in 1998 and were caring for minor-aged children. Externalizing and internalizing behavior problems associated with a randomly selected child from each household are assessed in relation to key predictors, taking advantage of the prospective study design. 2 BACKGROUND Food hardships have been conceptualized by researchers in various ways. For example, food insecurity is defined by the U.S. Department of Agriculture (USDA) as the “limited or uncertain availability of nutritionally adequate and safe foods or limited or uncertain ability to acquire acceptable foods in socially acceptable ways” (Bickel, Nord, Price, Hamilton, and Cook, 2000, p. 6). 
An 18-item scale was developed by the USDA to assess household food insecurity with and without hunger, where hunger represents a potential result of more severe forms of food insecurity, but not a necessary condition for food insecurity to exist (Price, Hamilton, and Cook, 1997). Other researchers have used selected items from the USDA Food Security Module to assess food hardships (Nelson, 2004; Bickel et al., 2000) The USDA also developed the following single-item question to identify food insufficiency: “Which of the following describes the amount of food your household has to eat....enough to eat, sometimes not enough to eat, or often not enough to eat?” This measure addresses the amount of food available to a household, not assessments about the quality of the food consumed or worries about food (Alaimo, Olson and Frongillo, 1999; Dunifon and Kowaleski-Jones, 2003). The Community Childhood Hunger Identification Project (CCHIP) assesses food hardships using an 8-item measure to determine whether the household as a whole, adults as individuals, or children are affected by food shortages, perceived food insufficiency, or altered food intake due to resource constraints (Wehler, Scott, and Anderson, 1992). Depending on the number of affirmative answers, respondents are categorized as either “hungry,” “at-risk for hunger,” or “not hungry” (Wehler et al., 1992; Kleinman et al., 1998). Other measures, such as the Radimer/Cornell measures of hunger and food insecurity, have also been created to measure food hardships (Kendall, Olson, and Frongillo, 1996). In recent years, food hardships in the United States have been on the rise. After declining from 1995 to 1999, the prevalence of household food insecurity in households with children rose from 14.8 percent in 1999 to 16.5 percent in 2002, and the prevalence of household food insecurity with hunger in households with children rose from 0.6 percent in 1999 to 0.7 percent in 2002 (Nord, Andrews, and 3 Carlson, 2003). A similar trend was also observed using a subset of questions from the USDA Food Security Module (Nelson, 2004). Although children are more likely than adults to be buffered from household food insecurity (Hamilton et al., 1997) and inadequate nutrition (McIntyre et al., 2003), a concerning number of children are reported to skip meals or have reduced food intake due to insufficient household resources. Nationally, children in 219,000 U.S. households were hungry at times during the 12 months preceding May 1999 (Nord and Bickel, 2002). Food Hardships and Child Behavior Problems Very little research has been conducted on the effects of food hardship on children’s behaviors, although the existing research suggests that it is associated with adverse behavioral and mental health outcomes for children. Using data from the National Health and Nutrition Examination Survey (NHANES), Alaimo and colleagues (2001a) found that family food insufficiency is positively associated with visits to a psychologist among 6to 11year-olds. Using the USDA Food Security Module, Reid (2002) found that greater severity and longer periods of children’s food insecurity were associated with greater levels of child behavior problems. Dunifon and Kowaleski-Jones (2003) found, using the same measure, that food insecurity is associated with fewer positive behaviors among school-age children. 
Children from households with incomes at or below 185 percent of the poverty level who are identified as hungry are also more likely to have a past or current history of mental health counseling and to have more psychosocial dysfunctions than children who are not identified as hungry (Kleinman et al., 1998; Murphy et al., 1998). Additionally, severe child hunger in both pre-school-age and school-age children is associated with internalizing behavior problems (Weinreb et al., 2002), although Reid (2002) found a stronger association between food insecurity and externalizing behaviors than between food insecurity and internalizing behaviors among children 12 and younger. Other research on hunger has identified several adverse behavioral consequences for children (See Wachs, 1995 for a review; Martorell, 1996; Pollitt, 1994), including poor play behaviors, poor preschool achievement, and poor scores on 4 developmental indices (e.g., Bayley Scores). These studies have largely taken place in developing countries, where the prevalence of hunger and malnutrition is much greater than in the U.S. population (Reid, 2002), so it is not known whether similar associations would emerge for children in the United States. Furthermore, while existing studies point to a relationship between food hardships and adverse child behavioral outcomes, limitations in design stemming from cross-sectional data, reliance on singleitem measures of food difficulties, or failure to adequately control for factors that may confound the observed relationships make it difficult to assess the robustness of the findings. For current and recent recipients of welfare and their families, increased food hardships are a potential problem, given the fluctuations in benefits and resources that families are likely to experience as a result of legislative reforms. To the extent that food hardships are tied to economic factors, we may expect levels of food hardships to increase for families who experience periods of insufficient material resources, and to decrease for families whose economic situations improve. If levels of food hardship are associated with the availability of parents and other caregivers, we may find that the provision of food to children changes as parents work more hours, or as children spend more time in alternative caregiving arrangements. Poverty and Child Behavior Problems When exploring the relationship between food hardships and child well-being, it is crucial to ensure that factors associated with economic hardship and poverty are adequately controlled, particularly since poverty has been linked to some of the same outcomes as food hardships. Extensive research has shown a higher prevalence of behavior problems among children from families of lower socioeconomic status (McLoyd, 1998; Duncan, Brooks-Gunn, and Klebanov, 1994), and from families receiving welfare (Hofferth, Smith, McLoyd, and Finkelstein, 2000). This relationship has been shown to be stronger among children in single-parent households than among those in two-parent households (Hanson, McLanahan, and Thompson, 1996), and among younger children (Bradley and Corwyn, 2002; McLoyd, 5 1998), with less consistent findings for adolescents (Conger, Conger, and Elder, 1997; Elder, N",
"title": ""
},
{
"docid": "df02dafb455e2b68035cf8c150e28a0a",
"text": "Blueberry, raspberry and strawberry may have evolved strategies for survival due to the different soil conditions available in their natural environment. Since this might be reflected in their response to rhizosphere pH and N form supplied, investigations were carried out in order to compare effects of nitrate and ammonium nutrition (the latter at two different pH regimes) on growth, CO2 gas exchange, and on the activity of key enzymes of the nitrogen metabolism of these plant species. Highbush blueberry (Vaccinium corymbosum L. cv. 13–16–A), raspberry (Rubus idaeus L. cv. Zeva II) and strawberry (Fragaria × ananassa Duch. cv. Senga Sengana) were grown in 10 L black polyethylene pots in quartz sand with and without 1% CaCO3 (w: v), respectively. Nutrient solutions supplied contained nitrate (6 mM) or ammonium (6 mM) as the sole nitrogen source. Compared with strawberries fed with nitrate nitrogen, supply of ammonium nitrogen caused a decrease in net photosynthesis and dry matter production when plants were grown in quartz sand without added CaCO3. In contrast, net photosynthesis and dry matter production increased in blueberries fed with ammonium nitrogen, while dry matter production of raspberries was not affected by the N form supplied. In quartz sand with CaCO3, ammonium nutrition caused less deleterious effects on strawberries, and net photosynthesis in raspberries increased as compared to plants grown in quartz sand without CaCO3 addition. Activity of nitrate reductase (NR) was low in blueberries and could only be detected in the roots of plants supplied with nitrate nitrogen. In contrast, NR activity was high in leaves, but low in roots of raspberry and strawberry plants. Ammonium nutrition caused a decrease in NR level in leaves. Activity of glutamine synthetase (GS) was high in leaves but lower in roots of blueberry, raspberry and strawberry plants. The GS level was not significantly affected by the nitrogen source supplied. The effects of nitrate or ammonium nitrogen on net photosynthesis, growth, and activity of enzymes in blueberry, raspberry and strawberry cultivars appear to reflect their different adaptability to soil pH and N form due to the conditions of their natural environment.",
"title": ""
},
{
"docid": "cdda683f089f630176b88c1b91c1cff2",
"text": "Article history: Received 15 March 2011 Received in revised form 28 November 2011 Accepted 23 December 2011 Available online 29 December 2011",
"title": ""
},
{
"docid": "3f1ab17fb722d5a2612675673b200a82",
"text": "In this paper, we show that the recent integration of statistical models with deep recurrent neural networks provides a new way of formulating volatility (the degree of variation of time series) models that have been widely used in time series analysis and prediction in finance. The model comprises a pair of complementary stochastic recurrent neural networks: the generative network models the joint distribution of the stochastic volatility process; the inference network approximates the conditional distribution of the latent variables given the observables. Our focus here is on the formulation of temporal dynamics of volatility over time under a stochastic recurrent neural network framework. Experiments on real-world stock price datasets demonstrate that the proposed model generates a better volatility estimation and prediction that outperforms mainstream methods, e.g., deterministic models such as GARCH and its variants, and stochastic models namely the MCMC-based model stochvol as well as the Gaussian process volatility model GPVol, on average negative log-likelihood.",
"title": ""
},
{
"docid": "c3f81c5e4b162564b15be399b2d24750",
"text": "Although memory performance benefits from the spacing of information at encoding, judgments of learning (JOLs) are often not sensitive to the benefits of spacing. The present research examines how practice, feedback, and instruction influence JOLs for spaced and massed items. In Experiment 1, in which JOLs were made after the presentation of each item and participants were given multiple study-test cycles, JOLs were strongly influenced by the repetition of the items, but there was little difference in JOLs for massed versus spaced items. A similar effect was shown in Experiments 2 and 3, in which participants scored their own recall performance and were given feedback, although participants did learn to assign higher JOLs to spaced items with task experience. In Experiment 4, after participants were given direct instruction about the benefits of spacing, they showed a greater difference for JOLs of spaced vs massed items, but their JOLs still underestimated their recall for spaced items. Although spacing effects are very robust and have important implications for memory and education, people often underestimate the benefits of spaced repetition when learning, possibly due to the reliance on processing fluency during study and attending to repetition, and not taking into account the beneficial aspects of study schedule.",
"title": ""
},
{
"docid": "7490d342ffb59bd396421e198b243775",
"text": "Antioxidant activities of defatted sesame meal extract increased as the roasting temperature of sesame seed increased, but the maximum antioxidant activity was achieved when the seeds were roasted at 200 °C for 60 min. Roasting sesame seeds at 200 °C for 60 min significantly increased the total phenolic content, radical scavenging activity (RSA), reducing powers, and antioxidant activity of sesame meal extract; and several low-molecularweight phenolic compounds such as 2-methoxyphenol, 4-methoxy-3-methylthio-phenol, 5-amino-3-oxo-4hexenoic acid, 3,4-methylenedioxyphenol (sesamol), 3-hydroxy benzoic acid, 4-hydroxy benzoic acid, vanillic acid, filicinic acid, and 3,4-dimethoxy phenol were newly formed in the sesame meal after roasting sesame seeds at 200 °C for 60 min. These results indicate that antioxidant activity of defatted sesame meal extracts was significantly affected by roasting temperature and time of sesame seeds.",
"title": ""
},
{
"docid": "44d8cb42bd4c2184dc226cac3adfa901",
"text": "Several descriptions of redundancy are presented in the literature , often from widely dif ferent perspectives . Therefore , a discussion of these various definitions and the salient points would be appropriate . In particular , any definition and redundancy needs to cover the following issues ; the dif ference between multiple solutions and an infinite number of solutions ; degenerate solutions to inverse kinematics ; task redundancy ; and the distinction between non-redundant , redundant and highly redundant manipulators .",
"title": ""
},
{
"docid": "dcf7214c15c13f13d33c9a7b2c216588",
"text": "Many machine learning tasks such as multiple instance learning, 3D shape recognition and few-shot image classification are defined on sets of instances. Since solutions to such problems do not depend on the permutation of elements of the set, models used to address them should be permutation invariant. We present an attention-based neural network module, the Set Transformer, specifically designed to model interactions among elements in the input set. The model consists of an encoder and a decoder, both of which rely on attention mechanisms. In an effort to reduce computational complexity, we introduce an attention scheme inspired by inducing point methods from sparse Gaussian process literature. It reduces computation time of self-attention from quadratic to linear in the number of elements in the set. We show that our model is theoretically attractive and we evaluate it on a range of tasks, demonstrating increased performance compared to recent methods for set-structured data.",
"title": ""
},
{
"docid": "74af567f4b0257dc12c3346146c0f46c",
"text": "This paper presents the experimental data of human mechanical impedance properties (HMIPs) of the arms measured in steering operations according to the angle of a steering wheel (limbs posture) and the steering torque (muscle cocontraction). The HMIP data show that human stiffness/viscosity has the minimum/maximum value at the neutral angle of the steering wheel in relax (standard condition) and increases/decreases for the amplitude of the steering angle and the torque, and that the stability of the arms' motion in handling the steering wheel becomes high around the standard condition. Next, a novel methodology for designing an adaptive steering control system based on the HMIPs of the arms is proposed, and the effectiveness was then demonstrated via a set of double-lane-change tests, with several subjects using the originally developed stationary driving simulator and the 4-DOF driving simulator with a movable cockpit.",
"title": ""
},
{
"docid": "f5648e3bd38e876b53ee748021e165f2",
"text": "The existing image captioning approaches typically train a one-stage sentence decoder, which is difficult to generate rich fine-grained descriptions. On the other hand, multi-stage image caption model is hard to train due to the vanishing gradient problem. In this paper, we propose a coarse-to-fine multi-stage prediction framework for image captioning, composed of multiple decoders each of which operates on the output of the previous stage, producing increasingly refined image descriptions. Our proposed learning approach addresses the difficulty of vanishing gradients during training by providing a learning objective function that enforces intermediate supervisions. Particularly, we optimize our model with a reinforcement learning approach which utilizes the output of each intermediate decoder’s test-time inference algorithm as well as the output of its preceding decoder to normalize the rewards, which simultaneously solves the well-known exposure bias problem and the loss-evaluation mismatch problem. We extensively evaluate the proposed approach on MSCOCO and show that our approach can achieve the state-of-the-art performance.",
"title": ""
},
{
"docid": "c3566171b68e4025931a72064e74e4ae",
"text": "Training a Fully Convolutional Network (FCN) for semantic segmentation requires a large number of pixel-level masks, which involves a large amount of human labour and time for annotation. In contrast, image-level labels are much easier to obtain. In this work, we propose a novel method for weakly supervised semantic segmentation with only image-level labels. The method relies on a large scale co-segmentation framework that can produce object masks for a group of images containing objects belonging to the same semantic class. We first retrieve images from search engines, e.g. Flickr and Google, using semantic class names as queries, e.g. class names in PASCAL VOC 2012. We then use high quality masks produced by co-segmentation on the retrieved images as well as the target dataset images with image level labels to train segmentation networks. We obtain IoU 56.9 on test set of PASCAL VOC 2012, which reaches state of the art performance.",
"title": ""
},
{
"docid": "363872994876ab6c68584d4f31913b43",
"text": "The Internet is quickly becoming the world’s largest public electronic marketplace. It is estimated to reach 50 million people worldwide, with growth estimates averaging approximately 10% per month. Innovative business professionals have discovered that the Internet can A BUYER’S-EYE VIEW OF ONLINE PURCHASING WORRIES. • H U A I Q I N G W A N G , M A T T H E W K . O . L E E , A N D C H E N W A N G •",
"title": ""
},
{
"docid": "0d9420b97012ce445fdf39fb009e32c4",
"text": "Greater numbers of young children with complicated, serious physical health, mental health, or developmental problems are entering foster care during the early years when brain growth is most active. Every effort should be made to make foster care a positive experience and a healing process for the child. Threats to a child’s development from abuse and neglect should be understood by all participants in the child welfare system. Pediatricians have an important role in assessing the child’s needs, providing comprehensive services, and advocating on the child’s behalf. The developmental issues important for young children in foster care are reviewed, including: 1) the implications and consequences of abuse, neglect, and placement in foster care on early brain development; 2) the importance and challenges of establishing a child’s attachment to caregivers; 3) the importance of considering a child’s changing sense of time in all aspects of the foster care experience; and 4) the child’s response to stress. Additional topics addressed relate to parental roles and kinship care, parent-child contact, permanency decision-making, and the components of comprehensive assessment and treatment of a child’s development and mental health needs. More than 500 000 children are in foster care in the United States.1,2 Most of these children have been the victims of repeated abuse and prolonged neglect and have not experienced a nurturing, stable environment during the early years of life. Such experiences are critical in the shortand long-term development of a child’s brain and the ability to subsequently participate fully in society.3–8 Children in foster care have disproportionately high rates of physical, developmental, and mental health problems1,9 and often have many unmet medical and mental health care needs.10 Pediatricians, as advocates for children and their families, have a special responsibility to evaluate and help address these needs. Legal responsibility for establishing where foster children live and which adults have custody rests jointly with the child welfare and judiciary systems. Decisions about assessment, care, and planning should be made with sufficient information about the particular strengths and challenges of each child. Pediatricians have an important role in helping to develop an accurate, comprehensive profile of the child. To create a useful assessment, it is imperative that complete health and developmental histories are available to the pediatrician at the time of these evaluations. Pediatricians and other professionals with expertise in child development should be proactive advisors to child protection workers and judges regarding the child’s needs and best interests, particularly regarding issues of placement, permanency planning, and medical, developmental, and mental health treatment plans. For example, maintaining contact between children and their birth families is generally in the best interest of the child, and such efforts require adequate support services to improve the integrity of distressed families. However, when keeping a family together may not be in the best interest of the child, alternative placement should be based on social, medical, psychological, and developmental assessments of each child and the capabilities of the caregivers to meet those needs. Health care systems, social services systems, and judicial systems are frequently overwhelmed by their responsibilities and caseloads. 
Pediatricians can serve as advocates to ensure each child’s conditions and needs are evaluated and treated properly and to improve the overall operation of these systems. Availability and full utilization of resources ensure comprehensive assessment, planning, and provision of health care. Adequate knowledge about each child’s development supports better placement, custody, and treatment decisions. Improved programs for all children enhance the therapeutic effects of government-sponsored protective services (eg, foster care, family maintenance). The following issues should be considered when social agencies intervene and when physicians participate in caring for children in protective services. EARLY BRAIN AND CHILD DEVELOPMENT More children are entering foster care in the early years of life when brain growth and development are most active.11–14 During the first 3 to 4 years of life, the anatomic brain structures that govern personality traits, learning processes, and coping with stress and emotions are established, strengthened, and made permanent.15,16 If unused, these structures atrophy.17 The nerve connections and neurotransmitter networks that are forming during these critical years are influenced by negative environmental conditions, including lack of stimulation, child abuse, or violence within the family.18 It is known that emotional and cognitive disruptions in the early lives of children have the potential to impair brain development.18 Paramount in the lives of these children is their need for continuity with their primary attachment figures and a sense of permanence that is enhanced The recommendations in this statement do not indicate an exclusive course of treatment or serve as a standard of medical care. Variations, taking into account individual circumstances, may be appropriate. PEDIATRICS (ISSN 0031 4005). Copyright © 2000 by the American Acad-",
"title": ""
},
{
"docid": "5d98548bc4f65d66a8ece7e70cb61bc4",
"text": "0140-3664/$ see front matter 2011 Elsevier B.V. A doi:10.1016/j.comcom.2011.09.003 ⇑ Corresponding author. Tel.: +86 10 62283240. E-mail address: liwenmin02@hotmail.com (W. Li). Value-added applications in vehicular ad hoc network (VANET) come with the emergence of electronic trading. The restricted connectivity scenario in VANET, where the vehicle cannot communicate directly with the bank for authentication due to the lack of internet access, opens up new security challenges. Hence a secure payment protocol, which meets the additional requirements associated with VANET, is a must. In this paper, we propose an efficient and secure payment protocol that aims at the restricted connectivity scenario in VANET. The protocol applies self-certified key agreement to establish symmetric keys, which can be integrated with the payment phase. Thus both the computational cost and communication cost can be reduced. Moreover, the protocol can achieve fair exchange, user anonymity and payment security. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "64de7935c22f74069721ff6e66a8fe8c",
"text": "In the setting of secure multiparty computation, a set of n parties with private inputs wish to jointly compute some functionality of their inputs. One of the most fundamental results of secure computation was presented by Ben-Or, Goldwasser, and Wigderson (BGW) in 1988. They demonstrated that any n-party functionality can be computed with perfect security, in the private channels model. When the adversary is semi-honest, this holds as long as $$t<n/2$$ t < n / 2 parties are corrupted, and when the adversary is malicious, this holds as long as $$t<n/3$$ t < n / 3 parties are corrupted. Unfortunately, a full proof of these results was never published. In this paper, we remedy this situation and provide a full proof of security of the BGW protocol. This includes a full description of the protocol for the malicious setting, including the construction of a new subprotocol for the perfect multiplication protocol that seems necessary for the case of $$n/4\\le t<n/3$$ n / 4 ≤ t < n / 3 .",
"title": ""
},
{
"docid": "9b19f343a879430283881a69e3f9cb78",
"text": "Effective analysis of applications (shortly apps) is essential to understanding apps' behavior. Two analysis approaches, i.e., static and dynamic, are widely used; although, both have well known limitations. Static analysis suffers from obfuscation and dynamic code updates. Whereas, it is extremely hard for dynamic analysis to guarantee the execution of all the code paths in an app and thereby, suffers from the code coverage problem. However, from a security point of view, executing all paths in an app might be less interesting than executing certain potentially malicious paths in the app. In this work, we use a hybrid approach that combines static and dynamic analysis in an iterative manner to cover their shortcomings. We use targeted execution of interesting code paths to solve the issues of obfuscation and dynamic code updates. Our targeted execution leverages a slicing-based analysis for the generation of data-dependent slices for arbitrary methods of interest (MOI) and on execution of the extracted slices for capturing their dynamic behavior. Motivated by the fact that malicious apps use Inter Component Communications (ICC) to exchange data [19], our main contribution is the automatic targeted triggering of MOI that use ICC for passing data between components. We implement a proof of concept, TelCC, and report the results of our evaluation.",
"title": ""
},
{
"docid": "04d8cd068da3aa0a7ede285de372a139",
"text": "Testing is a major cost factor in software development. Test automation has been proposed as one solution to reduce these costs. Test automation tools promise to increase the number of tests they run and the frequency at which they run them. So why not automate every test? In this paper we discuss the question \"When should a test be automated?\" and the trade-off between automated and manual testing. We reveal problems in the overly simplistic cost models commonly used to make decisions about automating testing. We introduce an alternative model based on opportunity cost and present influencing factors on the decision of whether or not to invest in test automation. Our aim is to stimulate discussion about these factors as well as their influence on the benefits and costs of automated testing in order to support researchers and practitioners reflecting on proposed automation approaches.",
"title": ""
}
] | scidocsrr |
11e33996f932f4f0c48c24112e1866f5 | Extraction of Web News from Web Pages Using a Ternary Tree Approach | [
{
"docid": "40e9a5fcc3eaf85840a45dff8a09aec1",
"text": "Web data extractors are used to extract data from web documents in order to feed automated processes. In this article, we propose a technique that works on two or more web documents generated by the same server-side template and learns a regular expression that models it and can later be used to extract data from similar documents. The technique builds on the hypothesis that the template introduces some shared patterns that do not provide any relevant data and can thus be ignored. We have evaluated and compared our technique to others in the literature on a large collection of web documents; our results demonstrate that our proposal performs better than the others and that input errors do not have a negative impact on its effectiveness; furthermore, its efficiency can be easily boosted by means of a couple of parameters, without sacrificing its effectiveness.",
"title": ""
},
{
"docid": "060a024416dd983e226d5318789337a7",
"text": "Extracting information from web documents has become a research area in which new proposals sprout out year after year. This has motivated several researchers to work on surveys that attempt to provide an overall picture of the many existing proposals. Unfortunately, none of these surveys provide a complete picture, because they do not take region extractors into account. These tools are kind of preprocessors, because they help information extractors focus on the regions of a web document that contain relevant information. With the increasing complexity of web documents, region extractors are becoming a must to extract information from many websites. Beyond information extraction, region extractors have also found their way into information retrieval, focused web crawling, topic distillation, adaptive content delivery, mashups, and metasearch engines. In this paper, we survey the existing proposals regarding region extractors and compare them side by side.",
"title": ""
},
{
"docid": "351969655fca37f1d3256481ab037e87",
"text": "Many Web news sites have similar structures and layout styles. Our extensive case studies have indicated that there exists potential relevance between Web content layouts and path patterns. Compared with the delimiting features of Web content, path patterns have many advantages, such as a high positioning accuracy, ease of use and a strong pervasive performance. Consequently, a Web information extraction model with path patterns constructed from a path pattern mining algorithm is proposed in this paper. Our experimental data set is obtained by randomly selecting news Web pages from the CNN website. With a reasonable tolerance threshold, the experimental results show that the average precision is above 99% and the average recall is 100% when we integrate Web information extraction with our path pattern mining algorithm. The performance of path patterns from the pattern mining algorithm is much better than that of priori extraction rules configured by domain knowledge.",
"title": ""
}
] | [
{
"docid": "35981768a2a46c2dd9d52ebbd5b63750",
"text": "A vehicle detection and classification system has been developed based on a low-cost triaxial anisotropic magnetoresistive sensor. Considering the characteristics of vehicle magnetic detection signals, especially the signals for low-speed congested traffic in large cities, a novel fixed threshold state machine algorithm based on signal variance is proposed to detect vehicles within a single lane and segment the vehicle signals effectively according to the time information of vehicles entering and leaving the sensor monitoring area. In our experiments, five signal features are extracted, including the signal duration, signal energy, average energy of the signal, ratio of positive and negative energy of x-axis signal, and ratio of positive and negative energy of y-axis signal. Furthermore, the detected vehicles are classified into motorcycles, two-box cars, saloon cars, buses, and Sport Utility Vehicle commercial vehicles based on a classification tree model. The experimental results have shown that the detection accuracy of the proposed algorithm can reach up to 99.05% and the average classification accuracy is 93.66%, which verify the effectiveness of our algorithm for low-speed congested traffic.",
"title": ""
},
{
"docid": "aa2b1a8d0cf511d5862f56b47d19bc6a",
"text": "DBMSs have long suffered from SQL’s lack of power and extensibility. We have implemented ATLaS [1], a powerful database language and system that enables users to develop complete data-intensive applications in SQL—by writing new aggregates and table functions in SQL, rather than in procedural languages as in current Object-Relational systems. As a result, ATLaS’ SQL is Turing-complete [7], and is very suitable for advanced data-intensive applications, such as data mining and stream queries. The ATLaS system is now available for download along with a suite of applications [1] including various data mining functions, that have been coded in ATLaS’ SQL, and execute with a modest (20–40%) performance overhead with respect to the same applications written in C/C++. Our proposed demo will illustrate the key features and applications of ATLaS. In particular, we will demonstrate:",
"title": ""
},
{
"docid": "ea1a56c7bcf4871d1c6f2f9806405827",
"text": "—Prior to the successful use of non-contact photoplethysmography, several engineering issues regarding this monitoring technique must be considered. These issues include ambient light and motion artefacts, the wide dynamic signal range and the effect of direct light source coupling. The latter issue was investigated and preliminary results show that direct coupling can cause attenuation of the detected PPG signal. It is shown that a physical offset can be introduced between the light source and the detector in order to reduce this effect.",
"title": ""
},
{
"docid": "7c287295e022480314d8a2627cd12cef",
"text": "The causal role of human papillomavirus infections in cervical cancer has been documented beyond reasonable doubt. The association is present in virtually all cervical cancer cases worldwide. It is the right time for medical societies and public health regulators to consider this evidence and to define its preventive and clinical implications. A comprehensive review of key studies and results is presented.",
"title": ""
},
{
"docid": "dd975fded3a24052a31bb20587ff8566",
"text": "This paper presents a design methodology for a high power density converter, which emphasizes weight minimization. The design methodology considers various inverter topologies and semiconductor devices with application of cold plate cooling and LCL filter. Design for a high-power inverter is evaluated with demonstration of a 50 kVA 2-level 3-phase SiC inverter operating at 60 kHz switching frequency. The prototype achieves high gravimetric power density of 6.49 kW/kg.",
"title": ""
},
{
"docid": "1f9bf4526e7e58494242ddce17f6c756",
"text": "Consider the following generalization of the classical job-shop scheduling problem in which a set of machines is associated with each operation of a job. The operation can be processed on any of the machines in this set. For each assignment μ of operations to machines letP(μ) be the corresponding job-shop problem andf(μ) be the minimum makespan ofP(μ). How to find an assignment which minimizesf(μ)? For problems with two jobs a polynomial algorithm is derived. Folgende Verallgemeinerung des klassischen Job-Shop Scheduling Problems wird untersucht. Jeder Operation eines Jobs sei eine Menge von Maschinen zugeordnet. Wählt man für jede Operation genau eine Maschine aus dieser Menge aus, so erhält man ein klassisches Job-Shop Problem, dessen minimale Gesamtbearbeitungszeitf(μ) von dieser Zuordnung μ abhängt. Gesucht ist eine Zuordnung μ, dief(μ) minimiert. Für zwei Jobs wird ein polynomialer Algorithmus entwickelt, der dieses Problem löst.",
"title": ""
},
{
"docid": "2c0a4b5c819a8fcfd5a9ab92f59c311e",
"text": "Line starting capability of Synchronous Reluctance Motors (SynRM) is a crucial challenge in their design that if solved, could lead to a valuable category of motors. In this paper, the so-called crawling effect as a potential problem in Line-Start Synchronous Reluctance Motors (LS-SynRM) is analyzed. Two interfering scenarios on LS-SynRM start-up are introduced and one of them is treated in detail by constructing the asynchronous model of the motor. In the third section, a definition of this phenomenon is given utilizing a sample cage configuration. The LS-SynRM model and characteristics are compared with that of a reference induction motor (IM) in all sections of this work to convey a better perception of successful and unsuccessful synchronization consequences to the reader. Several important post effects of crawling on motor performance are discussed in the rest of the paper to evaluate how it would influence the motor operation. All simulations have been performed using Finite Element Analysis (FEA).",
"title": ""
},
{
"docid": "7487f889eae6a32fc1afab23e54de9b8",
"text": "Although many researchers have investigated the use of different powertrain topologies, component sizes, and control strategies in fuel-cell vehicles, a detailed parametric study of the vehicle types must be conducted before a fair comparison of fuel-cell vehicle types can be performed. This paper compares the near-optimal configurations for three topologies of vehicles: fuel-cell-battery, fuel-cell-ultracapacitor, and fuel-cell-battery-ultracapacitor. The objective function includes performance, fuel economy, and powertrain cost. The vehicle models, including detailed dc/dc converter models, are programmed in Matlab/Simulink for the customized parametric study. A controller variable for each vehicle type is varied in the optimization.",
"title": ""
},
{
"docid": "f3cb6de57ba293be0b0833a04086b2ce",
"text": "Due to increasing globalization, urban societies are becoming more multicultural. The availability of large-scale digital mobility traces e.g. from tweets or checkins provides an opportunity to explore multiculturalism that until recently could only be addressed using survey-based methods. In this paper we examine a basic facet of multiculturalism through the lens of language use across multiple cities in Switzerland. Using data obtained from Foursquare over 330 days, we present a descriptive analysis of linguistic differences and similarities across five urban agglomerations in a multicultural, western European country.",
"title": ""
},
{
"docid": "659eea2d34037b6c72728c9149247218",
"text": "Deep learning approaches to breast cancer detection in mammograms have recently shown promising results. However, such models are constrained by the limited size of publicly available mammography datasets, in large part due to privacy concerns and the high cost of generating expert annotations. Limited dataset size is further exacerbated by substantial class imbalance since “normal” images dramatically outnumber those with findings. Given the rapid progress of generative models in synthesizing realistic images, and the known effectiveness of simple data augmentation techniques (e.g. horizontal flipping), we ask if it is possible to synthetically augment mammogram datasets using generative adversarial networks (GANs). We train a class-conditional GAN to perform contextual in-filling, which we then use to synthesize lesions onto healthy screening mammograms. First, we show that GANs are capable of generating high-resolution synthetic mammogram patches. Next, we experimentally evaluate using the augmented dataset to improve breast cancer classification performance. We observe that a ResNet-50 classifier trained with GAN-augmented training data produces a higher AUROC compared to the same model trained only on traditionally augmented data, demonstrating the potential of our approach.",
"title": ""
},
{
"docid": "b7d61816af1dd409e8474cf97fa15b4f",
"text": "This paper presents the detailed circuit operation, mathematical analysis, and design example of the active clamp flyback converter. The auxiliary switch and clamp capacitor are used in the flyback converter to recycle the energy stored in the transformer leakage in order to minimize the spike voltage at the transformer primary side. Therefore the voltage stress of main switch can be reduced. The active clamped circuit can also help the main switch to turn on at ZVS using the switch output capacitor and transformer leakage inductance. First the circuit operation and mathematical analysis are provided. The design example of active clamp flyback converter is also presented. Finally the experimental results based on a 120 W prototype circuit are provided to verify the system performance",
"title": ""
},
{
"docid": "f1910095f08fc72f81c39cc01890c474",
"text": "In today’s competitive business environment, there is a strong need for businesses to collect, monitor, and analyze user-generated data on their own and on their competitors’ social media sites, such as Facebook, Twitter, and blogs. To achieve a competitive advantage, it is often necessary to listen to and understand what customers are saying about competitors’ products and services. Current social media analytics frameworks do not provide benchmarks that allow businesses to compare customer sentiment on social media to easily understand where businesses are doing well and where they need to improve. In this paper, we present a social media competitive analytics framework with sentiment benchmarks that can be used to glean industry-specific marketing intelligence. Based on the idea of the proposed framework, new social media competitive analytics with sentiment benchmarks can be developed to enhance marketing intelligence and to identify specific actionable areas in which businesses are leading and lagging to further improve their customers’ experience using customer opinions gleaned from social media. Guided by the proposed framework, an innovative business-driven social media competitive analytics tool named VOZIQ is developed. We use VOZIQ to analyze tweets associated with five large retail sector companies and to generate meaningful business insight reports. 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1d88a06a34beff2c3e926a6d24f70036",
"text": "Graph-based clustering methods perform clustering on a fixed input data graph. If this initial construction is of low quality then the resulting clustering may also be of low quality. Moreover, existing graph-based clustering methods require post-processing on the data graph to extract the clustering indicators. We address both of these drawbacks by allowing the data graph itself to be adjusted as part of the clustering procedure. In particular, our Constrained Laplacian Rank (CLR) method learns a graph with exactly k connected components (where k is the number of clusters). We develop two versions of this method, based upon the L1-norm and the L2-norm, which yield two new graph-based clustering objectives. We derive optimization algorithms to solve these objectives. Experimental results on synthetic datasets and real-world benchmark datasets exhibit the effectiveness of this new graph-based clustering method. Introduction State-of-the art clustering methods are often based on graphical representations of the relationships among data points. For example, spectral clustering (Ng, Jordan, and Weiss 2001), normalized cut (Shi and Malik 2000) and ratio cut (Hagen and Kahng 1992) all transform the data into a weighted, undirected graph based on pairwise similarities. Clustering is then accomplished by spectral or graphtheoretic optimization procedures. See (Ding and He 2005; Li and Ding 2006) for a discussion of the relations among these graph-based methods, and also the connections to nonnegative matrix factorization. All of these methods involve a two-stage process in which an data graph is formed from the data, and then various optimization procedures are invoked on this fixed input data graph. A disadvantage of this two-stage process is that the final clustering structures are not represented explicitly in the data graph (e.g., graph-cut methods often use K-means algorithm to post-process the ∗To whom all correspondence should be addressed. This work was partially supported by US NSF-IIS 1117965, NSFIIS 1302675, NSF-IIS 1344152, NSF-DBI 1356628, NIH R01 AG049371. Copyright c © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. results to get the clustering indicators); also, the clustering results are dependent on the quality of the input data graph (i.e., they are sensitive to the particular graph construction methods). It seems plausible that a strategy in which the optimization phase is allowed to change the data graph could have advantages relative to the two-phase strategy. In this paper we propose a novel graph-based clustering model that learns a graph with exactly k connected components (where k is the number of clusters). In our new model, instead of fixing the input data graph associated to the affinity matrix, we learn a new data similarity matrix that is a block diagonal matrix and has exactly k connected components—the k clusters. Thus, our new data similarity matrix is directly useful for the clustering task; the clustering results can be immediately obtained without requiring any post-processing to extract the clustering indicators. To achieve such ideal clustering structures, we impose a rank constraint on the Laplacian graph of the new data similarity matrix, thereby guaranteeing the existence of exactly k connected components. Considering both L2-norm and L1norm objectives, we propose two new clustering objectives and derive optimization algorithms to solve them. 
We also introduce a novel graph-construction method to initialize the graph associated with the affinity matrix. We conduct empirical studies on simulated datasets and seven real-world benchmark datasets to validate our proposed methods. The experimental results are promising— we find that our new graph-based clustering method consistently outperforms other related methods in most cases. Notation: Throughout the paper, all the matrices are written as uppercase. For a matrix M , the i-th row and the ij-th element of M are denoted by mi and mij , respectively. The trace of matrix M is denoted by Tr(M). The L2-norm of vector v is denoted by ‖v‖2, the Frobenius and the L1 norm of matrix M are denoted by ‖M‖F and ‖M‖1, respectively. New Clustering Formulations Graph-based clustering approaches typically optimize their objectives based on a given data graph associated with an affinity matrix A ∈ Rn×n (which can be symmetric or nonsymmetric), where n is the number of nodes (data points) in the graph. There are two drawbacks with these approaches: (1) the clustering performance is sensitive to the quality of the data graph construction; (2) the cluster structures are not explicit in the clustering results and a post-processing step is needed to uncover the clustering indicators. To address these two challenges, we aim to learn a new data graph S based on the given data graph A such that the new data graph is more suitable for the clustering task. In our strategy, we propose to learn a new data graph S that has exactly k connected components, where k is the number of clusters. In order to formulate a clustering objective based on this strategy, we start from the following theorem. If the affinity matrix A is nonnegative, then the Laplacian matrix LA = DA − (A + A)/2, where the degree matrix DA ∈ Rn×n is defined as a diagonal matrix whose i-th diagonal element is ∑ j(aij + aji)/2, has the following important property (Mohar 1991; Chung 1997): Theorem 1 The multiplicity k of the eigenvalue zero of the Laplacian matrix LA is equal to the number of connected components in the graph associated with A. Given a graph with affinity matrix A, Theorem 1 indicates that if rank(LA) = n − k, then the graph is an ideal graph based on which we already partition the data points into k clusters, without the need of performing K-means or other discretization procedures as is necessary with traditional graph-based clustering methods such as spectral clustering. Motivated by Theorem 1, given an initial affinity matrix A ∈ Rn×n, we learn a similarity matrix S ∈ Rn×n such that the corresponding Laplacian matrix LS = DS−(S+S)/2 is constrained to be rank(LS) = n − k. Under this constraint, the learned S is block diagonal with proper permutation, and thus we can directly partition the data points into k clusters based on S (Nie, Wang, and Huang 2014). To avoid the case that some rows of S are all zeros, we further constrain the S such that the sum of each row of S is one. Under these constraints, we learn that S that best approximates the initial affinity matrixA. Considering two different distances, the L2-norm and the L1-norm, between the given affinity matrix A and the learned similarity matrix S, we define the Constrained Laplacian Rank (CLR) for graph-based clustering as the solution to the following optimization problem: JCLR L2 = min ∑ j sij=1,sij≥0,rank(LS)=n−k ‖S −A‖2F (1) JCLR L1 = min ∑ j sij=1,sij≥0,rank(LS)=n−k ‖S −A‖1. 
(2) These problems seem very difficult to solve since LS = DS − (S +S)/2, and DS also depends on S, and the constraint rank(LS) = n−k is a complex nonlinear constraint. In the next section, we will propose novel and efficient algorithms to solve these problems. Optimization Algorithms Optimization Algorithm for Solving JCLR L2 in Eq. (1) Let σi(LS) denote the i-th smallest eigenvalue of LS . Note that σi(LS) ≥ 0 because LS is positive semidefinite. The problem (1) is equivalent to the following problem for a large enough value of λ: min ∑ j sij=1,sij≥0 ‖S −A‖2F + 2λ k ∑",
"title": ""
},
{
"docid": "292981db9a4f16e4ba7e02303cbee6c1",
"text": "The millimeter wave frequency spectrum offers unprecedented bandwidths for future broadband cellular networks. This paper presents the world's first empirical measurements for 28 GHz outdoor cellular propagation in New York City. Measurements were made in Manhattan for three different base station locations and 75 receiver locations over distances up to 500 meters. A 400 megachip-per-second channel sounder and directional horn antennas were used to measure propagation characteristics for future mm-wave cellular systems in urban environments. This paper presents measured path loss as a function of the transmitter - receiver separation distance, the angular distribution of received power using directional 24.5 dBi antennas, and power delay profiles observed in New York City. The measured data show that a large number of resolvable multipath components exist in both non line of sight and line of sight environments, with observed multipath excess delay spreads (20 dB) as great as 1388.4 ns and 753.5 ns, respectively. The widely diverse spatial channels observed at any particular location suggest that millimeter wave mobile communication systems with electrically steerable antennas could exploit resolvable multipath components to create viable links for cell sizes on the order of 200 m.",
"title": ""
},
{
"docid": "9544b2cc301e2e3f170f050de659dda4",
"text": "In SDN, the underlying infrastructure is usually abstracted for applications that can treat the network as a logical or virtual entity. Commonly, the ``mappings\" between virtual abstractions and their actual physical implementations are not one-to-one, e.g., a single \"big switch\" abstract object might be implemented using a distributed set of physical devices. A key question is, what abstractions could be mapped to multiple physical elements while faithfully preserving their native semantics? E.g., can an application developer always expect her abstract \"big switch\" to act exactly as a physical big switch, despite being implemented using multiple physical switches in reality?\n We show that the answer to that question is \"no\" for existing virtual-to-physical mapping techniques: behavior can differ between the virtual \"big switch\" and the physical network, providing incorrect application-level behavior. We also show that that those incorrect behaviors occur despite the fact that the most pervasive and commonly-used correctness invariants, such as per-packet consistency, are preserved throughout. These examples demonstrate that for practical notions of correctness, new systems and a new analytical framework are needed. We take the first steps by defining end-to-end correctness, a correctness condition that focuses on applications only, and outline a research vision to obtain virtualization systems with correct virtual to physical mappings.",
"title": ""
},
{
"docid": "667837818361e277cee0995308e69d6d",
"text": "We aim to obtain an interpretable, expressive, and disentangled scene representation that contains comprehensive structural and textural information for each object. Previous scene representations learned by neural networks are often uninterpretable, limited to a single object, or lacking 3D knowledge. In this work, we propose 3D scene de-rendering networks (3D-SDN) to address the above issues by integrating disentangled representations for semantics, geometry, and appearance into a deep generative model. Our scene encoder performs inverse graphics, translating a scene into a structured object-wise representation. Our decoder has two components: a differentiable shape renderer and a neural texture generator. The disentanglement of semantics, geometry, and appearance supports 3D-aware scene manipulation, e.g., rotating and moving objects freely while keeping the consistent shape and texture, and changing the object appearance without affecting its shape. Experiments demonstrate that our editing scheme based on 3D-SDN is superior to its 2D counterpart.",
"title": ""
},
{
"docid": "2e99cd85bb172d545648f18a76a0ff14",
"text": "In this work, the use of type-2 fuzzy logic systems as a novel approach for predicting permeability from well logs has been investigated and implemented. Type-2 fuzzy logic system is good in handling uncertainties, including uncertainties in measurements and data used to calibrate the parameters. In the formulation used, the value of a membership function corresponding to a particular permeability value is no longer a crisp value; rather, it is associated with a range of values that can be characterized by a function that reflects the level of uncertainty. In this way, the model will be able to adequately account for all forms of uncertainties associated with predicting permeability from well log data, where uncertainties are very high and the need for stable results are highly desirable. Comparative studies have been carried out to compare the performance of the proposed type-2 fuzzy logic system framework with those earlier used methods, using five different industrial reservoir data. Empirical results from simulation show that type-2 fuzzy logic approach outperformed others in general and particularly in the area of stability and ability to handle data in uncertain situations, which are common characteristics of well logs data. Another unique advantage of the newly proposed model is its ability to generate, in addition to the normal target forecast, prediction intervals as its by-products without extra computational cost. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "bab246f8b15931501049862066fde77f",
"text": "The upcoming Internet of Things will introduce large sensor networks including devices with very different propagation characteristics and power consumption demands. 5G aims to fulfill these requirements by demanding a battery lifetime of at least 10 years. To integrate smart devices that are located in challenging propagation conditions, IoT communication technologies furthermore have to support very deep coverage. NB-IoT and eMTC are designed to meet these requirements and thus paving the way to 5G. With the power saving options extended Discontinuous Reception and Power Saving Mode as well as the usage of large numbers of repetitions, NB-IoT and eMTC introduce new techniques to meet the 5G IoT requirements. In this paper, the performance of NB-IoT and eMTC is evaluated. Therefore, data rate, power consumption, latency and spectral efficiency are examined in different coverage conditions. Although both technologies use the same power saving techniques as well as repetitions to extend the communication range, the analysis reveals a different performance in the context of data size, rate and coupling loss. While eMTC comes with a 4% better battery lifetime than NB-IoT when considering 144 dB coupling loss, NB-IoT battery lifetime raises to 18% better performance in 164 dB coupling loss scenarios. The overall analysis shows that in coverage areas with a coupling loss of 155 dB or less, eMTC performs better, but requires much more bandwidth. Taking the spectral efficiency into account, NB-IoT is in all evaluated scenarios the better choice and more suitable for future networks with massive numbers of devices.",
"title": ""
},
{
"docid": "4859e7f8bfc31401e19e360386867ae2",
"text": "Health data is important as it provides an individual with knowledge of the factors needed to be improved for oneself. The development of fitness trackers and their associated software aid consumers to understand the manner in which they may improve their physical wellness. These devices are capable of collecting health data for a consumer such sleeping patterns, heart rate readings or the number of steps taken by an individual. Although, this information is very beneficial to guide a consumer to a better healthier state, it has been identified that they have privacy and security concerns. Privacy and Security are of great concern for fitness trackers and their associated applications as protecting health data is of critical importance. This is so, as health data is one of the highly sort after information by cyber criminals. Fitness trackers and their associated applications have been identified to contain privacy and security concerns that places the health data of consumers at risk to intruders. As the study of Consumer Health continues to grow it is vital to understand the elements that are needed to better protect the health information of a consumer. This research paper therefore provides a conceptual threat assessment framework that can be used to identify the elements needed to better secure Consumer Health Wearables. These elements consist of six core elements from the CIA triad and Microsoft STRIDE framework. Fourteen vulnerabilities were further discovered that were classified within these six core elements. Through this, better guidance can be achieved to improve the privacy and security of Consumer Health Wearables.",
"title": ""
},
{
"docid": "f70cea53fb4bb6d9cc98bd6dd7a96c88",
"text": "During maintenance, it is common to run the new version of a program against its existing test suite to check whether the modifications in the program introduced unforeseen side effects. Although this kind of regression testing can be effective in identifying some change-related faults, it is limited by the quality of the existing test suite. Because generating tests for real programs is expensive, developers build test suites by finding acceptable tradeoffs between cost and thoroughness of the tests. Such test suites necessarily target only a small subset of the program's functionality and may miss many regression faults. To address this issue, we introduce the concept of behavioral regression testing, whose goal is to identify behavioral differences between two versions of a program through dynamic analysis. Intuitively, given a set of changes in the code, behavioral regression testing works by (1) generating a large number of test cases that focus on the changed parts of the code, (2) running the generated test cases on the old and new versions of the code and identifying differences in the tests' outcome, and (3) analyzing the identified differences and presenting them to the developers. By focusing on a subset of the code and leveraging differential behavior, our approach can provide developers with more (and more focused) information than traditional regression testing techniques. This paper presents our approach and performs a preliminary assessment of its feasibility.",
"title": ""
}
] | scidocsrr |
42557afb223c11fb89eb19dc57f28634 | AVID: Adversarial Visual Irregularity Detection | [
{
"docid": "54d3d5707e50b979688f7f030770611d",
"text": "In this article, we describe an automatic differentiation module of PyTorch — a library designed to enable rapid research on machine learning models. It builds upon a few projects, most notably Lua Torch, Chainer, and HIPS Autograd [4], and provides a high performance environment with easy access to automatic differentiation of models executed on different devices (CPU and GPU). To make prototyping easier, PyTorch does not follow the symbolic approach used in many other deep learning frameworks, but focuses on differentiation of purely imperative programs, with a focus on extensibility and low overhead. Note that this preprint is a draft of certain sections from an upcoming paper covering all PyTorch features.",
"title": ""
},
{
"docid": "6470b7d1532012e938063d971f3ead29",
"text": "As society continues to accumulate more and more data, demand for machine learning algorithms that can learn from data with limited human intervention only increases. Semi-supervised learning (SSL) methods, which extend supervised learning algorithms by enabling them to use unlabeled data, play an important role in addressing this challenge. In this thesis, a framework unifying the traditional assumptions and approaches to SSL is defined. A synthesis of SSL literature then places a range of contemporary approaches into this common framework. Our focus is on methods which use generative adversarial networks (GANs) to perform SSL. We analyse in detail one particular GAN-based SSL approach. This is shown to be closely related to two preceding approaches. Through synthetic experiments we provide an intuitive understanding and motivate the formulation of our focus approach. We then theoretically analyse potential alternative formulations of its loss function. This analysis motivates a number of research questions that centre on possible improvements to, and experiments to better understand the focus model. While we find support for our hypotheses, our conclusion more broadly is that the focus method is not especially robust.",
"title": ""
},
{
"docid": "e9af5e2bfc36dd709ae6feefc4c38976",
"text": "Due to object detection's close relationship with video analysis and image understanding, it has attracted much research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable architectures. Their performance easily stagnates by constructing complex ensembles that combine multiple low-level image features with high-level context from object detectors and scene classifiers. With the rapid development in deep learning, more powerful tools, which are able to learn semantic, high-level, deeper features, are introduced to address the problems existing in traditional architectures. These models behave differently in network architecture, training strategy, and optimization function. In this paper, we provide a review of deep learning-based object detection frameworks. Our review begins with a brief introduction on the history of deep learning and its representative tool, namely, the convolutional neural network. Then, we focus on typical generic object detection architectures along with some modifications and useful tricks to improve detection performance further. As distinct specific detection tasks exhibit different characteristics, we also briefly survey several specific tasks, including salient object detection, face detection, and pedestrian detection. Experimental analyses are also provided to compare various methods and draw some meaningful conclusions. Finally, several promising directions and tasks are provided to serve as guidelines for future work in both object detection and relevant neural network-based learning systems.",
"title": ""
}
] | [
{
"docid": "de016ffaace938c937722f8a47cc0275",
"text": "Conventional traffic light detection methods often suffers from false positives in urban environment because of the complex backgrounds. To overcome such limitation, this paper proposes a method that combines a conventional approach, which is fast but weak to false positives, and a DNN, which is not suitable for detecting small objects but a very powerful classifier. Experiments on real data showed promising results.",
"title": ""
},
{
"docid": "39ee9e4c7dad30d875d70e0a41a37034",
"text": "The aim of the present study is to investigate the effect of daily injection of ginger Zingiber officinale extract on the physiological parameters, as well as the histological structure of the l iver of adult rats. Adult male rats were divided into four groups; (G1, G2, G3, and Control groups). The first group received 500 ml/kg b. wt/day of aqueous extract of Zingiber officinale i.p. for four weeks, G2 received 500 ml/kg b wt/day of aqueous extract of Zingiber officinale for three weeks and then received carbon tetrachloride CCl4 0.1ml/150 g b. wt. for one week, G3 received 500 ml/kg body weight/day of aqueous extract of ginger Zingiber officinale i .p. for three weeks and then received CCl4 for one week combined with ginger). The control group (C) received a 500 ml/kg B WT/day of saline water i.p. for four weeks. The results indicated a significant decrease in the total protein and increase in the albumin/globulin ratio in the third group compared with first and second group. Also, the results reported a significant decrease in the body weight in the third and the fourth groups compared with the first and the second groups. A significant decrease in the globulin levels in the third and the fourth groups were detected compared with the first and the second groups. The obtained results showed that treating rats with ginger improved the histopathological changes induced in the liver by CCl4. The study suggests that ginger extract can be used as antioxidant, free radical scavenging and protective action against carbon tetrachloride oxidative damage in the l iver.",
"title": ""
},
{
"docid": "10f2726026dbe1deac859715f57b15b6",
"text": "Monte-Carlo Tree Search, especially UCT and its POMDP version POMCP, have demonstrated excellent performance on many problems. However, to efficiently scale to large domains one should also exploit hierarchical structure if present. In such hierarchical domains, finding rewarded states typically requires to search deeply; covering enough such informative states very far from the root becomes computationally expensive in flat non-hierarchical search approaches. We propose novel, scalable MCTS methods which integrate a task hierarchy into the MCTS framework, specifically leading to hierarchical versions of both, UCT and POMCP. The new method does not need to estimate probabilistic models of each subtask, it instead computes subtask policies purely sample-based. We evaluate the hierarchical MCTS methods on various settings such as a hierarchical MDP, a Bayesian model-based hierarchical RL problem, and a large hierarchi-",
"title": ""
},
{
"docid": "c57fa27a4745e3a5440bd7209cf109a2",
"text": "OBJECTIVES\nWe sought to use natural language processing to develop a suite of language models to capture key symptoms of severe mental illness (SMI) from clinical text, to facilitate the secondary use of mental healthcare data in research.\n\n\nDESIGN\nDevelopment and validation of information extraction applications for ascertaining symptoms of SMI in routine mental health records using the Clinical Record Interactive Search (CRIS) data resource; description of their distribution in a corpus of discharge summaries.\n\n\nSETTING\nElectronic records from a large mental healthcare provider serving a geographic catchment of 1.2 million residents in four boroughs of south London, UK.\n\n\nPARTICIPANTS\nThe distribution of derived symptoms was described in 23 128 discharge summaries from 7962 patients who had received an SMI diagnosis, and 13 496 discharge summaries from 7575 patients who had received a non-SMI diagnosis.\n\n\nOUTCOME MEASURES\nFifty SMI symptoms were identified by a team of psychiatrists for extraction based on salience and linguistic consistency in records, broadly categorised under positive, negative, disorganisation, manic and catatonic subgroups. Text models for each symptom were generated using the TextHunter tool and the CRIS database.\n\n\nRESULTS\nWe extracted data for 46 symptoms with a median F1 score of 0.88. Four symptom models performed poorly and were excluded. From the corpus of discharge summaries, it was possible to extract symptomatology in 87% of patients with SMI and 60% of patients with non-SMI diagnosis.\n\n\nCONCLUSIONS\nThis work demonstrates the possibility of automatically extracting a broad range of SMI symptoms from English text discharge summaries for patients with an SMI diagnosis. Descriptive data also indicated that most symptoms cut across diagnoses, rather than being restricted to particular groups.",
"title": ""
},
{
"docid": "c1538df6d2aa097d5c4a8c4fc7e42d01",
"text": "During the First International EEG Congress, London in 1947, it was recommended that Dr. Herbert H. Jasper study methods to standardize the placement of electrodes used in EEG (Jasper 1958). A report with recommendations was to be presented to the Second International Congress in Paris in 1949. The electrode placement systems in use at various centers were found to be similar, with only minor differences, although their designations, letters and numbers were entirely different. Dr. Jasper established some guidelines which would be established in recommending a speci®c system to the federation and these are listed below.",
"title": ""
},
{
"docid": "adae03c768e3bc72f325075cf22ef7b1",
"text": "The vergence-accommodation conflict (VAC) remains a major problem in head-mounted displays for virtual and augmented reality (VR and AR). In this review, I discuss why this problem is pivotal for nearby tasks in VR and AR, present a comprehensive taxonomy of potential solutions, address advantages and shortfalls of each design, and cover various ways to better evaluate the solutions. The review describes how VAC is addressed in monocular, stereoscopic, and multiscopic HMDs, including retinal scanning and accommodation-free displays. Eye-tracking-based approaches that do not provide natural focal cues-gaze-guided blur and dynamic stereoscopy-are also covered. Promising future research directions in this area are identified.",
"title": ""
},
{
"docid": "e4493c56867bfe62b7a96b33fb171fad",
"text": "In the field of agricultural information, the automatic identification and diagnosis of maize leaf diseases is highly desired. To improve the identification accuracy of maize leaf diseases and reduce the number of network parameters, the improved GoogLeNet and Cifar10 models based on deep learning are proposed for leaf disease recognition in this paper. Two improved models that are used to train and test nine kinds of maize leaf images are obtained by adjusting the parameters, changing the pooling combinations, adding dropout operations and rectified linear unit functions, and reducing the number of classifiers. In addition, the number of parameters of the improved models is significantly smaller than that of the VGG and AlexNet structures. During the recognition of eight kinds of maize leaf diseases, the GoogLeNet model achieves a top - 1 average identification accuracy of 98.9%, and the Cifar10 model achieves an average accuracy of 98.8%. The improved methods are possibly improved the accuracy of maize leaf disease, and reduced the convergence iterations, which can effectively improve the model training and recognition efficiency.",
"title": ""
},
{
"docid": "71ee8396220ce8f3d9c4c6aca650fa42",
"text": "In order to increase our ability to use measurement to support software development practise we need to do more analysis of code. However, empirical studies of code are expensive and their results are difficult to compare. We describe the Qualitas Corpus, a large curated collection of open source Java systems. The corpus reduces the cost of performing large empirical studies of code and supports comparison of measurements of the same artifacts. We discuss its design, organisation, and issues associated with its development.",
"title": ""
},
{
"docid": "23c2ea4422ec6057beb8fa0be12e57b3",
"text": "This study applied logistic regression to model urban growth in the Atlanta Metropolitan Area of Georgia in a GIS environment and to discover the relationship between urban growth and the driving forces. Historical land use/cover data of Atlanta were extracted from the 1987 and 1997 Landsat TM images. Multi-resolution calibration of a series of logistic regression models was conducted from 50 m to 300 m at intervals of 25 m. A fractal analysis pointed to 225 m as the optimal resolution of modeling. The following two groups of factors were found to affect urban growth in different degrees as indicated by odd ratios: (1) population density, distances to nearest urban clusters, activity centers and roads, and high/low density urban uses (all with odds ratios < 1); and (2) distance to the CBD, number of urban cells within a 7 · 7 cell window, bare land, crop/grass land, forest, and UTM northing coordinate (all with odds ratios > 1). A map of urban growth probability was calculated and used to predict future urban patterns. Relative operating characteristic (ROC) value of 0.85 indicates that the probability map is valid. It was concluded that despite logistic regression’s lack of temporal dynamics, it was spatially explicit and suitable for multi-scale analysis, and most importantly, allowed much deeper understanding of the forces driving the growth and the formation of the urban spatial pattern. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3a9bba31f77f4026490d7a0faf4aeaa4",
"text": "We explore several different document representation models and two query expansion models for the task of recommending blogs to a user in response to a query. Blog relevance ranking differs from traditional document ranking in ad-hoc information retrieval in several ways: (1) the unit of output (the blog) is composed of a collection of documents (the blog posts) rather than a single document, (2) the query represents an ongoing – and typically multifaceted – interest in the topic rather than a passing ad-hoc information need and (3) due to the propensity of spam, splogs, and tangential comments, the blogosphere is particularly challenging to use as a source for high-quality query expansion terms. We address these differences at the document representation level, by comparing retrieval models that view either the blog or its constituent posts as the atomic units of retrieval, and at the query expansion level, by making novel use of the links and anchor text in Wikipedia to expand a user’s initial query. We develop two complementary models of blog retrieval that perform at comparable levels of precision and recall. We also show consistent and significant improvement across all models using our Wikipedia expansion strategy.",
"title": ""
},
{
"docid": "6974bf94292b51fc4efd699c28c90003",
"text": "We just released an Open Source receiver that is able to decode IEEE 802.11a/g/p Orthogonal Frequency Division Multiplexing (OFDM) frames in software. This is the first Software Defined Radio (SDR) based OFDM receiver supporting channel bandwidths up to 20MHz that is not relying on additional FPGA code. Our receiver comprises all layers from the physical up to decoding the MAC packet and extracting the payload of IEEE 802.11a/g/p frames. In our demonstration, visitors can interact live with the receiver while it is decoding frames that are sent over the air. The impact of moving the antennas and changing the settings are displayed live in time and frequency domain. Furthermore, the decoded frames are fed to Wireshark where the WiFi traffic can be further investigated. It is possible to access and visualize the data in every decoding step from the raw samples, the autocorrelation used for frame detection, the subcarriers before and after equalization, up to the decoded MAC packets. The receiver is completely Open Source and represents one step towards experimental research with SDR.",
"title": ""
},
{
"docid": "07db8f037ff720c8b8b242879c14531f",
"text": "PURPOSE\nMatriptase-2 (also known as TMPRSS6) is a critical regulator of the iron-regulatory hormone hepcidin in the liver; matriptase-2 cleaves membrane-bound hemojuvelin and consequently alters bone morphogenetic protein (BMP) signaling. Hemojuvelin and hepcidin are expressed in the retina and play a critical role in retinal iron homeostasis. However, no information on the expression and function of matriptase-2 in the retina is available. The purpose of the present study was to examine the retinal expression of matriptase-2 and its role in retinal iron homeostasis.\n\n\nMETHODS\nRT-PCR, quantitative PCR (qPCR), and immunofluorescence were used to analyze the expression of matriptase-2 and other iron-regulatory proteins in the mouse retina. Polarized localization of matriptase-2 in the RPE was evaluated using markers for the apical and basolateral membranes. Morphometric analysis of retinas from wild-type and matriptase-2 knockout (Tmprss6(msk/msk) ) mice was also performed. Retinal iron status in Tmprss6(msk/msk) mice was evaluated by comparing the expression levels of ferritin and transferrin receptor 1 between wild-type and knockout mice. BMP signaling was monitored by the phosphorylation status of Smads1/5/8 and expression levels of Id1 while interleukin-6 signaling was monitored by the phosphorylation status of STAT3.\n\n\nRESULTS\nMatriptase-2 is expressed in the mouse retina with expression detectable in all retinal cell types. Expression of matriptase-2 is restricted to the apical membrane in the RPE where hemojuvelin, the substrate for matriptase-2, is also present. There is no marked difference in retinal morphology between wild-type mice and Tmprss6(msk/msk) mice, except minor differences in specific retinal layers. The knockout mouse retina is iron-deficient, demonstrable by downregulation of the iron-storage protein ferritin and upregulation of transferrin receptor 1 involved in iron uptake. Hepcidin is upregulated in Tmprss6(msk/msk) mouse retinas, particularly in the neural retina. BMP signaling is downregulated while interleukin-6 signaling is upregulated in Tmprss6(msk/msk) mouse retinas, suggesting that the upregulaton of hepcidin in knockout mouse retinas occurs through interleukin-6 signaling and not through BMP signaling.\n\n\nCONCLUSIONS\nThe iron-regulatory serine protease matriptase-2 is expressed in the retina, and absence of this enzyme leads to iron deficiency and increased expression of hemojuvelin and hepcidin in the retina. The upregulation of hepcidin expression in Tmprss6(msk/msk) mouse retinas does not occur via BMP signaling but likely via the proinflammatory cytokine interleukin-6. We conclude that matriptase-2 is a critical participant in retinal iron homeostasis.",
"title": ""
},
{
"docid": "f34b41a7f0dd902119197550b9bcf111",
"text": "Tachyzoites, bradyzoites (in tissue cysts), and sporozoites (in oocysts) are the three infectious stages of Toxoplasma gondii. The prepatent period (time to shedding of oocysts after primary infection) varies with the stage of T. gondii ingested by the cat. The prepatent period (pp) after ingesting bradyzoites is short (3-10 days) while it is long (18 days or longer) after ingesting oocysts or tachyzoites, irrespective of the dose. The conversion of bradyzoites to tachyzoites and tachyzoites to bradyzoites is biologically important in the life cycle of T. gondii. In the present paper, the pp was used to study in vivo conversion of tachyzoites to bradyzoites using two isolates, VEG and TgCkAr23. T. gondii organisms were obtained from the peritoneal exudates (pex) of mice inoculated intraperitoneally (i.p.) with these isolates and administered to cats orally by pouring in the mouth or by a stomach tube. In total, 94 of 151 cats shed oocysts after ingesting pex. The pp after ingesting pex was short (5-10 days) in 50 cats, intermediate (11-17) in 30 cats, and long (18 or higher) in 14 cats. The strain of T. gondii (VEG, TgCKAr23) or the stage (bradyzoite, tachyzoite, and sporozoite) used to initiate infection in mice did not affect the results. In addition, six of eight cats fed mice infected 1-4 days earlier shed oocysts with a short pp; the mice had been inoculated i.p. with bradyzoites of the VEG strain and their whole carcasses were fed to cats 1, 2, 3, or 4 days post-infection. Results indicate that bradyzoites may be formed in the peritoneal cavities of mice inoculated intraperitoneally with T. gondii and some bradyzoites might give rise directly to bradyzoites without converting to tachyzoites.",
"title": ""
},
{
"docid": "343f45efbdbf654c421b99927c076c5d",
"text": "As software engineering educators, it is important for us to realize the increasing domain-specificity of software, and incorporate these changes in our design of teaching material. Bioinformatics software is an example of immensely complex and critical scientific software and this domain provides an excellent illustration of the role of computing in the life sciences. To study bioinformatics from a software engineering standpoint, we conducted an exploratory survey of bioinformatics developers. The survey had a range of questions about people, processes and products. We learned that practices like extreme programming, requirements engineering and documentation. As software engineering educators, we realized that the survey results had important implications for the education of bioinformatics professionals. We also investigated the current status of software engineering education in bioinformatics, by examining the curricula of more than fifty bioinformatics programs and the contents of over fifteen textbooks. We observed that there was no mention of the role and importance of software engineering practices essential for creating dependable software systems. Based on our findings and existing literature we present a set of recommendations for improving software engineering education in bioinformatics.",
"title": ""
},
{
"docid": "cc980260540d9e9ae8e7219ff9424762",
"text": "The persuasive design of e-commerce websites has been shown to support people with online purchases. Therefore, it is important to understand how persuasive applications are used and assimilated into e-commerce website designs. This paper demonstrates how the PSD model’s persuasive features could be used to build a bridge supporting the extraction and evaluation of persuasive features in such e-commerce websites; thus practically explaining how feature implementation can enhance website persuasiveness. To support a deeper understanding of persuasive e-commerce website design, this research, using the Persuasive Systems Design (PSD) model, identifies the distinct persuasive features currently assimilated in ten successful e-commerce websites. The results revealed extensive use of persuasive features; particularly features related to dialogue support, credibility support, and primary task support; thus highlighting weaknesses in the implementation of social support features. In conclusion we suggest possible ways for enhancing persuasive feature implementation via appropriate contextual examples and explanation.",
"title": ""
},
{
"docid": "2f9e5a34137fe7871c9388078c57dc8e",
"text": "This paper presents a new model of measuring semantic similarity in the taxonomy of WordNet. The model takes the path length between two concepts and IC value of each concept as its metric, furthermore, the weight of two metrics can be adapted artificially. In order to evaluate our model, traditional and widely used datasets are used. Firstly, coefficients of correlation between human ratings of similarity and six computational models are calculated, the result shows our new model outperforms their homologues. Then, the distribution graphs of similarity value of 65 word pairs are discussed our model having no faulted zone more centralized than other five methods. So our model can make up the insufficient of other methods which only using one metric(path length or IC value) in their model.",
"title": ""
},
{
"docid": "1056fbe244f25672680ea45d6e8a4c73",
"text": "In this paper, we address the problem of reconstructing an object’s surface from a single image using generative networks. First, we represent a 3D surface with an aggregation of dense point clouds from multiple views. Each point cloud is embedded in a regular 2D grid aligned on an image plane of a viewpoint, making the point cloud convolution-favored and ordered so as to fit into deep network architectures. The point clouds can be easily triangulated by exploiting connectivities of the 2D grids to form mesh-based surfaces. Second, we propose an encoder-decoder network that generates such kind of multiple view-dependent point clouds from a single image by regressing their 3D coordinates and visibilities. We also introduce a novel geometric loss that is able to interpret discrepancy over 3D surfaces as opposed to 2D projective planes, resorting to the surface discretization on the constructed meshes. We demonstrate that the multi-view point regression network outperforms state-of-the-art methods with a significant improvement on challenging datasets.",
"title": ""
},
{
"docid": "f9ba9cb0d10c6e44e40c7a5f06e87b5e",
"text": "Graphomotor impressions are a product of complex cognitive, perceptual and motor skills and are widely used as psychometric tools for the diagnosis of a variety of neuro-psychological disorders. Apparent deformations in these responses are quantified as errors and are used are indicators of various conditions. Contrary to conventional assessment methods where manual analysis of impressions is carried out by trained clinicians, an automated scoring system is marked by several challenges. Prior to analysis, such computerized systems need to extract and recognize individual shapes drawn by subjects on a sheet of paper as an important pre-processing step. The aim of this study is to apply deep learning methods to recognize visual structures of interest produced by subjects. Experiments on figures of Bender Gestalt Test (BGT), a screening test for visuo-spatial and visuo-constructive disorders, produced by 120 subjects, demonstrate that deep feature representation brings significant improvements over classical approaches. The study is intended to be extended to discriminate coherent visual structures between produced figures and expected prototypes.",
"title": ""
}
] | scidocsrr |
c39d3d007237b00c9aff9aaa4a0e6059 | EFFECTS OF INTERNET USE AND SOCIAL RESOURCES ON CHANGES IN DEPRESSION | [
{
"docid": "1a4a25e533adcd5ae0a1ce55ddcd80df",
"text": "The model introduced and tested in the current study suggests that lonely and depressed individuals may develop a preference for online social interaction, which, in turn, leads to negative outcomes associated with their Internet use. Participants completed measures of preference for online social interaction, depression, loneliness, problematic Internet use, and negative outcomes resulting from their Internet use. Results indicated that psychosocial health predicted levels of preference for online social interaction, which, in turn, predicted negative outcomes associated with problematic Internet use. In addition, the results indicated that the influence of psychosocial distress on negative outcomes due to Internet use is mediated by preference for online socialization and other symptoms of problematic Internet use. The results support the current hypothesis that that individuals’ preference for online, rather than face-to-face, social interaction plays an important role in the development of negative consequences associated with problematic Internet use.",
"title": ""
},
{
"docid": "a98c32ca34b5096a38d29a54ece2ba0b",
"text": "Those who feel better able to express their “true selves” in Internet rather than face-to-face interaction settings are more likely to form close relationships with people met on the Internet (McKenna, Green, & Gleason, this issue). Building on these correlational findings from survey data, we conducted three laboratory experiments to directly test the hypothesized causal role of differential self-expression in Internet relationship formation. Experiments 1 and 2, using a reaction time task, found that for university undergraduates, the true-self concept is more accessible in memory during Internet interactions, and the actual self more accessible during face-to-face interactions. Experiment 3 confirmed that people randomly assigned to interact over the Internet (vs. face to face) were better able to express their true-self qualities to their partners.",
"title": ""
}
] | [
{
"docid": "d81b67d0a4129ac2e118c9babb59299e",
"text": "Motivation\nA large number of newly sequenced proteins are generated by the next-generation sequencing technologies and the biochemical function assignment of the proteins is an important task. However, biological experiments are too expensive to characterize such a large number of protein sequences, thus protein function prediction is primarily done by computational modeling methods, such as profile Hidden Markov Model (pHMM) and k-mer based methods. Nevertheless, existing methods have some limitations; k-mer based methods are not accurate enough to assign protein functions and pHMM is not fast enough to handle large number of protein sequences from numerous genome projects. Therefore, a more accurate and faster protein function prediction method is needed.\n\n\nResults\nIn this paper, we introduce DeepFam, an alignment-free method that can extract functional information directly from sequences without the need of multiple sequence alignments. In extensive experiments using the Clusters of Orthologous Groups (COGs) and G protein-coupled receptor (GPCR) dataset, DeepFam achieved better performance in terms of accuracy and runtime for predicting functions of proteins compared to the state-of-the-art methods, both alignment-free and alignment-based methods. Additionally, we showed that DeepFam has a power of capturing conserved regions to model protein families. In fact, DeepFam was able to detect conserved regions documented in the Prosite database while predicting functions of proteins. Our deep learning method will be useful in characterizing functions of the ever increasing protein sequences.\n\n\nAvailability and implementation\nCodes are available at https://bhi-kimlab.github.io/DeepFam.",
"title": ""
},
{
"docid": "e1336d3d403f416c3899abf7386122d9",
"text": "Artificial synaptic devices have attracted a broad interest for hardware implementation of brain-inspired neuromorphic systems. In this letter, a short-term plasticity simulation in an indium-gallium-zinc oxide (IGZO) electric-double-layer (EDL) transistor is investigated. For synaptic facilitation and depression function emulation, three-terminal EDL transistor is reduced to a two-terminal synaptic device with two modified connection schemes. Furthermore, high-pass and low-pass filtering characteristics are also successfully emulated not only for fixed-rate spike train but also for Poisson-like spike train. Our results suggest that IGZO-based EDL transistors operated in two terminal mode can be used as the building blocks for brain-like chips and neuromorphic systems.",
"title": ""
},
{
"docid": "1f15775000a1837cfc168a91c4c1a2ae",
"text": "In the recent aging society, studies on health care services have been actively conducted to provide quality services to medical consumers in wire and wireless environments. However, there are some problems in these health care services due to the lack of personalized service and the uniformed way in services. For solving these issues, studies on customized services in medical markets have been processed. However, because a diet recommendation service is only focused on the personal disease information, it is difficult to provide specific customized services to users. This study provides a customized diet recommendation service for preventing and managing coronary heart disease in health care services. This service provides a customized diet to customers by considering the basic information, vital sign, family history of diseases, food preferences according to seasons and intakes for the customers who are concerning about the coronary heart disease. The users who receive this service can use a customized diet service differed from the conventional service and that supports continuous services and helps changes in customers living habits.",
"title": ""
},
{
"docid": "6bafdd357ad44debeda78d911a69da90",
"text": "We present a framework to tackle combinatorial optimization problems using neural networks and reinforcement learning. We focus on the traveling salesman problem (TSP) and train a recurrent neural network that, given a set of city coordinates, predicts a distribution over different city permutations. Using negative tour length as the reward signal, we optimize the parameters of the recurrent neural network using a policy gradient method. Without much engineering and heuristic designing, Neural Combinatorial Optimization achieves close to optimal results on 2D Euclidean graphs with up to 100 nodes. These results, albeit still quite far from state-of-the-art, give insights into how neural networks can be used as a general tool for tackling combinatorial optimization problems.",
"title": ""
},
{
"docid": "6b5599f9041ca5dab06620ce9ee9e2fb",
"text": "Poor nutrition can lead to reduced immunity, increased susceptibility to disease, impaired physical and mental development, and reduced productivity. A conversational agent can support people as a virtual coach, however building such systems still have its associated challenges and limitations. This paper describes the background and motivation for chatbot systems in the context of healthy nutrition recommendation. We discuss current challenges associated with chatbot application, we tackled technical, theoretical, behavioural, and social aspects of the challenges. We then propose a pipeline to be used as guidelines by developers to implement theoretically and technically robust chatbot systems. Keywords-Health, Conversational agent, Recommender systems, HCI, Behaviour Change, Artificial intelligence",
"title": ""
},
{
"docid": "1d9b1ce73d8d2421092bb5a70016a142",
"text": "Social networks have the surprising property of being \"searchable\": Ordinary people are capable of directing messages through their network of acquaintances to reach a specific but distant target person in only a few steps. We present a model that offers an explanation of social network searchability in terms of recognizable personal identities: sets of characteristics measured along a number of social dimensions. Our model defines a class of searchable networks and a method for searching them that may be applicable to many network search problems, including the location of data files in peer-to-peer networks, pages on the World Wide Web, and information in distributed databases.",
"title": ""
},
{
"docid": "068a8ad7161ed2c4af5e5c3208c35c00",
"text": "Two field studies and a laboratory study examined the influence of reward for high performance on experienced performance pressure, intrinsic interest and creativity. Study 1 found that employees’ expected reward for high performance was positively related to performance pressure which, in turn, was positively associated with the employees’ interest in their jobs. Study 2 replicated this finding and showed that intrinsic interest, produced by performance pressure, was positively related to supervisors’ ratings of creative performance. Study 3 found that college students’ receipt of reward for high performance increased their experienced performance pressure which, in turn, was positively related to intrinsic interest and creativity. Copyright # 2008 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "6f19b45fbbe4385f86e345d4f5de2219",
"text": "Objective To evaluate ten-year survival and clinical performance of resin-based composite restorations placed at increased vertical dimension as a 'Dahl' type appliance to manage localised anterior tooth wear.Design A prospective survival analysis of restorations provided at a single centre.Setting UK NHS hospital and postgraduate institute.Methods The clinical performance of 283 composite resin restorations on 26 patients with localised anterior tooth wear was reviewed after a ten year follow-up period. The study used modified United States Public Health Service (USPHS) criteria for assessing the restorations. Survival of the restorations was analysed using Kaplan-Meier survival curves, the log-rank test, and the Cox proportional hazards regression analysis.Results The results indicated that the median survival time for composite resin restorations was 5.8 years and 4.75 years for replacement restorations when all types of failure were considered. The restorations commonly failed as a result of wear, fracture and marginal discoloration. The factors that significantly influenced the survival of these restorations were the incisal relationship, aetiology, material used, and the nature of opposing dentition. The biological complications associated with this treatment regime were rare. Patient satisfaction remained high despite the long term deterioration of the restorations.Conclusion With some degree of maintenance, repeated use of composite resin restorations to treat localised anterior tooth wear at an increased occlusal vertical dimension is a viable treatment option over a ten-year period.",
"title": ""
},
{
"docid": "db76ba085f43bc826f103c6dd4e2ebb5",
"text": "It has been shown that Chinese poems can be successfully generated by sequence-to-sequence neural models, particularly with the attention mechanism. A potential problem of this approach, however, is that neural models can only learn abstract rules, while poem generation is a highly creative process that involves not only rules but also innovations for which pure statistical models are not appropriate in principle. This work proposes a memory-augmented neural model for Chinese poem generation, where the neural model and the augmented memory work together to balance the requirements of linguistic accordance and aesthetic innovation, leading to innovative generations that are still rule-compliant. In addition, it is found that the memory mechanism provides interesting flexibility that can be used to generate poems with different styles.",
"title": ""
},
{
"docid": "f4ebbcebefbcc1ba8b6f8e5bf6096645",
"text": "With advances in wireless communication technology, more and more people depend heavily on portable mobile devices for businesses, entertainments and social interactions. Although such portable mobile devices can offer various promising applications, their computing resources remain limited due to their portable size. This however can be overcome by remotely executing computation-intensive tasks on clusters of near by computers known as cloudlets. As increasing numbers of people access the Internet via mobile devices, it is reasonable to envision in the near future that cloudlet services will be available for the public through easily accessible public wireless metropolitan area networks (WMANs). However, the outdated notion of treating cloudlets as isolated data-centers-in-a-box must be discarded as there are clear benefits to connecting multiple cloudlets together to form a network. In this paper we investigate how to balance the workload between multiple cloudlets in a network to optimize mobile application performance. We first introduce a system model to capture the response times of offloaded tasks, and formulate a novel optimization problem, that is to find an optimal redirection of tasks between cloudlets such that the maximum of the average response times of tasks at cloudlets is minimized. We then propose a fast, scalable algorithm for the problem. We finally evaluate the performance of the proposed algorithm through experimental simulations. The experimental results demonstrate the significant potential of the proposed algorithm in reducing the response times of tasks.",
"title": ""
},
{
"docid": "4f560deecd54c9b809ce1a1e04512926",
"text": "BACKGROUND\nNurses in Sweden have a high absence due to illness and many retire before the age of sixty. Factors at work as well as in private life may contribute to health problems. To maintain a healthy work-force there is a need for actions on work-life balance in a salutogenic perspective. The aim of this study was to explore perceptions of resources in everyday life to balance work and private life among nurses in home help service.\n\n\nMETHODS\nThirteen semi-structured individual interviews and two focus group interviews were conducted with home help service nurses in Sweden. A qualitative content analysis was used for the analyses.\n\n\nRESULT\nIn the analyses, six themes of perceptions of recourses in everyday life emerged; (i) Reflecting on life. (ii) Being healthy and taking care of yourself. (iii) Having a meaningful job and a supportive work climate. (iv) Working shifts and part time. (v) Having a family and a supporting network. (vi) Making your home your castle.\n\n\nCONCLUSIONS\nThe result points out the complexity of work-life balance and support that the need for nurses to balance everyday life differs during different phases and transitions in life. In this salutogenic study, the result differs from studies with a pathogenic approach. Shift work and part time work were seen as two resources that contributed to flexibility and a prerequisite to work-life balance. To have time and energy for both private life and work was seen as essential. To reflect on and discuss life gave inner strength to set boundaries and to prioritize both in private life and in work life. Managers in nursing contexts have a great challenge to maintain and strengthen resources which enhance the work-life balance and health of nurses. Salutogenic research is needed to gain an understanding of resources that enhance work-life balance and health in nursing contexts.",
"title": ""
},
{
"docid": "dfb979060d5a1b8b7f5ff59957aa6b8e",
"text": "The present investigation provided a theoretically-driven analysis testing whether body shame helped account for the predicted positive associations between explicit weight bias in the form of possessing anti-fat attitudes (i.e., dislike, fear of fat, and willpower beliefs) and engaging in fat talk among 309 weight-diverse college women. We also evaluated whether self-compassion served as a protective factor in these relationships. Robust non-parametric bootstrap resampling procedures adjusted for body mass index (BMI) revealed stronger indirect and conditional indirect effects for dislike and fear of fat attitudes and weaker, marginal effects for the models inclusive of willpower beliefs. In general, the indirect effect of anti-fat attitudes on fat talk via body shame declined with increasing levels of self-compassion. Our preliminary findings may point to useful process variables to target in mitigating the impact of endorsing anti-fat prejudice on fat talk in college women and may help clarify who is at higher risk.",
"title": ""
},
{
"docid": "00cdaa724f262211919d4c7fc5bb0442",
"text": "With Tor being a popular anonymity network, many attacks have been proposed to break its anonymity or leak information of a private communication on Tor. However, guaranteeing complete privacy in the face of an adversary on Tor is especially difficult because Tor relays are under complete control of world-wide volunteers. Currently, one can gain private information, such as circuit identifiers and hidden service identifiers, by running Tor relays and can even modify their behaviors with malicious intent. This paper presents a practical approach to effectively enhancing the security and privacy of Tor by utilizing Intel SGX, a commodity trusted execution environment. We present a design and implementation of Tor, called SGX-Tor, that prevents code modification and limits the information exposed to untrusted parties. We demonstrate that our approach is practical and effectively reduces the power of an adversary to a traditional network-level adversary. Finally, SGX-Tor incurs moderate performance overhead; the end-to-end latency and throughput overheads for HTTP connections are 3.9% and 11.9%, respectively.",
"title": ""
},
{
"docid": "d35bc5ef2ea3ce24bbba87f65ae93a25",
"text": "Fog computing, complementary to cloud computing, has recently emerged as a new paradigm that extends the computing infrastructure from the center to the edge of the network. This article explores the design of a fog computing orchestration framework to support IoT applications. In particular, we focus on how the widely adopted cloud computing orchestration framework can be customized to fog computing systems. We first identify the major challenges in this procedure that arise due to the distinct features of fog computing. Then we discuss the necessary adaptations of the orchestration framework to accommodate these challenges.",
"title": ""
},
{
"docid": "0ac679740e0e3911af04be9464f76a7d",
"text": "Max-Min Fairness is a flexible resource allocation mechanism used in most datacenter schedulers. However, an increasing number of jobs have hard placement constraints, restricting the machines they can run on due to special hardware or software requirements. It is unclear how to define, and achieve, max-min fairness in the presence of such constraints. We propose Constrained Max-Min Fairness (CMMF), an extension to max-min fairness that supports placement constraints, and show that it is the only policy satisfying an important property that incentivizes users to pool resources. Optimally computing CMMF is challenging, but we show that a remarkably simple online scheduler, called Choosy, approximates the optimal scheduler well. Through experiments, analysis, and simulations, we show that Choosy on average differs 2% from the optimal CMMF allocation, and lets jobs achieve their fair share quickly.",
"title": ""
},
{
"docid": "edcf1cb4d09e0da19c917eab9eab3b23",
"text": "The paper describes a computerized process of myocardial perfusion diagnosis from cardiac single proton emission computed tomography (SPECT) images using data mining and knowledge discovery approach. We use a six-step knowledge discovery process. A database consisting of 267 cleaned patient SPECT images (about 3000 2D images), accompanied by clinical information and physician interpretation was created first. Then, a new user-friendly algorithm for computerizing the diagnostic process was designed and implemented. SPECT images were processed to extract a set of features, and then explicit rules were generated, using inductive machine learning and heuristic approaches to mimic cardiologist's diagnosis. The system is able to provide a set of computer diagnoses for cardiac SPECT studies, and can be used as a diagnostic tool by a cardiologist. The achieved results are encouraging because of the high correctness of diagnoses.",
"title": ""
},
{
"docid": "622d11d4eefeacbed785ee6fcc14b69b",
"text": "n our nursing program, we require a transcript for every course taken at any university or college, and it is always frustrating when we have to wait for copies to arrive before making our decisions. To be honest, if a candidate took Religion 101 at a community college and later transferred to the BSN program, I would be willing to pass on the community college transcript, but the admissions office is less flexible. And, although we used to be able to ask the student to have another copy sent if we did not have a transcript in the file, we now must wait for the student to have the college upload the transcript into an admissions system andwait for verification. I can assure you, most nurses, like other students today, take a lot of courses across many colleges without getting a degree. I sometimes have as many as 10 transcripts to review. When I saw an article titled “Blockchain: Letting Students Own Their Credentials” (Schaffnauser, 2017), I was therefore intrigued. I had already heard of blockchain as a tool to take the middleman out of the loop when doing financial transactions with Bitcoin. Now the thought of students owning their own credentials got me thinking about the movement toward new forms of credentialing from professional organizations (e.g., badges, certification documents). Hence, my decision to explore blockchain and its potential. Let’s start with some definitions. Simply put, blockchain is a distributed digital ledger. Technically speaking, it is “a peer-to-peer (P2P) distributed ledger technology for a new generation of transactional applications that establishes transparency and trust” (Linn & Koo, n.d.). Watter (2016) noted that “the blockchain is a distributed database that provides an unalterable, (semi-) public record of digital transactions. Each block aggregates a timestamped batch of transactions to be included in the ledger — or rather, in the blockchain. Each block is identified by a cryptographic signature. The blockchain contains an un-editable record of all the transactions made.” If we take this apart, here is what we have: a database that is distributed to computers associated with members of the network. Thus, rather than trying to access one central database, all members have copies of the database. Each time a transaction occurs, it is placed in a block that is given a time stamp and is “digitally signed using public key cryptography — which uses both a public and private key” (Watter, 2016). Locks are then connected so there is a historical record and they cannot be altered. According to Lin and Koo",
"title": ""
},
{
"docid": "3415fb5e9b994d6015a17327fc0fe4f4",
"text": "A human stress monitoring patch integrates three sensors of skin temperature, skin conductance, and pulsewave in the size of stamp (25 mm × 15 mm × 72 μm) in order to enhance wearing comfort with small skin contact area and high flexibility. The skin contact area is minimized through the invention of an integrated multi-layer structure and the associated microfabrication process; thus being reduced to 1/125 of that of the conventional single-layer multiple sensors. The patch flexibility is increased mainly by the development of flexible pulsewave sensor, made of a flexible piezoelectric membrane supported by a perforated polyimide membrane. In the human physiological range, the fabricated stress patch measures skin temperature with the sensitivity of 0.31 Ω/°C, skin conductance with the sensitivity of 0.28 μV/0.02 μS, and pulse wave with the response time of 70 msec. The skin-attachable stress patch, capable to detect multimodal bio-signals, shows potential for application to wearable emotion monitoring.",
"title": ""
},
{
"docid": "b7c9e2900423a0cd7cc21c3aa95ca028",
"text": "In this article, the state of the art of research on emotion work (emotional labor) is summarized with an emphasis on its effects on well-being. It starts with a definition of what emotional labor or emotion work is. Aspects of emotion work, such as automatic emotion regulation, surface acting, and deep acting, are discussed from an action theory point of view. Empirical studies so far show that emotion work has both positive and negative effects on health. Negative effects were found for emotional dissonance. Concepts related to the frequency of emotion expression and the requirement to be sensitive to the emotions of others had both positive and negative effects. Control and social support moderate relations between emotion work variables and burnout and job satisfaction. Moreover, there is empirical evidence that the cooccurrence of emotion work and organizational problems leads to high levels of burnout. D 2002 Published by Elsevier Science Inc.",
"title": ""
},
{
"docid": "bc72b7e2a2b151d9396cd9e51c049e9a",
"text": "Low resourced languages suffer from limited training data and resources. Data augmentation is a common approach to increasing the amount of training data. Additional data is synthesized by manipulating the original data with a variety of methods. Unlike most previous work that focuses on a single technique, we combine multiple, complementary augmentation approaches. The first stage adds noise and perturbs the speed of additional copies of the original audio. The data is further augmented in a second stage, where a novel fMLLR-based augmentation is applied to bottleneck features to further improve performance. A reduction in word error rate is demonstrated on four languages from the IARPA Babel program. We present an analysis exploring why these techniques are beneficial.",
"title": ""
}
] | scidocsrr |
77471bab1c814fe955730bc9b60d8fef | Efficient Storage of Multi-Sensor Object-Tracking Data | [
{
"docid": "4fa73e04ccc8620c12aaea666ea366a6",
"text": "The popularity of the Web and Internet commerce provides many extremely large datasets from which information can be gleaned by data mining. This book focuses on practical algorithms that have been used to solve key problems in data mining and can be used on even the largest datasets. It begins with a discussion of the map-reduce framework, an important tool for parallelizing algorithms automatically. The tricks of locality-sensitive hashing are explained. This body of knowledge, which deserves to be more widely known, is essential when seeking similar objects in a very large collection without having to compare each pair of objects. Stream processing algorithms for mining data that arrives too fast for exhaustive processing are also explained. The PageRank idea and related tricks for organizing the Web are covered next. Other chapters cover the problems of finding frequent itemsets and clustering, each from the point of view that the data is too large to fit in main memory, and two applications: recommendation systems and Web advertising, each vital in e-commerce. This second edition includes new and extended coverage on social networks, machine learning and dimensionality reduction. Written by leading authorities in database and web technologies, it is essential reading for students and practitioners alike",
"title": ""
}
] | [
{
"docid": "5b0e33ede34f6532a48782e423128f49",
"text": "The literature on globalisation reveals wide agreement concerning the relevance of international sourcing strategies as key competitive factors for companies seeking globalisation, considering such strategies to be a purchasing management approach focusing on supplies from vendors in the world market, rather than relying exclusively on domestic offerings (Petersen, Frayer, & Scannel, 2000; Stevens, 1995; Trent & Monczka, 1998). Thus, the notion of “international sourcing” mentioned by these authors describes the level of supply globalisation in companies’ purchasing strategy, as related to supplier source (Giunipero & Pearcy, 2000; Levy, 1995; Trent & Monczka, 2003b).",
"title": ""
},
{
"docid": "c296244ea4283a43623d3a3aabd4d672",
"text": "With growing interest in Chinese Language Processing, numerous NLP tools (e.g., word segmenters, part-of-speech taggers, and parsers) for Chinese have been developed all over the world. However, since no large-scale bracketed corpora are available to the public, these tools are trained on corpora with different segmentation criteria, part-of-speech tagsets and bracketing guidelines, and therefore, comparisons are difficult. As a first step towards addressing this issue, we have been preparing a large bracketed corpus since late 1998. The first two installments of the corpus, 250 thousand words of data, fully segmented, POS-tagged and syntactically bracketed, have been released to the public via LDC (www.ldc.upenn.edu). In this paper, we discuss several Chinese linguistic issues and their implications for our treebanking efforts and how we address these issues when developing our annotation guidelines. We also describe our engineering strategies to improve speed while ensuring annotation quality.",
"title": ""
},
{
"docid": "680b2b1c938e381b4070a4d0a44d4ec8",
"text": "The significance of aligning IT with corporate strategy is widely recognized, but the lack of appropriate methodologies prevented practitioners from integrating IT projects with competitive strategies effectively. This article addresses the issue of deploying Web services strategically using the concept of a widely accepted management tool, the balanced scorecard. A framework is developed to match potential benefits of Web services with corporate strategy in four business dimensions: innovation and learning, internal business process, customer, and financial. It is argued that the strategic benefits of implementing Web services can only be realized if the Web services initiatives are planned and implemented within the framework of an IT strategy that is designed to support the business strategy of a firm.",
"title": ""
},
{
"docid": "f266646478196476fb93ea507ea6e23e",
"text": "The aim of this paper is to develop a human tracking system that is resistant to environmental changes and covers wide area. Simply structured floor sensors are low-cost and can track people in a wide area. However, the sensor reading is discrete and missing; therefore, footsteps do not represent the precise location of a person. A Markov chain Monte Carlo method (MCMC) is a promising tracking algorithm for these kinds of signals. We applied two prediction models to the MCMC: a linear Gaussian model and a highly nonlinear bipedal model. The Gaussian model was efficient in terms of computational cost while the bipedal model discriminated people more accurate than the Gaussian model. The Gaussian model can be used to track a number of people, and the bipedal model can be used in situations where more accurate tracking is required.",
"title": ""
},
{
"docid": "9c9c031767526777ee680f184de4b092",
"text": "The study of interleukin-23 (IL-23) over the past 8 years has led to the realization that cellular immunity is far more complex than previously appreciated, because it is controlled by additional newly identified players. From the analysis of seemingly straightforward cytokine regulation of autoimmune diseases, many limitations of the established paradigms emerged that required reevaluation of the 'rules' that govern the initiation and maintenance of immune responses. This information led to a major revision of the T-helper 1 (Th1)/Th2 hypothesis and discovery of an unexpected link between transforming growth factor-beta-dependent Th17 and inducible regulatory T cells. The aim of this review is to explore the multiple characteristics of IL-23 with respect to its 'id' in autoimmunity, 'ego' in T-cell help, and 'superego' in defense against mucosal pathogens.",
"title": ""
},
{
"docid": "e1d0c07f9886d3258f0c5de9dd372e17",
"text": "strategies and tools must be based on some theory of learning and cognition. Of course, crafting well-articulated views that clearly answer the major epistemological questions of human learning has exercised psychologists and educators for centuries. What is a mind? What does it mean to know something? How is our knowledge represented and manifested? Many educators prefer an eclectic approach, selecting “principles and techniques from the many theoretical perspectives in much the same way we might select international dishes from a smorgasbord, choosing those we like best and ending up with a meal which represents no nationality exclusively and a design technology based on no single theoretical base” (Bednar et al., 1995, p. 100). It is certainly the case that research within collaborative educational learning tools has drawn upon behavioral, cognitive information processing, humanistic, and sociocultural theory, among others, for inspiration and justification. Problems arise, however, when tools developed in the service of one epistemology, say cognitive information processing, are integrated within instructional systems designed to promote learning goals inconsistent with it. When concepts, strategies, and tools are abstracted from the theoretical viewpoint that spawned them, they are too often stripped of meaning and utility. In this chapter, we embed our discussion in learner-centered, constructivist, and sociocultural perspectives on collaborative technology, with a bias toward the third. The principles of these perspectives, in fact, provide the theoretical rationale for much of the research and ideas presented in this book. 2",
"title": ""
},
{
"docid": "bcc00e5db8f484a37528aae2740314f4",
"text": "Multi-Instance Multi-Label (MIML) is a learning framework where an example is associated with multiple labels and represented by a set of feature vectors (multiple instances). In the formalization of MIML learning, instances come from a single source (single view). To leverage multiple information sources (multi-view), we develop a multi-view MIML framework based on hierarchical Bayesian Network, and derive an effective learning algorithm based on variational inference. The model can naturally deal with examples in which some views could be absent (partial examples). On multi-view datasets, it is shown that our method is better than other multi-view and single-view approaches particularly in the presence of partial examples. On single-view benchmarks, extensive evaluation shows that our method is highly competitive or better than other MIML approaches on labeling examples and instances. Moreover, our method can effectively handle datasets with a large number of labels.",
"title": ""
},
{
"docid": "ffaa8edb1fccf68e6b7c066fb994510a",
"text": "A fast and precise determination of the DOA (direction of arrival) for immediate object classification becomes increasingly important for future automotive radar generations. Hereby, the elevation angle of an object is considered as a key parameter especially in complex urban environments. An antenna concept allowing the determination of object angles in azimuth and elevation is proposed and discussed in this contribution. This antenna concept consisting of a linear patch array and a cylindrical dielectric lens is implemented into a radar sensor and characterized in terms of angular accuracy and ambiguities using correlation algorithms and the CRLB (Cramer Rao Lower Bound).",
"title": ""
},
{
"docid": "aa65dc18169238ef973ef24efb03f918",
"text": "A number of national studies point to a trend in which highly selective and elite private and public universities are becoming less accessible to lower-income students. At the same time there have been surprisingly few studies of the actual characteristics and academic experiences of low-income students or comparisons of their undergraduate experience with those of more wealthy students. This paper explores the divide between poor and rich students, first comparing a group of selective US institutions and their number and percentage of Pell Grant recipients and then, using institutional data and results from the University of California Undergraduate Experience Survey (UCUES), presenting an analysis of the high percentage of low-income undergraduate students within the University of California system — who they are, their academic performance and quality of their undergraduate experience. Among our conclusions: The University of California has a strikingly higher number of lowincome students when compared to a sample group of twenty-four other selective public and private universities and colleges, including the Ivy Leagues and a sub-group of other California institutions such as Stanford and the University of Southern California. Indeed, the UC campuses of Berkeley, Davis, and UCLA each have more Pell Grant students than all of the eight Ivy League institutions combined. However, one out of three Pell Grant recipients at UC have at least one parent with a four-year college degree, calling into question the assumption that “low-income” and “first-generation” are interchangeable groups of students. Low-income students, and in particular Pell Grant recipients, at UC have only slightly lower GPAs than their more wealthy counterparts in both math, science and engineering, and in humanities and social science fields. Contrary to some previous research, we find that low-income students have generally the same academic and social satisfaction levels; and are similar in their sense of belonging within their campus communities. However, there are some intriguing results across UC campuses, with low-income students somewhat less satisfied at those campuses where there are more affluent student bodies and where lower-income students have a smaller presence. An imbalance between rich and poor is the oldest and most fatal ailment of all republics — Plutarch There has been a growing and renewed concern among scholars of higher education and policymakers about increasing socioeconomic disparities in American society. Not surprisingly, these disparities are increasingly reflected * The SERU Project is a collaborative study based at the Center for Studies in Higher Education at UC Berkeley and focused on developing new types of data and innovative policy relevant scholarly analyses on the academic and civic experience of students at major research universities. For further information on the project, see http://cshe.berkeley.edu/research/seru/ ** John Aubrey Douglass is Senior Research Fellow – Public Policy and Higher Education at the Center for Studies in Higher Education at UC Berkeley and coPI of the SERU Project; Gregg Thomson is Director of the Office of Student Research at UC Berkeley and a co-PI of the SERU Project. We would like to thank David Radwin at OSR and a SERU Project Research Associate for his collaboration with data analysis. 
in the enrollment of students in the nation’s cadre of highly selective, elite private universities, and increasingly among public universities. Particularly over the past three decades, “brand name” prestige private universities and colleges have moved to a high tuition fee and high financial aid model, with the concept that a significant portion of generated tuition revenue can be redirected toward financial aid for either low-income or merit-based scholarships. With rising costs, declining subsidization by state governments, and the shift of federal financial aid toward loans versus grants in aid, public universities are moving a low fee model toward what is best called a moderate fee and high financial aid model – a model that is essentially evolving. There is increasing evidence, however, that neither the private nor the evolving public tuition and financial aid model is working. Students from wealthy families congregate at the most prestigious private and public institutions, with significant variance depending on the state and region of the nation, reflecting the quality and composition of state systems of higher education. A 2004 study by Sandy Astin and Leticia Oseguera looked at a number of selective private and public universities and concluded that the number and percentage of low-income and middle-income families had declined while the number from wealthy families increased. “American higher education, in other words, is more socioeconomically stratified today than at any other time during the past three decades,” they note. One reason, they speculated, may be “the increasing competitiveness among prospective college students for admission to the country’s most selective colleges and universities” (Astin and Oseguera 2004). A more recent study by Danette Gerald and Kati Haycock (2006) looked at the socioeconomic status (SES) of undergraduate students at a selective group of fifty “premier” public universities and had a similar conclusion – but one more alarming because of the important historical mission of public universities to provide broad access, a formal mandate or social contract. Though more open to students from low-income families than their private counterparts, the premier publics had declined in the percentage of students with federally funded Pell Grants (federal grants to students generally with family incomes below $40,000 annually) when compared to other four-year public institutions in the nation. Ranging from $431 to a maximum of $4,731, Pell Grants, and the criteria for selection of recipients, has long served as a benchmark on SES access. Pell Grant students have, on average, a family income of only $19,300. On average, note Gerald and Haycock, the selected premier publics have some 22% of their enrolled undergraduates with Pell Grants; all public four-year institutions have some 31% with Pell Grants; private institutions have an average of around 14% (Gerald and Haycock 2006). But it is important to note that there are a great many dimensions in understanding equity and access among private and public higher education institutions (HEIs). For one, there is a need to disaggregate types of institutions, for example, private versus public, university versus community college.
Public and private institutions, and particularly highly selective universities and colleges, tend to draw from different demographic pools, with public universities largely linked to the socioeconomic stratification of their home state. Second, there are the factors related to rising tuition and increasingly complicated and, one might argue, inadequate approaches to financial aid in the U.S. With the slowdown in the US economy, the US Department of Education recently estimated that demand for Pell Grants exceeded projected demand by some 800,000 students; total applications for the grant program are up 16 percent over the previous year. This will require an additional $6 billion to the Pell Grant’s current budget of $14 billion next year. Economic downturns tend to push demand up for access to higher education among the middle and lower class, although most profoundly at the community college level. This phenomenon plus continued growth in the nation’s population, and in particular in states such as California, Texas and Florida, means an inadequate financial aid system, where the maximum Pell Grant award has remained largely the same for the last decade when adjusted for inflation, will be further eroded. But in light of the uncertainty in the economy and the lack of resolve at the federal level to support higher education, it is not clear the US government will fund the increased demand – it may cut the maximum award. And third, there are larger social trends, such as increased disparities in income and the erosion of public services, declines in the quality of many public schools, the stagnation and real declines for some socioeconomic groups in high school graduation rates; and the large increase in the number of part-time students, most of whom must work to stay financially solvent. This paper examines low-income, and upper income, student access to the University of California and how low-income access compares with a group of elite privates (specifically Ivy League institutions) and selective publics. Using data from the University of California’s Undergraduate Experience Survey (UCUES) and institutional data, we discuss what makes UC similar and different in the SES and demographic mix of students. Because the maximum Pell Grant is under $5,000, the cost of tuition alone is higher in the publics, and much higher in our group of selective privates, the percentage and number of Pell Grant students at an institution provides evidence of its resolve, creativity, and financial commitment to admit and enroll working and middle-class students. We then analyze the undergraduate experience of our designation of poor students (defined for this analysis as Pell Grant recipients) and rich students (from high-income families, defined as those with household incomes above $125,000 and no need-based aid). While including other income groups, we use these contrasting categories of wealth to observe differences in the background of students, their choice of major, general levels of satisfaction, academic performance, and sense of belonging at the university. There is very little analytical work on the characteristics and percepti",
"title": ""
},
{
"docid": "5506207c5d11a464b1bca39d6092089e",
"text": "Scalp recorded event-related potentials were used to investigate the neural activity elicited by emotionally negative and emotionally neutral words during the performance of a recognition memory task. Behaviourally, the principal difference between the two word classes was that the false alarm rate for negative items was approximately double that for the neutral words. Correct recognition of neutral words was associated with three topographically distinct ERP memory 'old/new' effects: an early, bilateral, frontal effect which is hypothesised to reflect familiarity-driven recognition memory; a subsequent left parietally distributed effect thought to reflect recollection of the prior study episode; and a late onsetting, right-frontally distributed effect held to be a reflection of post-retrieval monitoring. The old/new effects elicited by negative words were qualitatively indistinguishable from those elicited by neutral items and, in the case of the early frontal effect, of equivalent magnitude also. However, the left parietal effect for negative words was smaller in magnitude and shorter in duration than that elicited by neutral words, whereas the right frontal effect was not evident in the ERPs to negative items. These differences between neutral and negative words in the magnitude of the left parietal and right frontal effects were largely attributable to the increased positivity of the ERPs elicited by new negative items relative to the new neutral items. Together, the behavioural and ERP findings add weight to the view that emotionally valenced words influence recognition memory primarily by virtue of their high levels of 'semantic cohesion', which leads to a tendency for 'false recollection' of unstudied items.",
"title": ""
},
{
"docid": "980dc3d4b01caac3bf56df039d5ca513",
"text": "In this paper, we study object detection using a large pool of unlabeled images and only a few labeled images per category, named \"few-example object detection\". The key challenge consists in generating trustworthy training samples as many as possible from the pool. Using few training examples as seeds, our method iterates between model training and high-confidence sample selection. In training, easy samples are generated first and, then the poorly initialized model undergoes improvement. As the model becomes more discriminative, challenging but reliable samples are selected. After that, another round of model improvement takes place. To further improve the precision and recall of the generated training samples, we embed multiple detection models in our framework, which has proven to outperform the single model baseline and the model ensemble method. Experiments on PASCAL VOC'07, MS COCO'14, and ILSVRC'13 indicate that by using as few as three or four samples selected for each category, our method produces very competitive results when compared to the state-of-the-art weakly-supervised approaches using a large number of image-level labels.",
"title": ""
},
{
"docid": "7346e00ebadc27c1656e381dbbe39dd0",
"text": "This paper introduces a probabilistic framework for k-shot image classification. The goal is to generalise from an initial large-scale classification task to a separate task comprising new classes and small numbers of examples. The new approach not only leverages the feature-based representation learned by a neural network from the initial task (representational transfer), but also information about the classes (concept transfer). The concept information is encapsulated in a probabilistic model for the final layer weights of the neural network which acts as a prior for probabilistic k-shot learning. We show that even a simple probabilistic model achieves state-ofthe-art on a standard k-shot learning dataset by a large margin. Moreover, it is able to accurately model uncertainty, leading to well calibrated classifiers, and is easily extensible and flexible, unlike many recent approaches to k-shot learning.",
"title": ""
},
{
"docid": "eaca5794d84a96f8c8e7807cf83c3f00",
"text": "Background Women represent 15% of practicing general surgeons. Gender-based discrimination has been implicated as discouraging women from surgery. We sought to determine women's perceptions of gender-based discrimination in the surgical training and working environment. Methods Following IRB approval, we fielded a pilot survey measuring perceptions and impact of gender-based discrimination in medical school, residency training, and surgical practice. It was sent electronically to 1,065 individual members of the Association of Women Surgeons. Results We received 334 responses from medical students, residents, and practicing physicians with a response rate of 31%. Eighty-seven percent experienced gender-based discrimination in medical school, 88% in residency, and 91% in practice. Perceived sources of gender-based discrimination included superiors, physician peers, clinical support staff, and patients, with 40% emanating from women and 60% from men. Conclusions The majority of responses indicated perceived gender-based discrimination during medical school, residency, and practice. Gender-based discrimination comes from both sexes and has a significant impact on women surgeons.",
"title": ""
},
{
"docid": "1682c1be8397a4d8e859e76cdc849740",
"text": "With the advent of RFLPs, genetic linkage maps are now being assembled for a number of organisms including both inbred experimental populations such as maize and outbred natural populations such as humans. Accurate construction of such genetic maps requires multipoint linkage analysis of particular types of pedigrees. We describe here a computer package, called MAPMAKER, designed specifically for this purpose. The program uses an efficient algorithm that allows simultaneous multipoint analysis of any number of loci. MAPMAKER also includes an interactive command language that makes it easy for a geneticist to explore linkage data. MAPMAKER has been applied to the construction of linkage maps in a number of organisms, including the human and several plants, and we outline the mapping strategies that have been used.",
"title": ""
},
{
"docid": "7cd655bbea3b088618a196382b33ed1e",
"text": "Story generation is a well-recognized task in computational creativity research, but one that can be difficult to evaluate empirically. It is often inefficient and costly to rely solely on human feedback for judging the quality of generated stories. We address this by examining the use of linguistic analyses for automated evaluation, using metrics from existing work on predicting writing quality. We apply these metrics specifically to story continuation, where a model is given the beginning of a story and generates the next sentence, which is useful for systems that interactively support authors’ creativity in writing. We compare sentences generated by different existing models to human-authored ones according to the analyses. The results show some meaningful differences between the models, suggesting that this evaluation approach may be advantageous for future research.",
"title": ""
},
{
"docid": "e50ba614fc997f058f8d495b59c18af5",
"text": "We propose a model of natural language inference which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We extend past work in natural logic, which has focused on semantic containment and monotonicity, by incorporating both semantic exclusion and implicativity. Our model decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical semantic relation for each edit; propagates these relations upward through a semantic composition tree according to properties of intermediate nodes; and joins the resulting semantic relations across the edit sequence. A computational implementation of the model achieves 70% accuracy and 89% precision on the FraCaS test suite. Moreover, including this model as a component in an existing system yields significant performance gains on the Recognizing Textual Entailment challenge.",
"title": ""
},
{
"docid": "8fd762096225ed2474ed740835f5268d",
"text": "In recent years, we have witnessed a huge diffusion of building information modeling (BIM) approaches in the field of architectural design, although very little research has been undertaken to explore the value, criticalities, and advantages attributable to the application of these methodologies in the cultural heritage domain. Furthermore, the last developments in digital photogrammetry lead to the easy generation of reliable low-cost three-dimensional textured models that could be used in BIM platforms to create semanticaware objects that could compose a specific library of historical architectural elements. In this case, the transfer between the point cloud and its corresponding parametric model is not so trivial and the level of geometrical abstraction could not be suitable with the scope of the BIM. The aim of this paper is to explore and retrace the milestone works on this crucial topic in order to identify the unsolved issues and to propose and test a unique and simple workflow practitioner centered and based on the use of the latest available solutions for point cloud managing into commercial BIM platforms. © 2016 SPIE and IS&T [DOI: 10.1117/1.JEI.26.1.011007]",
"title": ""
},
{
"docid": "293cdb11d0701f9bd2ccfe82bc457ab8",
"text": "Modern neural network models have achieved the state-ofthe-art performance on relation extraction (RE) tasks. Although distant supervision (DS) can automatically generate training labels for RE, the effectiveness of DS highly depends on datasets and relation types, and sometimes it may introduce large labeling noises. In this paper, we propose a deep pattern diagnosis framework, DIAG-NRE, that aims to diagnose and improve neural relation extraction (NRE) models trained on DS-generated data. DIAG-NRE includes three stages: (1) The deep pattern extraction stage employs reinforcement learning to extract regular-expression-style patterns from NRE models. (2) The pattern refinement stage builds a pattern hierarchy to find the most representative patterns and lets human reviewers evaluate them quantitatively by annotating a certain number of pattern-matched examples. In this way, we minimize both the number of labels to annotate and the difficulty of writing heuristic patterns. (3) The weak label fusion stage fuses multiple weak label sources, including DS and refined patterns, to produce noise-reduced labels that can train a better NRE model. To demonstrate the broad applicability of DIAG-NRE, we use it to diagnose 14 relation types of two public datasets with one simple hyperparameter configuration. We observe different noise behaviors and obtain significant F1 improvements on all relation types suffering from large labeling noises.",
"title": ""
},
{
"docid": "357e09114978fc0ac1fb5838b700e6ca",
"text": "Instance level video object segmentation is an important technique for video editing and compression. To capture the temporal coherence, in this paper, we develop MaskRNN, a recurrent neural net approach which fuses in each frame the output of two deep nets for each object instance — a binary segmentation net providing a mask and a localization net providing a bounding box. Due to the recurrent component and the localization component, our method is able to take advantage of long-term temporal structures of the video data as well as rejecting outliers. We validate the proposed algorithm on three challenging benchmark datasets, the DAVIS-2016 dataset, the DAVIS-2017 dataset, and the Segtrack v2 dataset, achieving state-of-the-art performance on all of them.",
"title": ""
},
{
"docid": "a6a98d0599c1339c1f2c6a6c7525b843",
"text": "We consider a generalized version of the Steiner problem in graphs, motivated by the wire routing phase in physical VLSI design: given a connected, undirected distance graph with required classes of vertices and Steiner vertices, find a shortest connected subgraph containing at least one vertex of each required class. We show that this problem is NP-hard, even if there are no Steiner vertices and the graph is a tree. Moreover, the same complexity result holds if the input class Steiner graph additionally is embedded in a unit grid, if each vertex has degree at most three, and each class consists of no more than three vertices. For similar restricted versions, we prove MAX SNP-hardness and we show that there exists no polynomial-time approximation algorithm with a constant bound on the relative error, unless P = NP. We propose two efficient heuristics computing different approximate solutions in time 0(/E] + /VI log IV]) and in time O(c(lEl + IV1 log (VI)), respectively, where E is the set of edges in the given graph, V is the set of vertices, and c is the number of classes. We present some promising implementation results.",
"title": ""
}
] | scidocsrr |
ddd0bc1e647d084b60ab53d22620abc3 | Large-Scale Identification of Malicious Singleton Files | [
{
"docid": "87e583f3256576ffdd95853fc838a620",
"text": "The sheer volume of new malware found each day is growing at an exponential pace. This growth has created a need for automatic malware triage techniques that determine what malware is similar, what malware is unique, and why. In this paper, we present BitShred, a system for large-scale malware similarity analysis and clustering, and for automatically uncovering semantic inter- and intra-family relationships within clusters. The key idea behind BitShred is using feature hashing to dramatically reduce the high-dimensional feature spaces that are common in malware analysis. Feature hashing also allows us to mine correlated features between malware families and samples using co-clustering techniques. Our evaluation shows that BitShred speeds up typical malware triage tasks by up to 2,365x and uses up to 82x less memory on a single CPU, all with comparable accuracy to previous approaches. We also develop a parallelized version of BitShred, and demonstrate scalability within the Hadoop framework.",
"title": ""
}
] | [
{
"docid": "553719cb1cb8829ceaf8e0f1a40953ff",
"text": "“The distinctive faculties of Man are visibly expressed in his elevated cranial domeda feature which, though much debased in certain savage races, essentially characterises the human species. But, considering that the Neanderthal skull is eminently simial, both in its general and particular characters, I feel myself constrained to believe that the thoughts and desires which once dwelt within it never soared beyond those of a brute. The Andamaner, it is indisputable, possesses but the dimmest conceptions of the existence of the Creator of the Universe: his ideas on this subject, and on his own moral obligations, place him very little above animals of marked sagacity; nevertheless, viewed in connection with the strictly human conformation of his cranium, they are such as to specifically identify him with Homo sapiens. Psychical endowments of a lower grade than those characterising the Andamaner cannot be conceived to exist: they stand next to brute benightedness. (.) Applying the above argument to the Neanderthal skull, and considering . that it more closely conforms to the brain-case of the Chimpanzee, . there seems no reason to believe otherwise than that similar darkness characterised the being to which the fossil belonged” (King, 1864; pp. 96).",
"title": ""
},
{
"docid": "84903166bfeea7433e61c6992f637a25",
"text": "Sampling-based optimal planners, such as RRT*, almost-surely converge asymptotically to the optimal solution, but have provably slow convergence rates in high dimensions. This is because their commitment to finding the global optimum compels them to prioritize exploration of the entire problem domain even as its size grows exponentially. Optimization techniques, such as CHOMP, have fast convergence on these problems but only to local optima. This is because they are exploitative, prioritizing the immediate improvement of a path even though this may not find the global optimum of nonconvex cost functions. In this paper, we present a hybrid technique that integrates the benefits of both methods into a single search. A key insight is that applying local optimization to a subset of edges likely to improve the solution avoids the prohibitive cost of optimizing every edge in a global search. This is made possible by Batch Informed Trees (BIT*), an informed global technique that orders its search by potential solution quality. In our algorithm, Regionally Accelerated BIT* (RABIT*), we extend BIT* by using optimization to exploit local domain information and find alternative connections for edges in collision and accelerate the search. This improves search performance in problems with difficult-to-sample homotopy classes (e.g., narrow passages) while maintaining almost-sure asymptotic convergence to the global optimum. Our experiments on simulated random worlds and real data from an autonomous helicopter show that on certain difficult problems, RABIT* converges 1.8 times faster than BIT*. Qualitatively, in problems with difficult-to-sample homotopy classes, we show that RABIT* is able to efficiently transform paths to avoid obstacles.",
"title": ""
},
{
"docid": "adf9646a9c4c9e19a18f35a949d59f3d",
"text": "In this study we present a review of the emerging eld of meta-knowledge components as practised over the past decade among a variety of practitioners. We use the arti cially-de ned term `meta-knowledge' to encompass all those di erent but overlapping notions used by the Arti cial Intelligence and Software Engineering communities to represent reusable modelling frameworks: ontologies, problem-solving methods, experience factories and experience bases, patterns, to name a few. We then elaborate on how meta-knowledge is deployed in the context of system's design to improve its reliability by consistency checking, enhance its reuse potential, and manage its knowledge sharing. We speculate on its usefulness and explore technologies for supporting deployment of meta-knowledge. We argue that, despite the di erent approaches being followed in systems design by divergent communities, meta-knowledge is present in all cases, in a tacit or explicit form, and its utilisation depends on pragmatic aspects which we try to identify and critically review on criteria of e ectiveness. keywords: Ontologies, Problem-Solving Methods, Experienceware, Patterns, Design Types, Cost-E ective Analysis.",
"title": ""
},
{
"docid": "9c7bafb5279bca4deb90d603e8b59cfe",
"text": "BACKGROUND\nVirtual reality (VR) is an evolving technology that has been applied in various aspects of medicine, including the treatment of phobia disorders, pain distraction interventions, surgical training, and medical education. These applications have served to demonstrate the various assets offered through the use of VR.\n\n\nOBJECTIVE\nTo provide a background and rationale for the application of VR to neuropsychological assessment.\n\n\nMETHODS\nA brief introduction to VR technology and a review of current ongoing neuropsychological research that integrates the use of this technology.\n\n\nCONCLUSIONS\nVR offers numerous assets that may enhance current neuropsychological assessment protocols and address many of the limitations faced by our traditional methods.",
"title": ""
},
{
"docid": "cab0fd454701c0b302040a1875ab2865",
"text": "They are susceptible to a variety of attacks, including node capture, physical tampering, and denial of service, while prompting a range of fundamental research challenges.",
"title": ""
},
{
"docid": "78ae476295aa266a170a981a34767bdd",
"text": "Darwin did not focus on deception. Only a few sentences in his book mentioned the issue. One of them raised the very interesting question of whether it is difficult to voluntarily inhibit the emotional expressions that are most difficult to voluntarily fabricate. Another suggestion was that it would be possible to unmask a fabricated expression by the absence of the difficult-to-voluntarily-generate facial actions. Still another was that during emotion body movements could be more easily suppressed than facial expression. Research relevant to each of Darwin's suggestions is reviewed, as is other research on deception that Darwin did not foresee.",
"title": ""
},
{
"docid": "663925d096212c6ea6685db879581551",
"text": "Deep neural networks have shown promise in collaborative filtering (CF). However, existing neural approaches are either user-based or item-based, which cannot leverage all the underlying information explicitly. We propose CF-UIcA, a neural co-autoregressive model for CF tasks, which exploits the structural correlation in the domains of both users and items. The co-autoregression allows extra desired properties to be incorporated for different tasks. Furthermore, we develop an efficient stochastic learning algorithm to handle large scale datasets. We evaluate CF-UIcA on two popular benchmarks: MovieLens 1M and Netflix, and achieve state-of-the-art performance in both rating prediction and top-N recommendation tasks, which demonstrates the effectiveness of CF-UIcA.",
"title": ""
},
{
"docid": "4105ebe68ca25c863f77dde3ff94dcdc",
"text": "This paper deals with the increasingly important issue of proper handling of information security for electric power utilities. It is based on the efforts of CIGRE Joint Working Group (JWG) D2/B3/C2-01 on \"Security for Information Systems and Intranets in Electric Power System\" carried out between 2003 and 2006. The JWG has produced a technical brochure (TB), where the purpose to raise the awareness of information and cybersecurity in electric power systems, and gives some guidance on how to solve the security problem by focusing on security domain modeling, risk assessment methodology, and security framework building. Here in this paper, the focus is on the issue of awareness and to highlight some steps to achieve a framework for cybersecurity management. Also, technical considerations of some communication systems for substation automation are studied. Finally, some directions for further works in this vast area of information and cybersecurity are given.",
"title": ""
},
{
"docid": "aff44289b241cdeef627bba97b68a505",
"text": "Personalization is a ubiquitous phenomenon in our daily online experience. While such technology is critical for helping us combat the overload of information we face, in many cases, we may not even realize that our results are being tailored to our personal tastes and preferences. Worse yet, when such a system makes a mistake, we have little recourse to correct it.\n In this work, we propose a framework for addressing this problem by developing a new user-interpretable feature set upon which to base personalized recommendations. These features, which we call badges, represent fundamental traits of users (e.g., \"vegetarian\" or \"Apple fanboy\") inferred by modeling the interplay between a user's behavior and self-reported identity. Specifically, we consider the microblogging site Twitter, where users provide short descriptions of themselves in their profiles, as well as perform actions such as tweeting and retweeting. Our approach is based on the insight that we can define badges using high precision, low recall rules (e.g., \"Twitter profile contains the phrase 'Apple fanboy'\"), and with enough data, generalize to other users by observing shared behavior. We develop a fully Bayesian, generative model that describes this interaction, while allowing us to avoid the pitfalls associated with having positive-only data.\n Experiments on real Twitter data demonstrate the effectiveness of our model at capturing rich and interpretable user traits that can be used to provide transparency for personalization.",
"title": ""
},
{
"docid": "13177a7395eed80a77571bd02a962bc9",
"text": "Orexin-A and orexin-B are neuropeptides originally identified as endogenous ligands for two orphan G-protein-coupled receptors. Orexin neuropeptides (also known as hypocretins) are produced by a small group of neurons in the lateral hypothalamic and perifornical areas, a region classically implicated in the control of mammalian feeding behavior. Orexin neurons project throughout the central nervous system (CNS) to nuclei known to be important in the control of feeding, sleep-wakefulness, neuroendocrine homeostasis, and autonomic regulation. orexin mRNA expression is upregulated by fasting and insulin-induced hypoglycemia. C-fos expression in orexin neurons, an indicator of neuronal activation, is positively correlated with wakefulness and negatively correlated with rapid eye movement (REM) and non-REM sleep states. Intracerebroventricular administration of orexins has been shown to significantly increase food consumption, wakefulness, and locomotor activity in rodent models. Conversely, an orexin receptor antagonist inhibits food consumption. Targeted disruption of the orexin gene in mice produces a syndrome remarkably similar to human and canine narcolepsy, a sleep disorder characterized by excessive daytime sleepiness, cataplexy, and other pathological manifestations of the intrusion of REM sleep-related features into wakefulness. Furthermore, orexin knockout mice are hypophagic compared with weight and age-matched littermates, suggesting a role in modulating energy metabolism. These findings suggest that the orexin neuropeptide system plays a significant role in feeding and sleep-wakefulness regulation, possibly by coordinating the complex behavioral and physiologic responses of these complementary homeostatic functions.",
"title": ""
},
{
"docid": "05eb344fb8b671542f6f0228774a5524",
"text": "This paper presents an improved hardware structure for the computation of the Whirlpool hash function. By merging the round key computation with the data compression and by using embedded memories to perform part of the Galois Field (28) multiplication, a core can be implemented in just 43% of the area of the best current related art while achieving a 12% higher throughput. The proposed core improves the Throughput per Slice compared to the state of the art by 160%, achieving a throughput of 5.47 Gbit/s with 2110 slices and 32 BRAMs on a VIRTEX II Pro FPGA. Results for a real application are also presented by considering a polymorphic computational approach.",
"title": ""
},
{
"docid": "71d065cd109392ae41bc96fe0cd2e0f4",
"text": "Absence of an upper limb leads to severe impairments in everyday life, which can further influence the social and mental state. For these reasons, early developments in cosmetic and body-driven prostheses date some centuries ago, and they have been evolving ever since. Following the end of the Second World War, rapid developments in technology resulted in powered myoelectric hand prosthetics. In the years to come, these devices were common on the market, though they still suffered high user abandonment rates. The reasons for rejection were trifold - insufficient functionality of the hardware, fragile design, and cumbersome control. In the last decade, both academia and industry have reached major improvements concerning technical features of upper limb prosthetics and methods for their interfacing and control. Advanced robotic hands are offered by several vendors and research groups, with a variety of active and passive wrist options that can be articulated across several degrees of freedom. Nowadays, elbow joint designs include active solutions with different weight and power options. Control features are getting progressively more sophisticated, offering options for multiple sensor integration and multi-joint articulation. Latest developments in socket designs are capable of facilitating implantable and multiple surface electromyography sensors in both traditional and osseointegration-based systems. Novel surgical techniques in combination with modern, sophisticated hardware are enabling restoration of dexterous upper limb functionality. This article is aimed at reviewing the latest state of the upper limb prosthetic market, offering insights on the accompanying technologies and techniques. We also examine the capabilities and features of some of academia's flagship solutions and methods.",
"title": ""
},
{
"docid": "359d3e06c221e262be268a7f5b326627",
"text": "A method for the synthesis of multicoupled resonators filters with frequency-dependent couplings is presented. A circuit model of the filter that accurately represents the frequency responses over a very wide frequency band is postulated. The two-port parameters of the filter based on the circuit model are obtained by circuit analysis. The values of the circuit elements are synthesized by equating the two-port parameters obtained from the circuit analysis and the filtering characteristic function. Solutions similar to the narrowband case (where all the couplings are assumed frequency independent) are obtained analytically when all coupling elements are either inductive or capacitive. The synthesis technique is generalized to include all types of coupling elements. Several examples of wideband filters are given to demonstrate the synthesis techniques.",
"title": ""
},
{
"docid": "c6f17a0d5f91c3cab9183bbc5fa2dfc3",
"text": "In human beings, head is one of the most important parts. Injuries in this part can cause serious damages to overall health. In some cases, they can be fatal. The present paper analyses the deformations of a helmet mounted on a human head, using finite element method. It studies the amount of von Mises pressure and stress caused by a vertical blow from above on the skull. The extant paper aims at developing new methods for improving the design and achieving more energy absorption by applying more appropriate models. In this study, a thermoplastic damper is applied and modelled in order to reduce the amount of energy transferred to the skull and to minimize the damages inflicted on human head.",
"title": ""
},
{
"docid": "a87b48ee446cbda34e8d878cffbd19bb",
"text": "Introduction. In spite of significant changes in the management policies of intersexuality, clinical evidence show that not all pubertal or adult individuals live according to the assigned sex during infancy. Aim. The purpose of this study was to analyze the clinical management of an individual diagnosed as a female pseudohermaphrodite with congenital adrenal hyperplasia (CAH) simple virilizing form four decades ago but who currently lives as a monogamous heterosexual male. Methods. We studied the clinical files spanning from 1965 to 1991 of an intersex individual. In addition, we conducted a magnetic resonance imaging (MRI) study of the abdominoplevic cavity and a series of interviews using the oral history method. Main Outcome Measures. Our analysis is based on the clinical evidence that led to the CAH diagnosis in the 1960s in light of recent clinical testing to confirm such diagnosis. Results. Analysis of reported values for 17-ketosteroids, 17-hydroxycorticosteroids, from 24-hour urine samples during an 8-year period showed poor adrenal suppression in spite of adherence to treatment. A recent MRI study confirmed the presence of hyperplastic adrenal glands as well as the presence of a prepubertal uterus. Semistructured interviews with the individual confirmed a life history consistent with a male gender identity. Conclusions. Although the American Academy of Pediatrics recommends that XX intersex individuals with CAH should be assigned to the female sex, this practice harms some individuals as they may self-identify as males. In the absence of comorbid psychiatric factors, the discrepancy between infant sex assignment and gender identity later in life underlines the need for a reexamination of current standards of care for individuals diagnosed with CAH. Jorge JC, Echeverri C, Medina Y, and Acevedo P. Male gender identity in an xx individual with congenital adrenal hyperplasia. J Sex Med 2008;5:122–131.",
"title": ""
},
{
"docid": "8f5a38fe598abc5f3bdc3fd01fb506b3",
"text": "Existing region-based object detectors are limited to regions with fixed box geometry to represent objects, even if those are highly non-rectangular. In this paper we introduce DP-FCN, a deep model for object detection which explicitly adapts to shapes of objects with deformable parts. Without additional annotations, it learns to focus on discriminative elements and to align them, and simultaneously brings more invariance for classification and geometric information to refine localization. DP-FCN is composed of three main modules: a Fully Convolutional Network to efficiently maintain spatial resolution, a deformable part-based RoI pooling layer to optimize positions of parts and build invariance, and a deformation-aware localization module explicitly exploiting displacements of parts to improve accuracy of bounding box regression. We experimentally validate our model and show significant gains. DP-FCN achieves state-of-the-art performances of 83.1% and 80.9% on PASCAL VOC 2007 and 2012 with VOC data only.",
"title": ""
},
{
"docid": "eb271acef996a9ba0f84a50b5055953b",
"text": "Makeup is widely used to improve facial attractiveness and is well accepted by the public. However, different makeup styles will result in significant facial appearance changes. It remains a challenging problem to match makeup and non-makeup face images. This paper proposes a learning from generation approach for makeup-invariant face verification by introducing a bi-level adversarial network (BLAN). To alleviate the negative effects from makeup, we first generate non-makeup images from makeup ones, and then use the synthesized nonmakeup images for further verification. Two adversarial networks in BLAN are integrated in an end-to-end deep network, with the one on pixel level for reconstructing appealing facial images and the other on feature level for preserving identity information. These two networks jointly reduce the sensing gap between makeup and non-makeup images. Moreover, we make the generator well constrained by incorporating multiple perceptual losses. Experimental results on three benchmark makeup face datasets demonstrate that our method achieves state-of-the-art verification accuracy across makeup status and can produce photo-realistic non-makeup",
"title": ""
},
{
"docid": "dc1053623155e38f00bf70d7da145d5b",
"text": "Genetic programming is combined with program analysis methods to repair bugs in off-the-shelf legacy C programs. Fitness is defined using negative test cases that exercise the bug to be repaired and positive test cases that encode program requirements. Once a successful repair is discovered, structural differencing algorithms and delta debugging methods are used to minimize its size. Several modifications to the GP technique contribute to its success: (1) genetic operations are localized to the nodes along the execution path of the negative test case; (2) high-level statements are represented as single nodes in the program tree; (3) genetic operators use existing code in other parts of the program, so new code does not need to be invented. The paper describes the method, reviews earlier experiments that repaired 11 bugs in over 60,000 lines of code, reports results on new bug repairs, and describes experiments that analyze the performance and efficacy of the evolutionary components of the algorithm.",
"title": ""
},
{
"docid": "b43b1265aa990052a238f63991730cc7",
"text": "This paper focuses on placement and chaining of virtualized network functions (VNFs) in Network Function Virtualization Infrastructures (NFVI) for emerging software networks serving multiple tenants. Tenants can request network services to the NFVI in the form of service function chains (in the IETF SFC sense) or VNF Forwarding Graphs (VNF-FG in the case of ETSI) in support of their applications and business. This paper presents efficient algorithms to provide solutions to this NP-Hard chain placement problem to support NFVI providers. Cost-efficient and improved scalability multi-stage graph and 2-Factor algorithms are presented and shown to find near-optimal solutions in few seconds for large instances.",
"title": ""
},
{
"docid": "d99fdf7b559d5609bec3c179dee3cd58",
"text": "This study aimed to describe dietary habits of Syrian adolescents attending secondary schools in Damascus and the surrounding areas. A descriptive, cross-sectional study was carried out on 3507 students in 2001. A stratified, 2-stage random cluster sample was used to sample the students. The consumption pattern of food items during the previous week was described. More than 50% of the students said that they had not consumed green vegetables and more than 35% had not consumed meat. More than 35% said that they consumed cheese and milk at least once a day. Only 11.8% consumed fruit 3 times or more daily. Potential determinants of the pattern of food consumption were arialysed. Weight control practices and other eating habits were also described.",
"title": ""
}
] | scidocsrr |
9ed69e982cc40429518a3be5270ec540 | Population validity for educational data mining models: A case study in affect detection | [
{
"docid": "892c75c6b719deb961acfe8b67b982bb",
"text": "Growing interest in data and analytics in education, teaching, and learning raises the priority for increased, high-quality research into the models, methods, technologies, and impact of analytics. Two research communities -- Educational Data Mining (EDM) and Learning Analytics and Knowledge (LAK) have developed separately to address this need. This paper argues for increased and formal communication and collaboration between these communities in order to share research, methods, and tools for data mining and analysis in the service of developing both LAK and EDM fields.",
"title": ""
}
] | [
{
"docid": "ffd45fa5cd9c2ce6b4dc7c5433864fd4",
"text": "AIM\nTo evaluate validity of the Greek version of a global measure of perceived stress PSS-14 (Perceived Stress Scale - 14 item).\n\n\nMATERIALS AND METHODS\nThe original PSS-14 (theoretical range 0-56) was translated into Greek and then back-translated. One hundred men and women (39 +/- 10 years old, 40 men) participated in the validation process. Firstly, participants completed the Greek PSS-14 and, then they were interviewed by a psychologist specializing in stress management. Cronbach's alpha (a) evaluated internal consistency of the measurement, whereas Kendall's tau-b and Bland & Altman methods assessed consistency with the clinical evaluation. Exploratory and Confirmatory Factor analyses were conducted to reveal hidden factors within the data and to confirm the two-dimensional character of the scale.\n\n\nRESULTS\nMean (SD) PSS-14 score was 25(7.9). Strong internal consistency (Cronbach's alpha = 0.847) as well as moderate-to-good concordance between clinical assessment and PSS-14 (Kendall's tau-b = 0.43, p < 0.01) were observed. Two factors were extracted. Factor one explained 34.7% of variability and was heavily laden by positive items, and factor two that explained 10.6% of the variability by negative items. Confirmatory factor analysis revealed that the model with 2 factors had chi-square equal to 241.23 (p < 0.001), absolute fix indexes were good (i.e. GFI = 0.733, AGFI = 0.529), and incremental fix indexes were also adequate (i.e. NFI = 0.89 and CFI = 0.92).\n\n\nCONCLUSION\nThe developed Greek version of PSS-14 seems to be a valid instrument for the assessment of perceived stress in the Greek adult population living in urban areas; a finding that supports its local use in research settings as an evaluation tool measuring perceived stress, mainly as a risk factor but without diagnostic properties.",
"title": ""
},
{
"docid": "340a2fd43f494bb1eba58629802a738c",
"text": "A new image decomposition scheme, called the adaptive directional total variation (ADTV) model, is proposed to achieve effective segmentation and enhancement for latent fingerprint images in this work. The proposed model is inspired by the classical total variation models, but it differentiates itself by integrating two unique features of fingerprints; namely, scale and orientation. The proposed ADTV model decomposes a latent fingerprint image into two layers: cartoon and texture. The cartoon layer contains unwanted components (e.g., structured noise) while the texture layer mainly consists of the latent fingerprint. This cartoon-texture decomposition facilitates the process of segmentation, as the region of interest can be easily detected from the texture layer using traditional segmentation methods. The effectiveness of the proposed scheme is validated through experimental results on the entire NIST SD27 latent fingerprint database. The proposed scheme achieves accurate segmentation and enhancement results, leading to improved feature detection and latent matching performance.",
"title": ""
},
{
"docid": "f70bd0a47eac274a1bb3b964f34e0a63",
"text": "Although deep neural network (DNN) has achieved many state-of-the-art results, estimating the uncertainty presented in the DNN model and the data is a challenging task. Problems related to uncertainty such as classifying unknown classes (class which does not appear in the training data) data as known class with high confidence, is critically concerned in the safety domain area (e.g, autonomous driving, medical diagnosis). In this paper, we show that applying current Bayesian Neural Network (BNN) techniques alone does not effectively capture the uncertainty. To tackle this problem, we introduce a simple way to improve the BNN by using one class classification (in this paper, we use the term ”set classification” instead). We empirically show the result of our method on an experiment which involves three datasets: MNIST, notMNIST and FMNIST.",
"title": ""
},
{
"docid": "2e93d2ba94e0c468634bf99be76706bb",
"text": "Entheses are sites where tendons, ligaments, joint capsules or fascia attach to bone. Inflammation of the entheses (enthesitis) is a well-known hallmark of spondyloarthritis (SpA). As entheses are associated with adjacent, functionally related structures, the concepts of an enthesis organ and functional entheses have been proposed. This is important in interpreting imaging findings in entheseal-related diseases. Conventional radiographs and CT are able to depict the chronic changes associated with enthesitis but are of very limited use in early disease. In contrast, MRI is sensitive for detecting early signs of enthesitis and can evaluate both soft-tissue changes and intraosseous abnormalities of active enthesitis. It is therefore useful for the early diagnosis of enthesitis-related arthropathies and monitoring therapy. Current knowledge and typical MRI features of the most commonly involved entheses of the appendicular skeleton in patients with SpA are reviewed. The MRI appearances of inflammatory and degenerative enthesopathy are described. New options for imaging enthesitis, including whole-body MRI and high-resolution microscopy MRI, are briefly discussed.",
"title": ""
},
{
"docid": "6f13d2d8e511f13f6979859a32e68fdd",
"text": "As an innovative measurement technique, the so-called Fiber Bragg Grating (FBG) sensors are used to measure local and global strains in a growing number of application scenarios. FBGs facilitate a reliable method to sense strain over large distances and in explosive atmospheres. Currently, there is only little knowledge available concerning mechanical properties of FGBs, e.g. under quasi-static, cyclic and thermal loads. To address this issue, this work quantifies typical loads on FGB sensors in operating state and moreover aims to determine their mechanical response resulting from certain load cases. Copyright © 2013 IFSA.",
"title": ""
},
{
"docid": "2dde173faac8d5cbb63aed8d379308fa",
"text": "Delineating infarcted tissue in ischemic stroke lesions is crucial to determine the extend of damage and optimal treatment for this life-threatening condition. However, this problem remains challenging due to high variability of ischemic strokes’ location and shape. Recently, fully-convolutional neural networks (CNN), in particular those based on U-Net [27], have led to improved performances for this task [7]. In this work, we propose a novel architecture that improves standard U-Net based methods in three important ways. First, instead of combining the available image modalities at the input, each of them is processed in a different path to better exploit their unique information. Moreover, the network is densely-connected (i.e., each layer is connected to all following layers), both within each path and across different paths, similar to HyperDenseNet [11]. This gives our model the freedom to learn the scale at which modalities should be processed and combined. Finally, inspired by the Inception architecture [32], we improve standard U-Net modules by extending inception modules with two convolutional blocks with dilated convolutions of different scale. This helps handling the variability in lesion sizes. We split the 93 stroke datasets into training and validation sets containing 83 and 9 examples respectively. Our network was trained on a NVidia TITAN XP GPU with 16 GBs RAM, using ADAM as optimizer and a learning rate of 1×10−5 during 200 epochs. Training took around 5 hours and segmentation of a whole volume took between 0.2 and 2 seconds, as average. The performance on the test set obtained by our method is compared to several baselines, to demonstrate the effectiveness of our architecture, and to a state-of-art architecture that employs factorized dilated convolutions, i.e., ERFNet [26].",
"title": ""
},
{
"docid": "ed0f70e6e53666a6f5562cfb082a9a9a",
"text": "Biometrics aims at reliable and robust identification of humans from their personal traits, mainly for security and authentication purposes, but also for identifying and tracking the users of smarter applications. Frequently considered modalities are fingerprint, face, iris, palmprint and voice, but there are many other possible biometrics, including gait, ear image, retina, DNA, and even behaviours. This chapter presents a survey of machine learning methods used for biometrics applications, and identifies relevant research issues. We focus on three areas of interest: offline methods for biometric template construction and recognition, information fusion methods for integrating multiple biometrics to obtain robust results, and methods for dealing with temporal information. By introducing exemplary and influential machine learning approaches in the context of specific biometrics applications, we hope to provide the reader with the means to create novel machine learning solutions to challenging biometrics problems.",
"title": ""
},
{
"docid": "4b051e3908eabb5f550094ebabf6583d",
"text": "This paper presents a review of modern cooling system employed for the thermal management of power traction machines. Various solutions for heat extractions are described: high thermal conductivity insulation materials, spray cooling, high thermal conductivity fluids, combined liquid and air forced convection, and loss mitigation techniques.",
"title": ""
},
{
"docid": "9cad66a6f3cfb1112a4072de71c6de3e",
"text": "This paper presents a novel method for position sensorless control of high-speed brushless DC motors with low inductance and nonideal back electromotive force (EMF) in order to improve the reliability of the motor system of a magnetically suspended control moment gyro for space application. The commutation angle error of the traditional line-to-line voltage zero-crossing points detection method is analyzed. Based on the characteristics measurement of the nonideal back EMF, a two-stage commutation error compensation method is proposed to achieve the high-reliable and high-accurate commutation in the operating speed region of the proposed sensorless control process. The commutation angle error is compensated by the transformative line voltages, the hysteresis comparators, and the appropriate design of the low-pass filters in the low-speed and high-speed region, respectively. High-precision commutations are achieved especially in the high-speed region to decrease the motor loss in steady state. The simulated and experimental results show that the proposed method can achieve an effective compensation effect in the whole operating speed region.",
"title": ""
},
{
"docid": "beba751220fc4f8df7be8d8e546150d0",
"text": "Theoretical analysis and implementation of autonomous staircase detection and stair climbing algorithms on a novel rescue mobile robot are presented in this paper. The main goals are to find the staircase during navigation and to implement a fast, safe and smooth autonomous stair climbing algorithm. Silver is used here as the experimental platform. This tracked mobile robot is a tele-operative rescue mobile robot with great capabilities in climbing obstacles in destructed areas. Its performance has been demonstrated in rescue robot league of international RoboCup competitions. A fuzzy controller is applied to direct the robot during stair climbing. Controller inputs are generated by processing the range data from two LASER range finders which scan the environment one horizontally and the other vertically. The experimental results of stair detection algorithm and stair climbing controller are demonstrated at the end.",
"title": ""
},
{
"docid": "817f9509afcdbafc60ecac2d0b8ef02d",
"text": "Abstract—In most regards, the twenty-first century may not bring revolutionary changes in electronic messaging technology in terms of applications or protocols. Security issues that have long been a concern in messaging application are finally being solved using a variety of products. Web-based messaging systems are rapidly evolving the text-based conversation. The users have the right to protect their privacy from the eavesdropper, or other parties which interferes the privacy of the users for such purpose. The chatters most probably use the instant messages to chat with others for personal issue; in which no one has the right eavesdrop the conversation channel and interfere this privacy. This is considered as a non-ethical manner and the privacy of the users should be protected. The author seeks to identify the security features for most public instant messaging services used over the internet and suggest some solutions in order to encrypt the instant messaging over the conversation channel. The aim of this research is to investigate through forensics and sniffing techniques, the possibilities of hiding communication using encryption to protect the integrity of messages exchanged. Authors used different tools and methods to run the investigations. Such tools include Wireshark packet sniffer, Forensics Tool Kit (FTK) and viaForensic mobile forensic toolkit. Finally, authors will report their findings on the level of security that encryption could provide to instant messaging services.",
"title": ""
},
{
"docid": "90dd589be3f8f78877367486e0f66e11",
"text": "Patch-level descriptors underlie several important computer vision tasks, such as stereo-matching or content-based image retrieval. We introduce a deep convolutional architecture that yields patch-level descriptors, as an alternative to the popular SIFT descriptor for image retrieval. The proposed family of descriptors, called Patch-CKN, adapt the recently introduced Convolutional Kernel Network (CKN), an unsupervised framework to learn convolutional architectures. We present a comparison framework to benchmark current deep convolutional approaches along with Patch-CKN for both patch and image retrieval, including our novel \"RomePatches\" dataset. Patch-CKN descriptors yield competitive results compared to supervised CNN alternatives on patch and image retrieval.",
"title": ""
},
{
"docid": "29a2c5082cf4db4f4dde40f18c88ca85",
"text": "Human astrocytes are larger and more complex than those of infraprimate mammals, suggesting that their role in neural processing has expanded with evolution. To assess the cell-autonomous and species-selective properties of human glia, we engrafted human glial progenitor cells (GPCs) into neonatal immunodeficient mice. Upon maturation, the recipient brains exhibited large numbers and high proportions of both human glial progenitors and astrocytes. The engrafted human glia were gap-junction-coupled to host astroglia, yet retained the size and pleomorphism of hominid astroglia, and propagated Ca2+ signals 3-fold faster than their hosts. Long-term potentiation (LTP) was sharply enhanced in the human glial chimeric mice, as was their learning, as assessed by Barnes maze navigation, object-location memory, and both contextual and tone fear conditioning. Mice allografted with murine GPCs showed no enhancement of either LTP or learning. These findings indicate that human glia differentially enhance both activity-dependent plasticity and learning in mice.",
"title": ""
},
{
"docid": "f4cbdcdb55e2bf49bcc62a79293f19b7",
"text": "Network slicing for 5G provides Network-as-a-Service (NaaS) for different use cases, allowing network operators to build multiple virtual networks on a shared infrastructure. With network slicing, service providers can deploy their applications and services flexibly and quickly to accommodate diverse services’ specific requirements. As an emerging technology with a number of advantages, network slicing has raised many issues for the industry and academia alike. Here, the authors discuss this technology’s background and propose a framework. They also discuss remaining challenges and future research directions.",
"title": ""
},
{
"docid": "029c5753adfbdcbfc38b92fbcc7f7e5c",
"text": "The Internet of Things (IoT) is the latest evolution of the Internet, encompassing an enormous number of connected physical \"things.\" The access-control oriented (ACO) architecture was recently proposed for cloud-enabled IoT, with virtual objects (VOs) and cloud services in the middle layers. A central aspect of ACO is to control communication among VOs. This paper develops operational and administrative access control models for this purpose, assuming topic-based publishsubscribe interaction among VOs. Operational models are developed using (i) access control lists for topics and capabilities for virtual objects and (ii) attribute-based access control, and it is argued that role-based access control is not suitable for this purpose. Administrative models for these two operational models are developed using (i) access control lists, (ii) role-based access control, and (iii) attribute-based access control. A use case illustrates the details of these access control models for VO communication, and their differences. An assessment of these models with respect to security and privacy preserving objectives of IoT is also provided.",
"title": ""
},
{
"docid": "9fd56a2261ade748404fcd0c6302771a",
"text": "Despite limited scientific knowledge, stretching of human skeletal muscle to improve flexibility is a widespread practice among athletes. This article reviews recent findings regarding passive properties of the hamstring muscle group during stretch based on a model that was developed which could synchronously and continuously measure passive hamstring resistance and electromyographic activity, while the velocity and angle of stretch was controlled. Resistance to stretch was defined as passive torque (Nm) offered by the hamstring muscle group during passive knee extension using an isokinetic dynamometer with a modified thigh pad. To simulate a clinical static stretch, the knee was passively extended to a pre-determined final position (0.0875 rad/s, dynamic phase) where it remained stationary for 90 s (static phase). Alternatively, the knee was extended to the point of discomfort (stretch tolerance). From the torque-angle curve of the dynamic phase of the static stretch, and in the stretch tolerance protocol, passive energy and stiffness were calculated. Torque decline in the static phase was considered to represent viscoelastic stress relaxation. Using the model, studies were conducted which demonstrated that a single static stretch resulted in a 30% viscoelastic stress relaxation. With repeated stretches muscle stiffness declined, but returned to baseline values within 1 h. Long-term stretching (3 weeks) increased joint range of motion as a result of a change in stretch tolerance rather than in the passive properties. Strength training resulted in increased muscle stiffness, which was unaffected by daily stretching. The effectiveness of different stretching techniques was attributed to a change in stretch tolerance rather than passive properties. Inflexible and older subjects have increased muscle stiffness, but a lower stretch tolerance compared to subjects with normal flexibility and younger subjects, respectively. Although far from all questions regarding the passive properties of humans skeletal muscle have been answered in these studies, the measurement technique permitted some initial important examinations of vicoelastic behavior of human skeletal muscle.",
"title": ""
},
{
"docid": "2d94f76a2c79b36c3fa8aeaf3f574bbd",
"text": "In this paper I discuss the role of Machine Learning (ML) in sound design. I focus on the modelling of a particular aspect of human intelligence which is believed to play an important role in musical creativity: the Generalisation of Perceptual Attributes (GPA). By GPA I mean the process by which a listener tries to find common sound attributes when confronted with a series of sounds. The paper introduces the basics of GPA and ML in the context of ARTIST, a prototype case study system. ARTIST (Artificial Intelligence Sound Tools) is a sound design system that works in co-operation with the user, providing useful levels of automated reasoning to render the synthesis tasks less laborious (tasks such as calculating an appropriate stream of synthesis parameters for each single sound) and to enable the user to explore alternatives when designing a certain sound. The system synthesises sounds from input requests in a relatively high-level language; for instance, using attribute-value expressions such as \"normal vibrato\", \"high openness\" and \"sharp attack\". ARTIST stores information about sounds as clusters of attribute-value expressions and has the ability to interpret these expressions in the lower-level terms of sound synthesis algorithms. The user may, however, be interested in producing a sound which is \"unknown\" to the system. In this case, the system will attempt to compute the attribute values for this yet unknown sound by making analogies with other known sounds which have similar constituents. ARTIST uses ML to infer which sound attributes should be considered to make the analogies.",
"title": ""
},
{
"docid": "20f6a794edae8857a04036afc84f532e",
"text": "Genetic algorithms play a significant role, as search techniques forhandling complex spaces, in many fields such as artificial intelligence, engineering, robotic, etc. Genetic algorithms are based on the underlying genetic process in biological organisms and on the naturalevolution principles of populations. These algorithms process apopulation of chromosomes, which represent search space solutions,with three operations: selection, crossover and mutation. Under its initial formulation, the search space solutions are coded using the binary alphabet. However, the good properties related with these algorithms do not stem from the use of this alphabet; other coding types have been considered for the representation issue, such as real coding, which would seem particularly natural when tackling optimization problems of parameters with variables in continuous domains. In this paper we review the features of real-coded genetic algorithms. Different models of genetic operators and some mechanisms available for studying the behaviour of this type of genetic algorithms are revised and compared.",
"title": ""
},
{
"docid": "91713d85bdccb2c06d7c50365bd7022c",
"text": "A 1 Mbit MRAM, a nonvolatile memory that uses magnetic tunnel junction (MJT) storage elements, has been characterized for total ionizing dose (TID) and single event latchup (SEL). Our results indicate that these devices show no single event latchup up to an effective LET of 84 MeV-cm2/mg (where our testing ended) and no bit failures to a TID of 75 krad (Si).",
"title": ""
},
{
"docid": "503756888df43d745e4fb5051f8855fb",
"text": "The widespread use of email has raised serious privacy concerns. A critical issue is how to prevent email information leaks, i.e., when a message is accidentally addressed to non-desired recipients. This is an increasingly common problem that can severely harm individuals and corporations — for instance, a single email leak can potentially cause expensive law suits, brand reputation damage, negotiation setbacks and severe financial losses. In this paper we present the first attempt to solve this problem. We begin by redefining it as an outlier detection task, where the unintended recipients are the outliers. Then we combine real email examples (from the Enron Corpus) with carefully simulated leak-recipients to learn textual and network patterns associated with email leaks. This method was able to detect email leaks in almost 82% of the test cases, significantly outperforming all other baselines. More importantly, in a separate set of experiments we applied the proposed method to the task of finding real cases of email leaks. The result was encouraging: a variation of the proposed technique was consistently successful in finding two real cases of email leaks. Not only does this paper introduce the important problem of email leak detection, but also presents an effective solution that can be easily implemented in any email client — with no changes in the email server side.",
"title": ""
}
] | scidocsrr |
32808d28ff8781af0fba70b60890a6f5 | Accurate Continuous Sweeping Framework in Indoor Spaces With Backpack Sensor System for Applications to 3-D Mapping | [
{
"docid": "c5cc4da2906670c30fc0bac3040217bd",
"text": "Many popular problems in robotics and computer vision including various types of simultaneous localization and mapping (SLAM) or bundle adjustment (BA) can be phrased as least squares optimization of an error function that can be represented by a graph. This paper describes the general structure of such problems and presents g2o, an open-source C++ framework for optimizing graph-based nonlinear error functions. Our system has been designed to be easily extensible to a wide range of problems and a new problem typically can be specified in a few lines of code. The current implementation provides solutions to several variants of SLAM and BA. We provide evaluations on a wide range of real-world and simulated datasets. The results demonstrate that while being general g2o offers a performance comparable to implementations of state-of-the-art approaches for the specific problems.",
"title": ""
},
{
"docid": "d7f50c4b31e14f80fd84b3488f318539",
"text": "We propose a novel 6-degree-of-freedom (DoF) visual simultaneous localization and mapping (SLAM) method based on the structural regularity of man-made building environments. The idea is that we use the building structure lines as features for localization and mapping. Unlike other line features, the building structure lines encode the global orientation information that constrains the heading of the camera over time, eliminating the accumulated orientation errors and reducing the position drift in consequence. We extend the standard extended Kalman filter visual SLAM method to adopt the building structure lines with a novel parameterization method that represents the structure lines in dominant directions. Experiments have been conducted in both synthetic and real-world scenes. The results show that our method performs remarkably better than the existing methods in terms of position error and orientation error. In the test of indoor scenes of the public RAWSEEDS data sets, with the aid of a wheel odometer, our method produces bounded position errors about 0.79 m along a 967-m path although no loop-closing algorithm is applied.",
"title": ""
},
{
"docid": "93fe562da15b8babc98fb2c10d0f1082",
"text": "In this paper we address the problem of estimating the intrinsic parameters of a 3D LIDAR while at the same time computing its extrinsic calibration with respect to a rigidly connected camera. Existing approaches to solve this nonlinear estimation problem are based on iterative minimization of nonlinear cost functions. In such cases, the accuracy of the resulting solution hinges on the availability of a precise initial estimate, which is often not available. In order to address this issue, we divide the problem into two least-squares sub-problems, and analytically solve each one to determine a precise initial estimate for the unknown parameters. We further increase the accuracy of these initial estimates by iteratively minimizing a batch nonlinear least-squares cost function. In addition, we provide the minimal identifiability conditions, under which it is possible to accurately estimate the unknown parameters. Experimental results consisting of photorealistic 3D reconstruction of indoor and outdoor scenes, as well as standard metrics of the calibration errors, are used to assess the validity of our approach.",
"title": ""
}
] | [
{
"docid": "62bf93deeb73fab74004cb3ced106bac",
"text": "Since the publication of the Design Patterns book, a large number of object-oriented design patterns have been identified and codified. As part of the pattern form, objectoriented design patterns must indicate their relationships with other patterns, but these relationships are typically described very briefly, and different collections of patterns describe different relationships in different ways. In this paper we describe and classify the common relationships between object oriented design patterns. Practitioners can use these relationships to help them identity those patterns which may be applicable to a particular problem, and pattern writers can use these relationships to help them integrate new patterns into the body of the patterns literature.",
"title": ""
},
{
"docid": "fdff78b32803eb13904c128d8e011ea8",
"text": "The task of identifying when to take a conversational turn is an important function of spoken dialogue systems. The turn-taking system should also ideally be able to handle many types of dialogue, from structured conversation to spontaneous and unstructured discourse. Our goal is to determine how much a generalized model trained on many types of dialogue scenarios would improve on a model trained only for a specific scenario. To achieve this goal we created a large corpus of Wizard-of-Oz conversation data which consisted of several different types of dialogue sessions, and then compared a generalized model with scenario-specific models. For our evaluation we go further than simply reporting conventional metrics, which we show are not informative enough to evaluate turn-taking in a real-time system. Instead, we process results using a performance curve of latency and false cut-in rate, and further improve our model's real-time performance using a finite-state turn-taking machine. Our results show that the generalized model greatly outperformed the individual model for attentive listening scenarios but was worse in job interview scenarios. This implies that a model based on a large corpus is better suited to conversation which is more user-initiated and unstructured. We also propose that our method of evaluation leads to more informative performance metrics in a real-time system.",
"title": ""
},
{
"docid": "f94764347d07af17cd034e40be54bc4a",
"text": "Device level Self-Heating (SH) is becoming a limiting factor during traditional DC Hot Carrier stresses in bulk and SOI technologies. Consideration is given to device layout and design for Self-Heating minimization during HCI stress in SOI technologies, the effect of SH on activation energy (Ea) and the SH induced enhancement to degradation. Applying a methodology for SH temperature correction of extracted device lifetime, correlation is established between DC device level stress and AC device stress using a specially designed ring oscillator.",
"title": ""
},
{
"docid": "11578b2cd8be05e0162528b403b7caf3",
"text": "The aims of this paper are threefold. First we highlight the usefulness of generalized linear mixed models (GLMMs) in the modelling of portfolio credit default risk. The GLMM-setting allows for a flexible specification of the systematic portfolio risk in terms of observed fixed effects and unobserved random effects, in order to explain the phenomena of default dependence and time-inhomogeneity in empirical default data. Second we show that computational Bayesian techniques such as the Gibbs sampler can be successfully applied to fit models with serially correlated random effects, which are special instances of state space models. Third we provide an empirical study using Standard & Poor’s data on US firms. A model incorporating rating category and sector effects and a macroeconomic proxy variable for state-ofthe-economy suggests the presence of a residual, cyclical, latent component in the systematic risk.",
"title": ""
},
{
"docid": "9c41df95c11ec4bed3e0b19b20f912bb",
"text": "Text mining has been defined as “the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources” [6]. Many other industries and areas can also benefit from the text mining tools that are being developed by a number of companies. This paper provides an overview of the text mining tools and technologies that are being developed and is intended to be a guide for organizations who are looking for the most appropriate text mining techniques for their situation. This paper also concentrates to design text and data mining tool to extract the valuable information from curriculum vitae according to concerned requirements. The tool clusters the curriculum vitae into several segments which will help the public and private concerns for their recruitment. Rule based approach is used to develop the algorithm for mining and also it is implemented to extract the valuable information from the curriculum vitae on the web. Analysis of Curriculum vitae is until now, a costly and manual activity. It is subject to all typical variations and limitations in its quality, depending of who is doing it. Automating this analysis using algorithms might deliver much more consistency and preciseness to support the human experts. The experiments involve cooperation with many people having their CV online, as well as several recruiters etc. The algorithms must be developed and improved for processing of existing sets of semi-structured documents information retrieval under uncertainity about quality of the sources.",
"title": ""
},
{
"docid": "873c2e7774791417d6cb4f5904cde74c",
"text": "This article discusses empirical findings and conceptual elaborations of the last 10 years in strategic niche management research (SNM). The SNM approach suggests that sustainable innovation journeys can be facilitated by creating technological niches, i.e. protected spaces that allow the experimentation with the co-evolution of technology, user practices, and regulatory structures. The assumption was that if such niches were constructed appropriately, they would act as building blocks for broader societal changes towards sustainable development. The article shows how concepts and ideas have evolved over time and new complexities were introduced. Research focused on the role of various niche-internal processes such as learning, networking, visioning and the relationship between local projects and global rule sets that guide actor behaviour. The empirical findings showed that the analysis of these niche-internal dimensions needed to be complemented with attention to niche external processes. In this respect, the multi-level perspective proved useful for contextualising SNM. This contextualisation led to modifications in claims about the dynamics of sustainable innovation journeys. Niches are to be perceived as crucial for bringing about regime shifts, but they cannot do this on their own. Linkages with ongoing external processes are also important. Although substantial insights have been gained, the SNM approach is still an unfinished research programme. We identify various promising research directions, as well as policy implications.",
"title": ""
},
{
"docid": "282424d3a055bcc2d0d5c99c6f8e58e9",
"text": "Over the last few years, neuroimaging techniques have contributed greatly to the identification of the structural and functional neuroanatomy of anxiety disorders. The amygdala seems to be a crucial structure for fear and anxiety, and has consistently been found to be activated in anxiety-provoking situations. Apart from the amygdala, the insula and anterior cinguiate cortex seem to be critical, and ail three have been referred to as the \"fear network.\" In the present article, we review the main findings from three major lines of research. First, we examine human models of anxiety disorders, including fear conditioning studies and investigations of experimentally induced panic attacks. Then we turn to research in patients with anxiety disorders and take a dose look at post-traumatic stress disorder and obsessive-compulsive disorder. Finally, we review neuroimaging studies investigating neural correlates of successful treatment of anxiety, focusing on exposure-based therapy and several pharmacological treatment options, as well as combinations of both.",
"title": ""
},
{
"docid": "17d06584c35a9879b0bd4b653ff64b40",
"text": "We present a solution to the rolling shutter (RS) absolute camera pose problem with known vertical direction. Our new solver, R5Pup, is an extension of the general minimal solution R6P, which uses a double linearized RS camera model initialized by the standard perspective P3P. Here, thanks to using known vertical directions, we avoid double linearization and can get the camera absolute pose directly from the RS model without the initialization by a standard P3P. Moreover, we need only five 2D-to-3D matches while R6P needed six such matches. We demonstrate in simulated and real experiments that our new R5Pup is robust, fast and a very practical method for absolute camera pose computation for modern cameras on mobile devices. We compare our R5Pup to the state of the art RS and perspective methods and demonstrate that it outperforms them when vertical direction is known in the range of accuracy available on modern mobile devices. We also demonstrate that when using R5Pup solver in structure from motion (SfM) pipelines, it is better to transform already reconstructed scenes into the standard position, rather than using hard constraints on the verticality of up vectors.",
"title": ""
},
{
"docid": "ffba4650ec3349c096c35779775d350d",
"text": "Massively parallel short-read sequencing technologies, coupled with powerful software platforms, are enabling investigators to analyse tens of thousands of genetic markers. This wealth of data is rapidly expanding and allowing biological questions to be addressed with unprecedented scope and precision. The sizes of the data sets are now posing significant data processing and analysis challenges. Here we describe an extension of the Stacks software package to efficiently use genotype-by-sequencing data for studies of populations of organisms. Stacks now produces core population genomic summary statistics and SNP-by-SNP statistical tests. These statistics can be analysed across a reference genome using a smoothed sliding window. Stacks also now provides several output formats for several commonly used downstream analysis packages. The expanded population genomics functions in Stacks will make it a useful tool to harness the newest generation of massively parallel genotyping data for ecological and evolutionary genetics.",
"title": ""
},
{
"docid": "119ea9c1d6b2cf2063efaf4d5ed7e756",
"text": "In this paper, we use shape grammars (SGs) for facade parsing, which amounts to segmenting 2D building facades into balconies, walls, windows, and doors in an architecturally meaningful manner. The main thrust of our work is the introduction of reinforcement learning (RL) techniques to deal with the computational complexity of the problem. RL provides us with techniques such as Q-learning and state aggregation which we exploit to efficiently solve facade parsing. We initially phrase the 1D parsing problem in terms of a Markov Decision Process, paving the way for the application of RL-based tools. We then develop novel techniques for the 2D shape parsing problem that take into account the specificities of the facade parsing problem. Specifically, we use state aggregation to enforce the symmetry of facade floors and demonstrate how to use RL to exploit bottom-up, image-based guidance during optimization. We provide systematic results on the Paris building dataset and obtain state-of-the-art results in a fraction of the time required by previous methods. We validate our method under diverse imaging conditions and make our software and results available online.",
"title": ""
},
{
"docid": "0b17e1cbfa3452ba2ff7c00f4e137aef",
"text": "Brain-computer interfaces (BCIs) promise to provide a novel access channel for assistive technologies, including augmentative and alternative communication (AAC) systems, to people with severe speech and physical impairments (SSPI). Research on the subject has been accelerating significantly in the last decade and the research community took great strides toward making BCI-AAC a practical reality to individuals with SSPI. Nevertheless, the end goal has still not been reached and there is much work to be done to produce real-world-worthy systems that can be comfortably, conveniently, and reliably used by individuals with SSPI with help from their families and care givers who will need to maintain, setup, and debug the systems at home. This paper reviews reports in the BCI field that aim at AAC as the application domain with a consideration on both technical and clinical aspects.",
"title": ""
},
{
"docid": "a645f2b68ced60099d8ae93f79e1714a",
"text": "The purpose of this study was to examine the extent to which fundamental movement skills and physical fitness scores assessed in early adolescence predict self-reported physical activity assessed 6 years later. The sample comprised 333 (200 girls, 133 boys; M age = 12.41) students. The effects of previous physical activity, sex, and body mass index (BMI) were controlled in the main analyses. Adolescents' fundamental movement skills, physical fitness, self-report physical activity, and BMI were collected at baseline, and their self-report energy expenditure (metabolic equivalents: METs) and intensity of physical activity were collected using the International Physical Activity Questionnaire 6 years later. Results showed that fundamental movement skills predicted METs, light, moderate, and vigorous intensity physical activity levels, whereas fitness predicted METs, moderate, and vigorous physical activity levels. Hierarchical regression analyses also showed that after controlling for previous levels of physical activity, sex, and BMI, the size of the effect of fundamental movement skills and physical fitness on energy expenditure and physical activity intensity was moderate (R(2) change between 0.06 and 0.15), with the effect being stronger for high intensity physical activity.",
"title": ""
},
{
"docid": "3ed5a33db314d464973577c9a4442d33",
"text": "Augmented Reality (AR) was first demonstrated in the 1960s, but only recently have technologies emerged that can be used to easily deploy AR applications to many users. Cameraequipped cell phones with significant processing power and graphics abilities provide an inexpensive and versatile platform for AR applications, while the social networking technology of Web 2.0 provides a large-scale infrastructure for collaboratively producing and distributing geo-referenced AR content. This combination of widely used mobile hardware and Web 2.0 software allows the development of a new type of AR platform that can be used on a global scale. In this paper we describe the Augmented Reality 2.0 concept and present existing work on mobile AR and web technologies that could be used to create AR 2.0 applications.",
"title": ""
},
{
"docid": "2c7bafac9d4c4fedc43982bd53c99228",
"text": "One of the uniqueness of business is for firm to be customer focus. Study have shown that this could be achieved through blockchain technology in enhancing customer loyalty programs (Michael J. Casey 2015; John Ream et al 2016; Sean Dennis 2016; James O'Brien and Dave Montali, 2016; Peiguss 2012; Singh, Khan, 2012; and among others). Recent advances in block chain technology have provided the tools for marketing managers to create a new generation of being able to assess the level of control companies want to have over customer data and activities as well as security/privacy issues that always arise with every additional participant of the network While block chain technology is still in the early stages of adoption, it could prove valuable for loyalty rewards program providers. Hundreds of blockchain initiatives are already underway in various industries, particularly airline services, even though standardization is far from a reality. One attractive feature of loyalty rewards is that they are not core to business revenue and operations and companies willing to implement blockchain for customer loyalty programs benefit lower administrative costs, improved customer experiences, and increased user engagement (Michael J. Casey, 2015; James O'Brien and Dave Montali 2016; Peiguss 2012; Singh, Abstract: In today business world, companies have accelerated the use of Blockchain technology to enhance the brand recognition of their products and services. Company believes that the integration of Blockchain into the current business marketing strategy will enhance the growth of their products, and thus acting as a customer loyalty solution. The goal of this study is to obtain a deep understanding of the impact of blockchain technology in enhancing customer loyalty programs of airline business. To achieve the goal of the study, a contextualized and literature based research instrument was used to measure the application of the investigated “constructs”, and a survey was conducted to collect data from the sample population. A convenience sample of total (450) Questionnaires were distributed to customers, and managers of the surveyed airlines who could be reached by the researcher. 274 to airline customers/passengers, and the remaining 176 to managers in the various airlines researched. Questionnaires with instructions were hand-delivered to respondents. Out of the 397 completed questionnaires returned, 359 copies were found usable for the present study, resulting in an effective response rate of 79.7%. The respondents had different social, educational, and occupational backgrounds. The research instrument showed encouraging evidence of reliability and validity. Data were analyzed using descriptive statistics, percentages and ttest analysis. The findings clearly show that there is significant evidence that blockchain technology enhance customer loyalty programs of airline business. It was discovered that Usage of blockchain technology is emphasized by the surveyed airlines operators in Nigeria., the extent of effective usage of customer loyalty programs is related to blockchain technology, and that he level or extent of effective usage of blockchain technology does affect the achievement of customer loyalty program goals and objectives. Feedback from the research will assist to expand knowledge as to the usefulness of blockchain technology being a customer loyalty solution.",
"title": ""
},
{
"docid": "4c82a4e51633b87f2f6b2619ca238686",
"text": "Allocentric space is mapped by a widespread brain circuit of functionally specialized cell types located in interconnected subregions of the hippocampal-parahippocampal cortices. Little is known about the neural architectures required to express this variety of firing patterns. In rats, we found that one of the cell types, the grid cell, was abundant not only in medial entorhinal cortex (MEC), where it was first reported, but also in pre- and parasubiculum. The proportion of grid cells in pre- and parasubiculum was comparable to deep layers of MEC. The symmetry of the grid pattern and its relationship to the theta rhythm were weaker, especially in presubiculum. Pre- and parasubicular grid cells intermingled with head-direction cells and border cells, as in deep MEC layers. The characterization of a common pool of space-responsive cells in architecturally diverse subdivisions of parahippocampal cortex constrains the range of mechanisms that might give rise to their unique functional discharge phenotypes.",
"title": ""
},
{
"docid": "5bd713c468f48313e42b399f441bb709",
"text": "Nowadays, malware is affecting not only PCs but also mobile devices, which became pervasive in everyday life. Mobile devices can access and store personal information (e.g., location, photos, and messages) and thus are appealing to malware authors. One of the most promising approach to analyze malware is by monitoring its execution in a sandbox (i.e., via dynamic analysis). In particular, most malware sandboxing solutions for Android rely on an emulator, rather than a real device. This motivates malware authors to include runtime checks in order to detect whether the malware is running in a virtualized environment. In that case, the malicious app does not trigger the malicious payload. The presence of differences between real devices and Android emulators started an arms race between security researchers and malware authors, where the former want to hide these differences and the latter try to seek them out. In this paper we present Mirage, a malware sandbox architecture for Android focused on dynamic analysis evasion attacks. We designed the components of Mirage to be extensible via software modules, in order to build specific countermeasures against such attacks. To the best of our knowledge, Mirage is the first modular sandbox architecture that is robust against sandbox detection techniques. As a representative case study, we present a proof of concept implementation of Mirage with a module that tackles evasion attacks based on sensors API return values.",
"title": ""
},
{
"docid": "c1492f5eb2fafc52da81902a9d19d480",
"text": "A compact dual-band multiple-input-multiple-output (MIMO)/diversity antenna is proposed. This antenna is designed for 2.4/5.2/5.8GHz WLAN and 2.5/3.5/5.5 GHz WiMAX applications in portable mobile devices. It consists of two back-to-back monopole antennas connected with a T-shaped stub, where two rectangular slots are cut from the ground, which significantly reduces the mutual coupling between the two ports at the lower frequency band. The volume of this antenna is 40mm ∗ 30mm ∗ 1mm including the ground plane. Measured results show the isolation is better than −20 dB at the lower frequency band from 2.39 to 3.75GHz and −25 dB at the higher frequency band from 5.03 to 7 GHz, respectively. Moreover, acceptable radiation patterns, antenna gain, and envelope correlation coefficient are obtained. These characteristics indicate that the proposed antenna is suitable for some portable MIMO/diversity equipments.",
"title": ""
},
{
"docid": "c4d5464727db6deafc2ce2307284dd0c",
"text": "— Recently, many researchers have focused on building dual handed static gesture recognition systems. Single handed static gestures, however, pose more recognition complexity due to the high degree of shape ambiguities. This paper presents a gesture recognition setup capable of recognizing and emphasizing the most ambiguous static single handed gestures. Performance of the proposed scheme is tested on the alphabets of American Sign Language (ASL). Segmentation of hand contours from image background is carried out using two different strategies; skin color as detection cue with RGB and YCbCr color spaces, and thresholding of gray level intensities. A novel, rotation and size invariant, contour tracing descriptor is used to describe gesture contours generated by each segmentation technique. Performances of k-Nearest Neighbor (k-NN) and multiclass Support Vector Machine (SVM) classification techniques are evaluated to classify a particular gesture. Gray level segmented contour traces classified by multiclass SVM achieve accuracy up to 80.8% on the most ambiguous gestures of ASL alphabets with overall accuracy of 90.1%.",
"title": ""
},
{
"docid": "ec2257854faa3076b5c25d2c947d1780",
"text": "This paper presents a novel approach for road marking detection and classification based on machine learning algorithms. Road marking recognition is an important feature of an intelligent transportation system (ITS). Previous works are mostly developed using image processing and decisions are often made using empirical functions, which makes it difficult to be generalized. Hereby, we propose a general framework for object detection and classification, aimed at video-based intelligent transportation applications. It is a two-step approach. The detection is carried out using binarized normed gradient (BING) method. PCA network (PCANet) is employed for object classification. Both BING and PCANet are among the latest algorithms in the field of machine learning. Practically the proposed method is applied to a road marking dataset with 1,443 road images. We randomly choose 60% images for training and use the remaining 40% images for testing. Upon training, the system can detect 9 classes of road markings with an accuracy better than 96.8%. The proposed approach is readily applicable to other ITS applications.",
"title": ""
},
{
"docid": "4d26d3823e3889c22fe517857a49d508",
"text": "As an object moves through the field of view of a camera, the images of the object may change dramatically. This is not simply due to the translation of the object across the image plane. Rather, complications arise due to the fact that the object undergoes changes in pose relative to the viewing camera, changes in illumination relative to light sources, and may even become partially or fully occluded. In this paper, we develop an efficient, general framework for object tracking—one which addresses each of these complications. We first develop a computationally efficient method for handling the geometric distortions produced by changes in pose. We then combine geometry and illumination into an algorithm that tracks large image regions using no more computation than would be required to track with no accommodation for illumination changes. Finally, we augment these methods with techniques from robust statistics and treat occluded regions on the object as statistical outliers. Throughout, we present experimental results performed on live video sequences demonstrating the effectiveness and efficiency of our methods.",
"title": ""
}
] | scidocsrr |
c6befff453b219541b9d377794eca89d | Intelligent Traffic Information System Based on Integration of Internet of Things and Agent Technology | [
{
"docid": "6c8983865bf3d6bdbf120e0480345aac",
"text": "In the future Internet of Things (IoT), smart objects will be the fundamental building blocks for the creation of cyber-physical smart pervasive systems in a great variety of application domains ranging from health-care to transportation, from logistics to smart grid and cities. The implementation of a smart objects-oriented IoT is a complex challenge as distributed, autonomous, and heterogeneous IoT components at different levels of abstractions and granularity need to cooperate among themselves, with conventional networked IT infrastructures, and also with human users. In this paper, we propose the integration of two complementary mainstream paradigms for large-scale distributed computing: Agents and Cloud. Agent-based computing can support the development of decentralized, dynamic, cooperating and open IoT systems in terms of multi-agent systems. Cloud computing can enhance the IoT objects with high performance computing capabilities and huge storage resources. In particular, we introduce a cloud-assisted and agent-oriented IoT architecture that will be realized through ACOSO, an agent-oriented middleware for cooperating smart objects, and BodyCloud, a sensor-cloud infrastructure for large-scale sensor-based systems.",
"title": ""
}
] | [
{
"docid": "2915d67d630b31bc23d44b9eea0d039e",
"text": "Life-size humanoids which have the same joint arrangement as humans are expected to help in the living environment. In this case, they require high load operations such as gripping and conveyance of heavy load, and holding people at the care spot. However, these operations are difficult for existing humanoids because of their low joint output. Therefore, the purpose of this study is to develop the highoutput life-size humanoid robot. We first designed a motor driver for humanoid with featuring small, water-cooled, and high output, and it performed higher joint output than existing humanoids utilizing. In this paper, we describe designed humanoid arm and leg with this motor driver. The arm is featuring the designed 2-axis unit and the leg is featuring the water-cooled double motor system. We demonstrated the arm's high torque and high velocity experiment and the leg's high performance experiment based on water-cooled double motor compared with air-cooled and single motor. Then we designed and developed a life-size humanoid with these arms and legs. We demonstrated some humanoid's experiment operating high load to find out the arm and leg's validity.",
"title": ""
},
{
"docid": "ef1758847263c0708ed653c74a3cff41",
"text": "The management of central diabetes insipidus has been greatly simplified by the introduction of desmopressin (DDAVP). Its ease of administration, safety and tolerability make DDAVP the first line agent for outpatient treatment of central diabetes insipidus. The major complication of DDAVP therapy is water intoxication and hyponatremia. The risk of hyponatremia can be reduced by careful dose titration when initiating therapy and by close monitoring of serum osmolality when DDAVP is used with other medications affecting water balance. Herein we review the adverse effects of DDAVP and its predecessor, vasopressin, as well as discuss important clinical considerations when using these agents to treat central diabetes insipidus.",
"title": ""
},
{
"docid": "e70c6ccc129f602bd18a49d816ee02a9",
"text": "This purpose of this paper is to show how prevalent features of successful human tutoring interactions can be integrated into a pedagogical agent, AutoTutor. AutoTutor is a fully automated computer tutor that responds to learner input by simulating the dialog moves of effective, normal human tutors. AutoTutor’s delivery of dialog moves is organized within a 5step framework that is unique to normal human tutoring interactions. We assessed AutoTutor’s performance as an effective tutor and conversational partner during tutoring sessions with virtual students of varying ability levels. Results from three evaluation cycles indicate the following: (1) AutoTutor is capable of delivering pedagogically effective dialog moves that mimic the dialog move choices of human tutors, and (2) AutoTutor is a reasonably effective conversational partner. INTRODUCTION AND BACKGROUND Over the last decade a number of researchers have attempted to uncover the mechanisms of human tutoring that are responsible for student learning gains. Many of the informative findings have been reported in studies that have systematically analyzed the collaborative discourse that occurs between tutors and students (Fox, 1993; Graesser & Person, 1994; Graesser, Person, & Magliano, 1995; Hume, Michael, Rovick, & Evens, 1996; McArthur, Stasz, & Zmuidzinas, 1990; Merrill, Reiser, Ranney, & Trafton, 1992; Moore, 1995; Person & Graesser, 1999; Person, Graesser, Magliano, & Kreuz, 1994; Person, Kreuz, Zwaan, & Graesser, 1995; Putnam, 1987). For example, we have learned that the tutorial session is predominately controlled by the tutor. That is, tutors, not students, typically determine when and what topics will be covered in the session. Further, we know that human tutors rarely employ sophisticated or “ideal” tutoring models that are often incorporated into intelligent tutoring systems. Instead, human tutors are more likely to rely on localized strategies that are embedded within conversational turns. Although many findings such as these have illuminated the tutoring process, they present formidable challenges for designers of intelligent tutoring systems. After all, building a knowledgeable conversational partner is no small feat. However, if designers of future tutoring systems wish to capitalize on the knowledge gained from human tutoring studies, the next generation of tutoring systems will incorporate pedagogical agents that engage in learning dialogs with students. The purpose of this paper is twofold. First, we will describe how prevalent features of successful human tutoring interactions can be incorporated into a pedagogical agent, AutoTutor. Second, we will provide data from several preliminary performance evaluations in which AutoTutor interacts with virtual students of varying ability levels. Person, Graesser, Kreuz, Pomeroy, and the Tutoring Research Group AutoTutor is a fully automated computer tutor that is currently being developed by the Tutoring Research Group (TRG). AutoTutor is a working system that attempts to comprehend students’ natural language contributions and then respond to the student input by simulating the dialogue moves of human tutors. AutoTutor differs from other natural language tutors in several ways. 
First, AutoTutor does not restrict the natural language input of the student like other systems (e.g., Adele (Shaw, Johnson, & Ganeshan, 1999); the Ymir agents (Cassell & Thórisson, 1999); Cirscim-Tutor (Hume, Michael, Rovick, & Evens, 1996; Zhou et al., 1999); Atlas (Freedman, 1999); and Basic Electricity and Electronics (Moore, 1995; Rose, Di Eugenio, & Moore, 1999)). These systems tend to limit student input to a small subset of judiciously worded speech acts. Second, AutoTutor does not allow the user to substitute natural language contributions with GUI menu options like those in the Atlas and Adele systems. The third difference involves the open-world nature of AutoTutor’s content domain (i.e., computer literacy). The previously mentioned tutoring systems are relatively more closed-world in nature, and therefore, constrain the scope of student contributions. The current version of AutoTutor simulates the tutorial dialog moves of normal, untrained tutors; however, plans for subsequent versions include the integration of more sophisticated ideal tutoring strategies. AutoTutor is currently designed to assist college students learn about topics covered in an introductory computer literacy course. In a typical tutoring session with AutoTutor, students will learn the fundamentals of computer hardware, the operating system, and the Internet. A Brief Sketch of AutoTutor AutoTutor is an animated pedagogical agent that serves as a conversational partner with the student. AutoTutor’s interface is comprised of four features: a two-dimensional, talking head, a text box for typed student input, a text box that displays the problem/question being discussed, and a graphics box that displays pictures and animations that are related to the topic at hand. AutoTutor begins the session by introducing himself and then presents the student with a question or problem that is selected from a curriculum script. The question/problem remains in a text box at the top of the screen until AutoTutor moves on to the next topic. For some questions and problems, there are graphical displays and animations that appear in a specially designated box on the screen. Once AutoTutor has presented the student with a problem or question, a multi-turn tutorial dialog occurs between AutoTutor and the learner. All student contributions are typed into the keyboard and appear in a text box at the bottom of the screen. AutoTutor responds to each student contribution with one or a combination of pedagogically appropriate dialog moves. These dialog moves are conveyed via synthesized speech, appropriate intonation, facial expressions, and gestures and do not appear in text form on the screen. In the future, we hope to have AutoTutor handle speech recognition, so students can speak their contributions. However, current speech recognition packages require time-consuming training that is not optimal for systems that interact with multiple users. The various modules that enable AutoTutor to interact with the learner will be described in subsequent sections of the paper. For now, however, it is important to note that our initial goals for building AutoTutor have been achieved. That is, we have designed a computer tutor that participates in a conversation with the learner while simulating the dialog moves of normal human tutors. WHY SIMULATE NORMAL HUMAN TUTORS? It has been well documented that normal, untrained human tutors are effective. 
Effect sizes ranging between .5 and 2.3 have been reported in studies where student learning gains were measured (Bloom, 1984; Cohen, Kulik, & Kulik, 1982). For quite a while, these rather large effect sizes were somewhat puzzling. That is, normal tutors typically do not have expert domain knowledge nor do they have knowledge about sophisticated tutoring strategies. In order to gain a better understanding of the primary mechanisms that are responsible for student learning Simulating Human Tutor Dialog Moves in AutoTutor gains, a handful of researchers have systematically analyzed the dialogue that occurs between normal, untrained tutors and students (Graesser & Person, 1994; Graesser et al., 1995; Person & Graesser, 1999; Person et al., 1994; Person et al., 1995). Graesser, Person, and colleagues analyzed over 100 hours of tutoring interactions and identified two prominent features of human tutoring dialogs: (1) a five-step dialog frame that is unique to tutoring interactions, and (2) a set of tutor-initiated dialog moves that serve specific pedagogical functions. We believe these two features are responsible for the positive learning outcomes that occur in typical tutoring settings, and further, these features can be implemented in a tutoring system more easily than the sophisticated methods and strategies that have been advocated by other educational researchers and ITS developers. Five-step Dialog Frame The structure of human tutorial dialogs differs from learning dialogs that often occur in classrooms. Mehan (1979) and others have reported a 3-step pattern that is prevalent in classroom interactions. This pattern is often referred to as IRE, which stands for Initiation (a question or claim articulated by the teacher), Response (an answer or comment provided by the student), and Evaluation (teacher evaluates the student contribution). In tutoring, however, the dialog is managed by a 5-step dialog frame (Graesser & Person, 1994; Graesser et al., 1995). The five steps in this frame are presented below. Step 1: Tutor asks question (or presents problem). Step 2: Learner answers question (or begins to solve problem). Step 3: Tutor gives short immediate feedback on the quality of the answer (or solution). Step 4: Tutor and learner collaboratively improve the quality of the answer. Step 5: Tutor assesses learner’s understanding of the answer. This 5-step dialog frame in tutoring is a significant augmentation over the 3-step dialog frame in classrooms. We believe that the advantage of tutoring over classroom settings lies primarily in Step 4. Typically, Step 4 is a lengthy multi-turn dialog in which the tutor and student collaboratively contribute to the explanation that answers the question or solves the problem. At a macro-level, the dialog that occurs between AutoTutor and the learner conforms to Steps 1 through 4 of the 5-step frame. For example, at the beginning of each new topic, AutoTutor presents the learner with a problem or asks the learner a question (Step 1). The learner then attempts to solve the problem or answer the question (Step 2). Next, AutoTutor provides some type of short, evaluative feedback (Step 3). During Step 4, AutoTutor employs a variety of dialog moves (see next section) that encourage learner participation. Thus, ins",
"title": ""
},
{
"docid": "d428715497a2de16437a0b8f11fb69a0",
"text": "Fog or Edge computing has recently attracted broad attention from both industry and academia. It is deemed as a paradigm shift from the current centralized cloud computing model and could potentially bring a “Fog-IoT” architecture that would significantly benefit the future ubiquitous Internet of Things (IoT) systems and applications. However, it takes a series of key enabling technologies including emerging technologies to realize such a vision. In this article, we will survey these key enabling technologies with specific focuses on security and scalability, which are two very important and much-needed characteristics for future large-scale deployment. We aim to draw an overall big picture of the future for the research and development in these areas.",
"title": ""
},
{
"docid": "472ff656dc35c5ed37aae6e3a82e3192",
"text": "Status of This Memo This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited. Abstract JavaScript Object Notation (JSON) is a lightweight, text-based, language-independent data interchange format. It was derived from the ECMAScript Programming Language Standard. JSON defines a small set of formatting rules for the portable representation of structured data.",
"title": ""
},
{
"docid": "962831a1fa8771c68feb894dc2c63943",
"text": "San-Francisco in the US and Natal in Brazil are two coastal cities which are known rather for its tech scene and natural beauty than for its criminal activities. We analyze characteristics of the urban environment in these two cities, deploying a machine learning model to detect categories and hotspots of criminal activities. We propose an extensive set of spatio-temporal & urban features which can significantly improve the accuracy of machine learning models for these tasks, one of which achieved Top 1% performance on a Crime Classification Competition by kaggle.com. Extensive evaluation on several years of crime records from both cities show how some features — such as the street network — carry important information about criminal activities.",
"title": ""
},
{
"docid": "fa03a0640ada358378f1b4915aa68be2",
"text": "Recent evidence suggests that there are two possible systems for empathy: a basic emotional contagion system and a more advanced cognitive perspective-taking system. However, it is not clear whether these two systems are part of a single interacting empathy system or whether they are independent. Additionally, the neuroanatomical bases of these systems are largely unknown. In this study, we tested the hypothesis that emotional empathic abilities (involving the mirror neuron system) are distinct from those related to cognitive empathy and that the two depend on separate anatomical substrates. Subjects with lesions in the ventromedial prefrontal (VM) or inferior frontal gyrus (IFG) cortices and two control groups were assessed with measures of empathy that incorporate both cognitive and affective dimensions. The findings reveal a remarkable behavioural and anatomic double dissociation between deficits in cognitive empathy (VM) and emotional empathy (IFG). Furthermore, precise anatomical mapping of lesions revealed Brodmann area 44 to be critical for emotional empathy while areas 11 and 10 were found necessary for cognitive empathy. These findings are consistent with these cortices being different in terms of synaptic hierarchy and phylogenetic age. The pattern of empathy deficits among patients with VM and IFG lesions represents a first direct evidence of a double dissociation between emotional and cognitive empathy using the lesion method.",
"title": ""
},
{
"docid": "d4d802b296b210a1957b1a214d9fd9fb",
"text": "Many task domains require robots to interpret and act upon natural language commands which are given by people and which refer to the robot’s physical surroundings. Such interpretation is known variously as the symbol grounding problem (Harnad, 1990), grounded semantics (Feldman et al., 1996) and grounded language acquisition (Nenov and Dyer, 1993, 1994). This problem is challenging because people employ diverse vocabulary and grammar, and because robots have substantial uncertainty about the nature and contents of their surroundings, making it difficult to associate the constitutive language elements (principally noun phrases and spatial relations) of the command text to elements of those surroundings. Symbolic models capture linguistic structure but have not scaled successfully to handle the diverse language produced by untrained users. Existing statistical approaches can better handle diversity, but have not to date modeled complex linguistic structure, limiting achievable accuracy. Recent hybrid approaches have addressed limitations in scaling and complexity, but have not effectively associated linguistic and perceptual features. Our framework, called Generalized Grounding Graphs (G), addresses these issues by defining a probabilistic graphical model dynamically according to the linguistic parse structure of a natural language command. This approach scales effectively, handles linguistic diversity, and enables the system to associate parts of a command with the specific objects, places, and events in the external world to which they refer. We show that robots can learn word meanings and use those learned meanings to robustly follow natural language commands produced by untrained users. We demonstrate our approach for both mobility commands (e.g. route directions like “Go down the hallway through the door”) and mobile manipulation commands (e.g. physical directives like “Pick up the pallet on the truck”) involving a variety of semi-autonomous robotic platforms, including a wheelchair, a microair vehicle, a forklift, and the Willow Garage PR2. The first two authors contributed equally to this paper. 1 ar X iv :1 71 2. 01 09 7v 1 [ cs .C L ] 2 9 N ov 2 01 7",
"title": ""
},
{
"docid": "b941dc9133a12aad0a75d41112e91aa8",
"text": "Recurrent neural network grammars (RNNG) are a recently proposed probabilistic generative modeling family for natural language. They show state-ofthe-art language modeling and parsing performance. We investigate what information they learn, from a linguistic perspective, through various ablations to the model and the data, and by augmenting the model with an attention mechanism (GA-RNNG) to enable closer inspection. We find that explicit modeling of composition is crucial for achieving the best performance. Through the attention mechanism, we find that headedness plays a central role in phrasal representation (with the model’s latent attention largely agreeing with predictions made by hand-crafted head rules, albeit with some important differences). By training grammars without nonterminal labels, we find that phrasal representations depend minimally on nonterminals, providing support for the endocentricity hypothesis.",
"title": ""
},
{
"docid": "86ffd10b7f5f49f8e917be87cdbcb02d",
"text": "Audit logs are considered good practice for business systems, and are required by federal regulations for secure systems, drug approval data, medical information disclosure, financial records, and electronic voting. Given the central role of audit logs, it is critical that they are correct and inalterable. It is not sufficient to say, “our data is correct, because we store all interactions in a separate audit log.” The integrity of the audit log itself must also be guaranteed. This paper proposes mechanisms within a database management system (DBMS), based on cryptographically strong one-way hash functions, that prevent an intruder, including an auditor or an employee or even an unknown bug within the DBMS itself, from silently corrupting the audit log. We propose that the DBMS store additional information in the database to enable a separate audit log validator to examine the database along with this extra information and state conclusively whether the audit log has been compromised. We show with an implementation on a high-performance storage engine that the overhead for auditing is low and that the validator can efficiently and correctly determine if the audit log has been compromised.",
"title": ""
},
{
"docid": "1b9bcb2ab5bc0b2b2e475066a1f78fbe",
"text": "Fragility curves are becoming increasingly common components of flood risk assessments. This report introduces the concept of the fragility curve and shows how fragility curves are related to more familiar reliability concepts, such as the deterministic factor of safety and the relative reliability index. Examples of fragility curves are identified in the literature on structures and risk assessment to identify what methods have been used to develop fragility curves in practice. Four basic approaches are identified: judgmental, empirical, hybrid, and analytical. Analytical approaches are, by far, the most common method encountered in the literature. This group of methods is further decomposed based on whether the limit state equation is an explicit function or an implicit function and on whether the probability of failure is obtained using analytical solution methods or numerical solution methods. Advantages and disadvantages of the various approaches are considered. DISCLAIMER: The contents of this report are not to be used for advertising, publication, or promotional purposes. Citation of trade names does not constitute an official endorsement or approval of the use of such commercial products. All product names and trademarks cited are the property of their respective owners. The findings of this report are not to be construed as an official Department of the Army position unless so designated by other authorized documents. DESTROY THIS REPORT WHEN NO LONGER NEEDED. DO NOT RETURN IT TO THE ORIGINATOR.",
"title": ""
},
{
"docid": "a7cdfdefc87e899596579826dbb137a4",
"text": "Purpose\nThe purpose of this tutorial is to provide an overview of the benefits and challenges associated with the early identification of dyslexia.\n\n\nMethod\nThe literature on the early identification of dyslexia is reviewed. Theoretical arguments and research evidence are summarized. An overview of response to intervention as a method of early identification is provided, and the benefits and challenges associated with it are discussed. Finally, the role of speech-language pathologists in the early identification process is addressed.\n\n\nConclusions\nEarly identification of dyslexia is crucial to ensure that children are able to maximize their educational potential, and speech-language pathologists are well placed to play a role in this process. However, early identification alone is not sufficient-difficulties with reading may persist or become apparent later in schooling. Therefore, continuing progress monitoring and access to suitable intervention programs are essential.",
"title": ""
},
{
"docid": "ae8292c58a58928594d5f3730a6feacf",
"text": "Photoplethysmography (PPG) signals, captured using smart phones are generally noisy in nature. Although they have been successfully used to determine heart rate from frequency domain analysis, further indirect markers like blood pressure (BP) require time domain analysis for which the signal needs to be substantially cleaned. In this paper we propose a methodology to clean such noisy PPG signals. Apart from filtering, the proposed approach reduces the baseline drift of PPG signal to near zero. Furthermore it models each cycle of PPG signal as a sum of 2 Gaussian functions which is a novel contribution of the method. We show that, the noise cleaning effect produces better accuracy and consistency in estimating BP, compared to the state of the art method that uses the 2-element Windkessel model on features derived from raw PPG signal, captured from an Android phone.",
"title": ""
},
{
"docid": "8cdd54a8bd288692132b57cb889b2381",
"text": "This research deals with the soft computing methodology of fuzzy cognitive map (FCM). Here a mathematical description of FCM is presented and a new methodology based on fuzzy logic techniques for developing the FCM is examined. The capability and usefulness of FCM in modeling complex systems and the application of FCM to modeling and describing the behavior of a heat exchanger system is presented. The applicability of FCM to model the supervisor of complex systems is discussed and the FCM-supervisor for evaluating the performance of a system is constructed; simulation results are presented and discussed.",
"title": ""
},
{
"docid": "90d57d4b7fcd45c35e9e738a29badde7",
"text": "This paper deals with the problem of optimizing a factory floor layout in a Slovenian furniture factory. First, the current state of the manufacturing system is analyzed by constructing a discrete event simulation (DES) model that reflects the manufacturing processes. The company produces over 10,000 different products, and their manufacturing processes include approximately 30,000 subprocesses. Therefore, manually constructing a model to include every subprocess is not feasible. To overcome this problem, a method for automated model construction was developed to construct a DES model based on a selection of manufacturing orders and relevant subprocesses. The obtained simulation model provided insight into the manufacturing processes and enable easy modification of model parameters for optimizing the manufacturing processes. Finally, the optimization problem was solved: the total distance the products had to traverse between machines was minimized by devising an optimal machine layout. With the introduction of certain simplifications, the problem was best described as a quadratic assignment problem. A novel heuristic method based on force-directed graph drawing algorithms was developed. Optimizing the floor layout resulted in a significant reduction of total travel distance for the products.",
"title": ""
},
{
"docid": "d29485bc844995b639bb497fb05fcb6a",
"text": "Vol. LII (June 2015), 375–393 375 © 2015, American Marketing Association ISSN: 0022-2437 (print), 1547-7193 (electronic) *Paul R. Hoban is Assistant Professor of Marketing, Wisconsin School of Business, University of Wisconsin–Madison (e-mail: phoban@ bus. wisc. edu). Randolph E. Bucklin is Professor of Marketing, Peter W. Mullin Chair in Management, UCLA Anderson School of Management, University of California, Los Angeles (e-mail: randy.bucklin@anderson. ucla. edu). Avi Goldfarb served as associate editor for this article. PAUL R. HOBAN and RANDOLPH E. BUCKLIN*",
"title": ""
},
{
"docid": "79102cc14ce0d11b52b4288d2e52de10",
"text": "This paper presents a text detection method based on Extremal Regions (ERs) and Corner-HOG feature. Local Histogram of Oriented Gradient (HOG) extracted around corners (Corner-HOG) is used to effectively prune the non-text components in the component tree. Experimental results show that the Corner-HOG based pruning method can discard an average of 83.06% of all ERs in an image while preserving a recall of 90.51% of the text components. The remaining ERs are then grouped into text lines and candidate text lines are verified using black-white transition feature and the covariance descriptor of HOG. Experimental results on the 2011 Robust Reading Competition dataset show that the proposed text detection method provides promising performance.",
"title": ""
},
{
"docid": "d4d24bee47b97e1bf4aadad0f3993e78",
"text": "An aircraft landed safely is the result of a huge organizational effort required to cope with a complex system made up of humans, technology and the environment. The aviation safety record has improved dramatically over the years to reach an unprecedented low in terms of accidents per million take-offs, without ever achieving the “zero accident” target. The introduction of automation on board airplanes must be acknowledged as one of the driving forces behind the decline in the accident rate down to the current level.",
"title": ""
},
{
"docid": "e141a2e89edc2398c27a740a0bc885c0",
"text": "Modern information retrieval (IR) systems exhibit user dynamics through interactivity. These dynamic aspects of IR, including changes found in data, users, and systems, are increasingly being utilized in search engines. Session search is one such IR task—document retrieval within a session. During a session, a user constantly modifies queries to find documents that fulfill an information need. Existing IR techniques for assisting the user in this task are limited in their ability to optimize over changes, learn with a minimal computational footprint, and be responsive. This article proposes a novel query change retrieval model (QCM), which uses syntactic editing changes between consecutive queries, as well as the relationship between query changes and previously retrieved documents, to enhance session search. We propose modeling session search as a Markov decision process (MDP). We consider two agents in this MDP: the user agent and the search engine agent. The user agent’s actions are query changes that we observe, and the search engine agent’s actions are term weight adjustments as proposed in this work. We also investigate multiple query aggregation schemes and their effectiveness on session search. Experiments show that our approach is highly effective and outperforms top session search systems in TREC 2011 and TREC 2012.",
"title": ""
},
{
"docid": "412e10ae26c0abcb37379c6b37ea022a",
"text": "This paper presents the Gavagai Living Lexicon, which is an online distributional semantic model currently available in 20 different languages. We describe the underlying distributional semantic model, and how we have solved some of the challenges in applying such a model to large amounts of streaming data. We also describe the architecture of our implementation, and discuss how we deal with continuous quality assurance of the lexicon.",
"title": ""
}
] | scidocsrr |
b4ba615cd6e6c6f74f54b6d0cb2a5a50 | A wearable device for physical and emotional health monitoring | [
{
"docid": "6fc013e132bdd347f355c0b187cb5ca9",
"text": "Current wireless technologies, such as wireless body area networks and wireless personal area networks, provide promising applications in medical monitoring systems to measure specified physiological data and also provide location-based information, if required. With the increasing sophistication of wearable and implantable medical devices and their integration with wireless sensors, an ever-expanding range of therapeutic and diagnostic applications is being pursued by research and commercial organizations. This paper aims to provide a comprehensive review of recent developments in wireless sensor technology for monitoring behaviour related to human physiological responses. It presents background information on the use of wireless technology and sensors to develop a wireless physiological measurement system. A generic miniature platform and other available technologies for wireless sensors have been studied in terms of hardware and software structural requirements for a low-cost, low-power, non-invasive and unobtrusive system.",
"title": ""
}
] | [
{
"docid": "57eb8d5adbf8374710a3c40074fb38f8",
"text": "Information security and privacy in the healthcare sector is an issue of growing importance. The adoption of digital patient records, increased regulation, provider consolidation and the increasing need for information exchange between patients, providers and payers, all point towards the need for better information security. We critically survey the literature on information security and privacy in healthcare, published in information systems journals as well as many other related disciplines including health informatics, public health, law, medicine, the trade press and industry reports. In this paper, we provide a holistic view of the recent research and suggest new areas of interest to the information systems community.",
"title": ""
},
{
"docid": "ef7b6c2b0254535e9dbf85a4af596080",
"text": "African swine fever virus (ASFV) is a highly virulent swine pathogen that has spread across Eastern Europe since 2007 and for which there is no effective vaccine or treatment available. The dynamics of shedding and excretion is not well known for this currently circulating ASFV strain. Therefore, susceptible pigs were exposed to pigs intramuscularly infected with the Georgia 2007/1 ASFV strain to measure those dynamics through within- and between-pen transmission scenarios. Blood, oral, nasal and rectal fluid samples were tested for the presence of ASFV by virus titration (VT) and quantitative real-time polymerase chain reaction (qPCR). Serum was tested for the presence of ASFV-specific antibodies. Both intramuscular inoculation and contact transmission resulted in development of acute disease in all pigs although the experiments indicated that the pathogenesis of the disease might be different, depending on the route of infection. Infectious ASFV was first isolated in blood among the inoculated pigs by day 3, and then chronologically among the direct and indirect contact pigs, by day 10 and 13, respectively. Close to the onset of clinical signs, higher ASFV titres were found in blood compared with nasal and rectal fluid samples among all pigs. No infectious ASFV was isolated in oral fluid samples although ASFV genome copies were detected. Only one animal developed antibodies starting after 12 days post-inoculation. The results provide quantitative data on shedding and excretion of the Georgia 2007/1 ASFV strain among domestic pigs and suggest a limited potential of this isolate to cause persistent infection.",
"title": ""
},
{
"docid": "28823f624c037a8b54e9906c3b443f38",
"text": "Aging is associated with progressive losses in function across multiple systems, including sensation, cognition, memory, motor control, and affect. The traditional view has been that functional decline in aging is unavoidable because it is a direct consequence of brain machinery wearing down over time. In recent years, an alternative perspective has emerged, which elaborates on this traditional view of age-related functional decline. This new viewpoint--based upon decades of research in neuroscience, experimental psychology, and other related fields--argues that as people age, brain plasticity processes with negative consequences begin to dominate brain functioning. Four core factors--reduced schedules of brain activity, noisy processing, weakened neuromodulatory control, and negative learning--interact to create a self-reinforcing downward spiral of degraded brain function in older adults. This downward spiral might begin from reduced brain activity due to behavioral change, from a loss in brain function driven by aging brain machinery, or more likely from both. In aggregate, these interrelated factors promote plastic changes in the brain that result in age-related functional decline. This new viewpoint on the root causes of functional decline immediately suggests a remedial approach. Studies of adult brain plasticity have shown that substantial improvement in function and/or recovery from losses in sensation, cognition, memory, motor control, and affect should be possible, using appropriately designed behavioral training paradigms. Driving brain plasticity with positive outcomes requires engaging older adults in demanding sensory, cognitive, and motor activities on an intensive basis, in a behavioral context designed to re-engage and strengthen the neuromodulatory systems that control learning in adults, with the goal of increasing the fidelity, reliability, and power of cortical representations. Such a training program would serve a substantial unmet need in aging adults. Current treatments directed at age-related functional losses are limited in important ways. Pharmacological therapies can target only a limited number of the many changes believed to underlie functional decline. Behavioral approaches focus on teaching specific strategies to aid higher order cognitive functions, and do not usually aspire to fundamentally change brain function. A brain-plasticity-based training program would potentially be applicable to all aging adults with the promise of improving their operational capabilities. We have constructed such a brain-plasticity-based training program and conducted an initial randomized controlled pilot study to evaluate the feasibility of its use by older adults. A main objective of this initial study was to estimate the effect size on standardized neuropsychological measures of memory. We found that older adults could learn the training program quickly, and could use it entirely unsupervised for the majority of the time required. Pre- and posttesting documented a significant improvement in memory within the training group (effect size 0.41, p<0.0005), with no significant within-group changes in a time-matched computer using active control group, or in a no-contact control group. Thus, a brain-plasticity-based intervention targeting normal age-related cognitive decline may potentially offer benefit to a broad population of older adults.",
"title": ""
},
{
"docid": "503ccd79172e5b8b3cc3a26cf0d1b485",
"text": "The field-of-view of standard cameras is very small, which is one of the main reasons that contextual information is not as useful as it should be for object detection. To overcome this limitation, we advocate the use of 360◦ full-view panoramas in scene understanding, and propose a whole-room context model in 3D. For an input panorama, our method outputs 3D bounding boxes of the room and all major objects inside, together with their semantic categories. Our method generates 3D hypotheses based on contextual constraints and ranks the hypotheses holistically, combining both bottom-up and top-down context information. To train our model, we construct an annotated panorama dataset and reconstruct the 3D model from single-view using manual annotation. Experiments show that solely based on 3D context without any image-based object detector, we can achieve a comparable performance with the state-of-the-art object detector. This demonstrates that when the FOV is large, context is as powerful as object appearance. All data and source code are available online.",
"title": ""
},
{
"docid": "9f1d881193369f1b7417d71a9a62bc19",
"text": "Neurofeedback (NFB) is a potential alternative treatment for children with ADHD that aims to optimize brain activity. Whereas most studies into NFB have investigated behavioral effects, less attention has been paid to the effects on neurocognitive functioning. The present randomized controlled trial (RCT) compared neurocognitive effects of NFB to (1) optimally titrated methylphenidate (MPH) and (2) a semi-active control intervention, physical activity (PA), to control for non-specific effects. Using a multicentre three-way parallel group RCT design, children with ADHD, aged 7–13, were randomly allocated to NFB (n = 39), MPH (n = 36) or PA (n = 37) over a period of 10–12 weeks. NFB comprised theta/beta training at CZ. The PA intervention was matched in frequency and duration to NFB. MPH was titrated using a double-blind placebo controlled procedure to determine the optimal dose. Neurocognitive functioning was assessed using parameters derived from the auditory oddball-, stop-signal- and visual spatial working memory task. Data collection took place between September 2010 and March 2014. Intention-to-treat analyses showed improved attention for MPH compared to NFB and PA, as reflected by decreased response speed during the oddball task [η p 2 = 0.21, p < 0.001], as well as improved inhibition, impulsivity and attention, as reflected by faster stop signal reaction times, lower commission and omission error rates during the stop-signal task (range η p 2 = 0.09–0.18, p values <0.008). Working memory improved over time, irrespective of received treatment (η p 2 = 0.17, p < 0.001). Overall, stimulant medication showed superior effects over NFB to improve neurocognitive functioning. Hence, the findings do not support theta/beta training applied as a stand-alone treatment in children with ADHD.",
"title": ""
},
{
"docid": "faec1a6b42cfdd303309c69c4185c9fe",
"text": "The currency which is imitated with illegal sanction of state and government is counterfeit currency. Every country incorporates a number of security features for its currency security. Currency counterfeiting is always been a challenging term for financial system of any country. The problem of counterfeiting majorly affects the economical as well as financial growth of a country. In view of the problem various studies about counterfeit detection has been conducted using various techniques and variety of tools. This paper focuses on the researches and studies that have been conducted by various researchers. The paper highlighted the methodologies used and the particular characteristics features considered for counterfeit money detection.",
"title": ""
},
{
"docid": "b907741ee0918dcbc2c2e42d106e35a4",
"text": "This paper investigates decoding of low-density parity-check (LDPC) codes over the binary erasure channel (BEC). We study the iterative and maximum-likelihood (ML) decoding of LDPC codes on this channel. We derive bounds on the ML decoding of LDPC codes on the BEC. We then present an improved decoding algorithm. The proposed algorithm has almost the same complexity as the standard iterative decoding. However, it has better performance. Simulations show that we can decrease the error rate by several orders of magnitude using the proposed algorithm. We also provide some graph-theoretic properties of different decoding algorithms of LDPC codes over the BEC which we think are useful to better understand the LDPC decoding methods, in particular, for finite-length codes.",
"title": ""
},
{
"docid": "0c8947cbaa2226a024bf3c93541dcae1",
"text": "As storage systems grow in size and complexity, they are increasingly confronted with concurrent disk failures together with multiple unrecoverable sector errors. To ensure high data reliability and availability, erasure codes with high fault tolerance are required. In this article, we present a new family of erasure codes with high fault tolerance, named GRID codes. They are called such because they are a family of strip-based codes whose strips are arranged into multi-dimensional grids. In the construction of GRID codes, we first introduce a concept of matched codes and then discuss how to use matched codes to construct GRID codes. In addition, we propose an iterative reconstruction algorithm for GRID codes. We also discuss some important features of GRID codes. Finally, we compare GRID codes with several categories of existing codes. Our comparisons show that for large-scale storage systems, our GRID codes have attractive advantages over many existing erasure codes: (a) They are completely XOR-based and have very regular structures, ensuring easy implementation; (b) they can provide up to 15 and even higher fault tolerance; and (c) their storage efficiency can reach up to 80% and even higher. All the advantages make GRID codes more suitable for large-scale storage systems.",
"title": ""
},
{
"docid": "e3c41b4fc2bcb71872d1d18339e1498c",
"text": "Visual Question Answering (VQA) has received a lot of attention over the past couple of years. A number of deep learning models have been proposed for this task. However, it has been shown [1–4] that these models are heavily driven by superficial correlations in the training data and lack compositionality – the ability to answer questions about unseen compositions of seen concepts. This compositionality is desirable and central to intelligence. In this paper, we propose a new setting for Visual Question Answering where the test question-answer pairs are compositionally novel compared to training question-answer pairs. To facilitate developing models under this setting, we present a new compositional split of the VQA v1.0 [5] dataset, which we call Compositional VQA (C-VQA). We analyze the distribution of questions and answers in the C-VQA splits. Finally, we evaluate several existing VQA models under this new setting and show that the performances of these models degrade by a significant amount compared to the original VQA setting.",
"title": ""
},
{
"docid": "3f88c453eab8b2fbfffbf98fee34d086",
"text": "Face recognition become one of the most important and fastest growing area during the last several years and become the most successful application of image analysis and broadly used in security system. It has been a challenging, interesting, and fast growing area in real time applications. The propose method is tested using a benchmark ORL database that contains 400 images of 40 persons. Pre-Processing technique are applied on the ORL database to increase the recognition rate. The best recognition rate is 97.5% when tested using 9 training images and 1 testing image. Increasing image database brightness is efficient and will increase the recognition rate. Resizing images using 0.3 scale is also efficient and will increase the recognition rate. PCA is used for feature extraction and dimension reduction. Euclidean distance is used for matching process.",
"title": ""
},
{
"docid": "0a3d649baf7483245167979fbbb008d2",
"text": "Students participate more in a classroom and also report a better understanding of course concepts when steps are taken to actively engage them. The Student Engagement (SE) Survey was developed and used in this study for measuring student engagement at the class level and consisted of 14 questions adapted from the original National Survey of Student Engagement (NSSE) survey. The adapted survey examined levels of student engagement in 56 classes at an upper mid-western university in the USA. Campus-wide faculty members participated in a program for training them in innovative teaching methods including problem-based learning (PBL). Results of this study typically showed a higher engagement in higher-level classes and also those classes with fewer students. In addition, the level of engagement was typically higher in those classrooms with more PBL.",
"title": ""
},
{
"docid": "87e2d691570403ae36e0a9a87099ad71",
"text": "Audiovisual translation is one of several overlapping umbrella terms that include ‘media translation’, ‘multimedia translation’, ‘multimodal translation’ and ‘screen translation’. These different terms all set out to cover the interlingual transfer of verbal language when it is transmitted and accessed both visually and acoustically, usually, but not necessarily, through some kind of electronic device. Theatrical plays and opera, for example, are clearly audiovisual yet, until recently, audiences required no technological devices to access their translations; actors and singers simply acted and sang the translated versions. Nowadays, however, opera is frequently performed in the original language with surtitles in the target language projected on to the stage. Furthermore, electronic librettos placed on the back of each seat containing translations are now becoming widely available. However, to date most research in audiovisual translation has been dedicated to the field of screen translation, which, while being both audiovisual and multimedial in nature, is specifically understood to refer to the translation of films and other products for cinema, TV, video and DVD. After the introduction of the first talking pictures in the 1920s a solution needed to be found to allow films to circulate despite language barriers. How to translate film dialogues and make movie-going accessible to speakers of all languages was to become a major concern for both North American and European film directors. Today, of course, screens are no longer restricted to cinema theatres alone. Television screens, computer screens and a series of devices such as DVD players, video game consoles, GPS navigation devices and mobile phones are also able to send out audiovisual products to be translated into scores of languages. Hence, strictly speaking, screen translation includes translations for any electronic appliance with a screen; however, for the purposes of this chapter, the term will be used mainly to refer to translations for the most popular products, namely for cinema, TV, video and DVD, and videogames. The two most widespread modalities adopted for translating products for the screen are dubbing and subtitling.1 Dubbing is a process which uses the acoustic channel for translational purposes, while subtitling is visual and involves a written translation that is superimposed on to the",
"title": ""
},
{
"docid": "346fe809a65e28ccdf717752144843d6",
"text": "The continuous increase in quantity and depth of regulation following the financial crisis has left the financial industry in dire need of making its compliance assessment activities more effective. The field of AI & Law provides models that, despite being fit for the representation of semantics of requirements, do not share the approach favoured by the industry which relies on business vocabularies such as SBVR. This paper presents Mercury, a solution for representing the requirements and vocabulary contained in a regulatory text (or business policy) in a SME-friendly way, for the purpose of determining compliance. Mercury includes a structured language based on SBVR, with a rulebook, containing the regulative and constitutive rules, and a vocabulary, containing the actions and factors that determine a rule’s applicability and its legal effect. Mercury includes an XML persistence model and is mapped to an OWL ontology called FIRO, enabling semantic applications.",
"title": ""
},
{
"docid": "1ffe0a1612214af88315a5a751d3bb4f",
"text": "In recent years, it is getting attention for renewable energy sources such as solar energy, fuel cells, batteries or ultracapacitors for distributed power generation systems. This paper proposes a general mathematical model of solar cells and Matlab/Simulink software based simulation of this model has been visually programmed. Proposed model can be used with other hybrid systems to develop solar cell simulations. Also, all equations are performed by using Matlab/Simulink programming.",
"title": ""
},
{
"docid": "e5f3a4d3e1fd591b81da2c08b228ce47",
"text": "This article is a tutorial for researchers who are designing software to perform a creative task and want to evaluate their system using interdisciplinary theories of creativity. Researchers who study human creativity have a great deal to offer computational creativity. We summarize perspectives from psychology, philosophy, cognitive science, and computer science as to how creativity can be measured both in humans and in computers. We survey how these perspectives have been used in computational creativity research and make recommendations for how they should be used.",
"title": ""
},
{
"docid": "3df57ba5139950ec58785ed669094d26",
"text": "In this paper we present the model used by the team Rivercorners for the 2017 RepEval shared task. First, our model separately encodes a pair of sentences into variable-length representations by using a bidirectional LSTM. Later, it creates fixed-length raw representations by means of simple aggregation functions, which are then refined using an attention mechanism. Finally it combines the refined representations of both sentences into a single vector to be used for classification. With this model we obtained test accuracies of 72.057% and 72.055% in the matched and mismatched evaluation tracks respectively, outperforming the LSTM baseline, and obtaining performances similar to a model that relies on shared information between sentences (ESIM). When using an ensemble both accuracies increased to 72.247% and 72.827% respectively.",
"title": ""
},
{
"docid": "29eebb40973bdfac9d1f1941d4c7c889",
"text": "This paper explains a procedure for getting models of robot kinematics and dynamics that are appropriate for robot control design. The procedure consists of the following steps: 1) derivation of robot kinematic and dynamic models and establishing correctness of their structures; 2) experimental estimation of the model parameters; 3) model validation; and 4) identification of the remaining robot dynamics, not covered with the derived model. We give particular attention to the design of identification experiments and to online reconstruction of state coordinates, as these strongly influence the quality of the estimation process. The importance of correct friction modeling and the estimation of friction parameters are illuminated. The models of robot kinematics and dynamics can be used in model-based nonlinear control. The remaining dynamics cannot be ignored if high-performance robot operation with adequate robustness is required. The complete procedure is demonstrated for a direct-drive robotic arm with three rotational joints.",
"title": ""
},
{
"docid": "830a585529981bd5b61ac5af3055d933",
"text": "Automatic retinal image analysis is emerging as an important screening tool for early detection of eye diseases. Glaucoma is one of the most common causes of blindness. The manual examination of optic disk (OD) is a standard procedure used for detecting glaucoma. In this paper, we present an automatic OD parameterization technique based on segmented OD and cup regions obtained from monocular retinal images. A novel OD segmentation method is proposed which integrates the local image information around each point of interest in multidimensional feature space to provide robustness against variations found in and around the OD region. We also propose a novel cup segmentation method which is based on anatomical evidence such as vessel bends at the cup boundary, considered relevant by glaucoma experts. Bends in a vessel are robustly detected using a region of support concept, which automatically selects the right scale for analysis. A multi-stage strategy is employed to derive a reliable subset of vessel bends called r-bends followed by a local spline fitting to derive the desired cup boundary. The method has been evaluated on 138 images comprising 33 normal and 105 glaucomatous images against three glaucoma experts. The obtained segmentation results show consistency in handling various geometric and photometric variations found across the dataset. The estimation error of the method for vertical cup-to-disk diameter ratio is 0.09/0.08 (mean/standard deviation) while for cup-to-disk area ratio it is 0.12/0.10. Overall, the obtained qualitative and quantitative results show effectiveness in both segmentation and subsequent OD parameterization for glaucoma assessment.",
"title": ""
},
{
"docid": "c0a51f27931d8314b73a7de969bdfb08",
"text": "Organizations need practical security benchmarking tools in order to plan effective security strategies. This paper explores a number of techniques that can be used to measure security within an organization. It proposes a benchmarking methodology that produces results that are of strategic importance to both decision makers and technology implementers.",
"title": ""
},
{
"docid": "e96eaf2bde8bf50605b67fb1184b760b",
"text": "In response to your recent publication comparing subjective effects of D9-tetrahydrocannabinol and herbal cannabis (Wachtel et al. 2002), a number of comments are necessary. The first concerns the suitability of the chosen “marijuana” to assay the issues at hand. NIDA cannabis has been previously characterized in a number of studies (Chait and Pierri 1989; Russo et al. 2002), as a crude lowgrade product (2–4% THC) containing leaves, stems and seeds, often 3 or more years old after processing, with a stale odor lacking in terpenoids. This contrasts with the more customary clinical cannabis employed by patients in Europe and North America, composed solely of unseeded flowering tops with a potency of up to 20% THC. Cannabis-based medicine extracts (CBME) (Whittle et al. 2001), employed in clinical trials in the UK (Notcutt 2002; Robson et al. 2002), are extracted from flowering tops with abundant glandular trichomes, and retain full terpenoid and flavonoid components. In the study at issue (Wachtel et al. 2002), we are informed that marijuana contained 2.11% THC, 0.30% cannabinol (CBN), and 0.05% (CBD). The concentration of the latter two cannabinoids is virtually inconsequential. Thus, we are not surprised that no differences were seen between NIDA marijuana with essentially only one cannabinoid, and pure, synthetic THC. In comparison, clinical grade cannabis and CBME customarily contain high quantities of CBD, frequently equaling the percentage of THC (Whittle et al. 2001). Carlini et al. (1974) determined that cannabis extracts produced effects “two or four times greater than that expected from their THC content, based on animal and human studies”. Similarly, Fairbairn and Pickens (1981) detected the presence of unidentified “powerful synergists” in cannabis extracts, causing 330% greater activity in mice than THC alone. The clinical contribution of other CBD and other cannabinoids, terpenoids and flavonoids to clinical cannabis effects has been espoused as an “entourage effect” (Mechoulam and Ben-Shabat 1999), and is reviewed in detail by McPartland and Russo (2001). Briefly summarized, CBD has anti-anxiety effects (Zuardi et al. 1982), anti-psychotic benefits (Zuardi et al. 1995), modulates metabolism of THC by blocking its conversion to the more psychoactive 11-hydroxy-THC (Bornheim and Grillo 1998), prevents glutamate excitotoxicity, serves as a powerful anti-oxidant (Hampson et al. 2000), and has notable anti-inflammatory and immunomodulatory effects (Malfait et al. 2000). Terpenoid cannabis components probably also contribute significantly to clinical effects of cannabis and boil at comparable temperatures to THC (McPartland and Russo 2001). Cannabis essential oil demonstrates serotonin receptor binding (Russo et al. 2000). Its terpenoids include myrcene, a potent analgesic (Rao et al. 1990) and anti-inflammatory (Lorenzetti et al. 1991), betacaryophyllene, another anti-inflammatory (Basile et al. 1988) and gastric cytoprotective (Tambe et al. 1996), limonene, a potent inhalation antidepressant and immune stimulator (Komori et al. 1995) and anti-carcinogenic (Crowell 1999), and alpha-pinene, an anti-inflammatory (Gil et al. 1989) and bronchodilator (Falk et al. 1990). Are these terpenoid effects significant? A dried sample of drug-strain cannabis buds was measured as displaying an essential oil yield of 0.8% (Ross and ElSohly 1996), or a putative 8 mg per 1000 mg cigarette. Buchbauer et al. 
(1993) demonstrated that 20–50 mg of essential oil in the ambient air in mouse cages produced measurable changes in behavior, serum levels, and bound to cortical cells. Similarly, Komori et al. (1995) employed a gel of citrus fragrance with limonene to produce a significant antidepressant benefit in humans, obviating the need for continued standard medication in some patients, and also improving CD4/8 immunologic ratios. These data would",
"title": ""
}
] | scidocsrr |
5b64d5546765f7ad18ec9b4bda17a71f | Investigation of friction characteristics of a tendon driven wearable robotic hand | [
{
"docid": "030b25a7c93ca38dec71b301843c7366",
"text": "Simple grippers with one or two degrees of freedom are commercially available prosthetic hands; these pinch type devices cannot grasp small cylinders and spheres because of their small degree of freedom. This paper presents the design and prototyping of underactuated five-finger prosthetic hand for grasping various objects in daily life. Underactuated mechanism enables the prosthetic hand to move fifteen compliant joints only by one ultrasonic motor. The innovative design of this prosthetic hand is the underactuated mechanism optimized to distribute grasping force like those of humans who can grasp various objects robustly. Thanks to human like force distribution, the prototype of prosthetic hand could grasp various objects in daily life and heavy objects with the maximum ejection force of 50 N that is greater than other underactuated prosthetic hands.",
"title": ""
},
{
"docid": "720eccb945faa357bc44c5aa33fe60a9",
"text": "The evolution of an arm exoskeleton design for treating shoulder pathology is examined. Tradeoffs between various kinematics configurations are explored, and a device with five active degrees of freedom is proposed. Two rapid-prototype designs were built and fitted to several subjects to verify the kinematic design and determine passive link adjustments. Control modes are developed for exercise therapy and functional rehabilitation, and a distributed software architecture that incorporates computer safety monitoring is described. Although intended primarily for therapy, the exoskeleton is also used to monitor progress in strength, range of motion, and functional task performance",
"title": ""
}
] | [
{
"docid": "fb2ff96dbfe584f450dd19f8d3cea980",
"text": "[1] Nondestructive imaging methods such as X-ray computed tomography (CT) yield high-resolution, three-dimensional representations of pore space and fluid distribution within porous materials. Steadily increasing computational capabilities and easier access to X-ray CT facilities have contributed to a recent surge in microporous media research with objectives ranging from theoretical aspects of fluid and interfacial dynamics at the pore scale to practical applications such as dense nonaqueous phase liquid transport and dissolution. In recent years, significant efforts and resources have been devoted to improve CT technology, microscale analysis, and fluid dynamics simulations. However, the development of adequate image segmentation methods for conversion of gray scale CT volumes into a discrete form that permits quantitative characterization of pore space features and subsequent modeling of liquid distribution and flow processes seems to lag. In this paper we investigated the applicability of various thresholding and locally adaptive segmentation techniques for industrial and synchrotron X-ray CT images of natural and artificial porous media. A comparison between directly measured and image-derived porosities clearly demonstrates that the application of different segmentation methods as well as associated operator biases yield vastly differing results. This illustrates the importance of the segmentation step for quantitative pore space analysis and fluid dynamics modeling. Only a few of the tested methods showed promise for both industrial and synchrotron tomography. Utilization of local image information such as spatial correlation as well as the application of locally adaptive techniques yielded significantly better results.",
"title": ""
},
{
"docid": "79a2cc561cd449d8abb51c162eb8933d",
"text": "We introduce a new test of how well language models capture meaning in children’s books. Unlike standard language modelling benchmarks, it distinguishes the task of predicting syntactic function words from that of predicting lowerfrequency words, which carry greater semantic content. We compare a range of state-of-the-art models, each with a different way of encoding what has been previously read. We show that models which store explicit representations of long-term contexts outperform state-of-the-art neural language models at predicting semantic content words, although this advantage is not observed for syntactic function words. Interestingly, we find that the amount of text encoded in a single memory representation is highly influential to the performance: there is a sweet-spot, not too big and not too small, between single words and full sentences that allows the most meaningful information in a text to be effectively retained and recalled. Further, the attention over such window-based memories can be trained effectively through self-supervision. We then assess the generality of this principle by applying it to the CNN QA benchmark, which involves identifying named entities in paraphrased summaries of news articles, and achieve state-of-the-art performance.",
"title": ""
},
{
"docid": "7f1eb105b7a435993767e4a4b40f7ed9",
"text": "In the last two decades, organizations have recognized, indeed fixated upon, the impOrtance of quality and quality management One manifestation of this is the emergence of the total quality management (TQM) movement, which has been proclaimed as the latest and optimal way of managing organizations. Likewise, in the domain of human resource management, the concept of quality of work life (QWL) has also received much attention of late from theoreticians, researchers, and practitioners. However, little has been done to build a bridge between these two increasingly important concepts, QWL and TQM. The purpose of this research is to empirically examine the relationship between quality of work life (the internalized attitudes employees' have about their jobs) and an indicatorofTQM, customer service attitudes, CSA (the externalized signals employees' send to customers about their jobs). In addition, this study examines how job involvement and organizational commitment mediate the relationship between QWL and CSA. OWL and <:sA HlU.3 doc JJ a9t94 page 3 INTRODUCTION Quality and quality management have become increasingly important topics for both practitioners and researchers (Anderson, Rungtusanatham, & Schroeder, 1994). Among the many quality related activities that have arisen, the principle of total quality mana~ement (TQM) has been advanced as the optimal approach for managing people and processes. Indeed, it is considered by some to be the key to ensuring the long-term viability of organizations (Feigenbaum, 1982). Ofcourse, niany companies have invested heavily in total quality efforts in the form of capital expenditures on plant and equipment, and through various human resource management programs designed to spread the quality gospel. However, many still argue that there is insufficient theoretical development and empirical eviden~e for the determinants and consequences of quality management initiatives (Dean & Bowen, 1994). Mter reviewing the relevant research literatures, we find that three problems persist in the research on TQM. First, a definition of quality has not been agreed upon. Even more problematic is the fact that many of the definitions that do exist are continuously evolving. Not smprisingly, these variable definitions often lead to inconsistent and even conflicting conclusions, Second, very few studies have systematically examined these factors that influence: the quality of goods and services, the implementation of quality activities, or the performance of organizations subsequent to undertaking quality initiatives (Spencer, 1994). Certainly this has been true for quality-related human resource management interventions. Last, TQM has suffered from an \"implementation problem\" (Reger, Gustafson, Demarie, & Mullane, 1994, p. 565) which has prevented it from transitioning from the theoretical to the applied. In the domain of human resource management, quality of working life (QWL) has also received a fair amount of attention of late from theorists, researchers, and practitioners. The underlying, and mostimportant, principles of QWL capture an employee's satisfaction with and feelings about their: work, work environment, and organization. Most who study QWL, and TQM for that matter, tend to focus on the importance of employee systems and organizational performance, whereas researchers in the field ofHRM OWLmdCSA HlU.3doc 1J1l2f}4 pBgc4 usually emphasize individual attitudes and individual performance (Walden, 1994). 
Furthermore, as Walden (1994) alludes to, there are significantly different managerial prescriptions and applied levels for routine human resource management processes, such as selection, performance appraisal, and compensation, than there are for TQM-driven processes, like teamwork, participative management, and shared decision-making (Deming, 1986, 1993; Juran, 1989; M. Walton, 1986; Dean & Bowen, 1994). To reiterate, these variations are attributable to the difference between a micro focus on employees as opposed to a more macro focus on employee systems. These specific differences are but a few of the instances where the views of TQM and the views of traditional HRM are not aligned (Cardy & Dobbins, 1993). In summary, although TQM is a ubiquitous organizational phenomenon, it has been given little research attention, especially in the form of empirical studies. Therefore, the goal of this study is to provide an empirical assessment of how one, internalized, indicator of HRM effectiveness, QWL, is associated with one, externalized, indicator of TQM, customer service attitudes, CSA. In doing so, it bridges the gap between \"employee-focused\" HRM outcomes and \"customer-focused\" TQM consequences. In addition, it examines the mediating effects of organizational commitment and job involvement on this relationship. QUALITY OF WORK LIFE AND CUSTOMER SERVICE ATTITUDES In this section, we introduce and review the main principles of customer service attitudes, CSA, and discuss its measurement. Thereafter, our extended conceptualization and measurement of QWL will be presented. Finally, two variables hypothesized to function as mediators of the relationship between CSA and QWL, organization commitment and job involvement, will be explored. Customer Service Attitudes (CSA) Despite all the ruminations about it in the business and trade press, TQM still remains an ambiguous notion, one that often gives rise to as many different definitions as there are observers. Some focus on the presence of organizational systems. Others, the importance of leadership. Many stress the need to reduce variation in organizational processes (Deming, 1986). A number emphasize reducing costs through quality improvement (P.B. Crosby, 1979). Still others focus on quality planning, control, and improvement (Juran, 1989). Regardless of these differences, however, the most important, generally agreed upon principle is to be \"customer focused\" (Feigenbaum, 1982). The cornerstone for this principle is the belief that customer satisfaction and customer judgments about the organization and its products are the most important determinants of long-term organizational viability (Oliva, Oliver & MacMillan, 1992). Not surprisingly, this belief is a prominent tenet in both the manufacturing and service sectors alike. Conventional wisdom holds that quality can best be evaluated from the customers' perspective. Certainly, customers can easily articulate how well a product or service meets their expectations. Therefore, managers and researchers must take into account subjective and cognitive factors that influence customers' judgments when trying to identify influential customer cues, rather than just relying on organizational presumptions. Recently, for example, Hannon & Sano (1994) described how customer-driven HR strategies and practices are pervasive in Japan.
An example they cited was the practice of making the top graduates from the best schools work in low-level customer service jobs for their first 1-2 years so that they might better understand customers and their needs. To be sure, defining quality in terms of whether a product or service meets the expectations of customers is all-encompassing. As a result of the breadth of this issue, and the limited research on this topic, many important questions about the service relationship, particularly those pertaining to exchanges between employees and customers, linger. Some include, \"What are the key dimensions of service quality?\" and \"What are the actions service employees might direct their efforts to in order to foster good relationships with customers?\" Arguably, the most readily obvious manifestations of quality for any customer are the service attitudes of employees. In fact, during the employee-customer interaction, conventional wisdom holds that employees' customer service attitudes influence customer satisfaction, customer evaluations, and decisions to buy. According to Rosander (1980), there are five dimensions of service quality: quality of employee performance, facility, data, decision, and outcome. Undoubtedly, the performance of the employee influences customer satisfaction. This phenomenon has been referred to as interactive quality (Lehtinen & Lehtinen, 1982). Parasuraman, Zeithaml, & Berry (1985) go so far as to suggest that service quality is ultimately a function of the relationship between the employee and the customer, not the product or the price. Sasser, Olsen, & Wyckoff (1987) echo the assertion that personnel performance is a critical factor in the satisfaction of customers. If all of them are right, the relationship between satisfaction with quality of work life and customer service attitudes cannot be understated. Measuring Customer Service Attitudes The challenge of measuring service quality has increasingly captured the attention of researchers (Teas, 1994; Cronin & Taylor, 1992). While the substance and determinants of quality may remain undefined, its importance to organizations is unquestionable. Nevertheless, numerous problems inherent in the measurement of customer service attitudes still exist (Reeves & Bednar, 1994). Perhaps the complexities involved in measuring this construct have deterred many researchers from attempting to define and model service quality. Maybe this is also the reason why many of the efforts to define and measure service quality have emanated primarily from manufacturing, rather than service, settings. When it has been measured, quality has sometimes been defined as a \"zero defect\" policy, a perspective the Japanese have embraced. Alternatively, P.B. Crosby (1979) quantifies quality as \"conformance to requirements.\" Garvin (1983; 1988), on the other hand, measures quality in terms of counting the incidence of \"internal failures\" and \"external failures.\" Other definitions include \"value\" (Abbot, 1955; Feigenbaum, 1982), \"concordance to specification\" (Gilmo",
"title": ""
},
{
"docid": "83187228617d62fb37f99cf107c7602a",
"text": "A very important class of spatial queries consists of nearestneighbor (NN) query and its variations. Many studies in the past decade utilize R-trees as their underlying index structures to address NN queries efficiently. The general approach is to use R-tree in two phases. First, R-tree’s hierarchical structure is used to quickly arrive to the neighborhood of the result set. Second, the R-tree nodes intersecting with the local neighborhood (Search Region) of an initial answer are investigated to find all the members of the result set. While R-trees are very efficient for the first phase, they usually result in the unnecessary investigation of many nodes that none or only a small subset of their including points belongs to the actual result set. On the other hand, several recent studies showed that the Voronoi diagrams are extremely efficient in exploring an NN search region, while due to lack of an efficient access method, their arrival to this region is slow. In this paper, we propose a new index structure, termed VoR-Tree that incorporates Voronoi diagrams into R-tree, benefiting from the best of both worlds. The coarse granule rectangle nodes of R-tree enable us to get to the search region in logarithmic time while the fine granule polygons of Voronoi diagram allow us to efficiently tile or cover the region and find the result. Utilizing VoR-Tree, we propose efficient algorithms for various Nearest Neighbor queries, and show that our algorithms have better I/O complexity than their best competitors.",
"title": ""
},
{
"docid": "90e6a1fa70ddec11248ba658623d2d6e",
"text": "This paper proposes a new technique for grid synchronization under unbalanced and distorted conditions, i.e., the dual second order generalised integrator - frequency-locked loop (DSOGI-FLL). This grid synchronization system results from the application of the instantaneous symmetrical components method on the stationary and orthogonal alphabeta reference frame. The second order generalized integrator concept (SOGI) is exploited to generate in-quadrature signals used on the alphabeta reference frame. The frequency-adaptive characteristic is achieved by a simple control loop, without using either phase-angles or trigonometric functions. In this paper, the development of the DSOGI-FLL is plainly exposed and hypothesis and conclusions are verified by simulation and experimental results",
"title": ""
},
{
"docid": "026408a6ad888ea0bcf298a23ef77177",
"text": "The microwave power transmission is an approach for wireless power transmission. As an important component of a microwave wireless power transmission systems, microwave rectennas are widely studied. A rectenna based on a microstrip dipole antenna and a microwave rectifier with high conversion efficiency were designed at 2.45 GHz. The dipole antenna achieved a gain of 5.2 dBi, a return loss greater than 10 dB, and a bandwidth of 20%. The microwave to DC (MW-DC) conversion efficiency of the rectifier was measured as 83% with 20 dBm input power and 600 Ω load. There are 72 rectennas to form an array with an area of 50 cm by 50 cm. The measured results show that the arrangement of the rectenna connection is an effective way to improve the total conversion efficiency, when the microwave power distribution is not uniform on rectenna array. The experimental results show that the highest microwave power transmission efficiency reaches 67.6%.",
"title": ""
},
{
"docid": "a0f20c2481aefc3b431f708ade0cc1aa",
"text": "Objective Video game violence has become a highly politicized issue for scientists and the general public. There is continuing concern that playing violent video games may increase the risk of aggression in players. Less often discussed is the possibility that playing violent video games may promote certain positive developments, particularly related to visuospatial cognition. The objective of the current article was to conduct a meta-analytic review of studies that examine the impact of violent video games on both aggressive behavior and visuospatial cognition in order to understand the full impact of such games. Methods A detailed literature search was used to identify peer-reviewed articles addressing violent video game effects. Effect sizes r (a common measure of effect size based on the correlational coefficient) were calculated for all included studies. Effect sizes were adjusted for observed publication bias. Results Results indicated that publication bias was a problem for studies of both aggressive behavior and visuospatial cognition. Once corrected for publication bias, studies of video game violence provided no support for the hypothesis that violent video game playing is associated with higher aggression. However playing violent video games remained related to higher visuospatial cognition (r x = 0.36). Conclusions Results from the current analysis did not support the conclusion that violent video game playing leads to aggressive behavior. However, violent video game playing was associated with higher visuospatial cognition. It may be advisable to reframe the violent video game debate in reference to potential costs and benefits of this medium.",
"title": ""
},
{
"docid": "bf8f46e4c85f7e45879cee4282444f78",
"text": "Influence of culture conditions such as light, temperature and C/N ratio was studied on growth of Haematococcus pluvialis and astaxanthin production. Light had significant effect on astaxanthin production and it varied with its intensity and direction of illumination and effective culture ratio (ECR, volume of culture medium/volume of flask). A 6-fold increase in astaxanthin production (37 mg/L) was achieved with 5.1468·107 erg·m−2·s−1 light intensity (high light, HL) at effective culture ratio of 0.13 compared to that at 0.52 ECR, while the difference in the astaxanthin production was less than 2 — fold between the effective culture ratios at 1.6175·107 erg·m−2·s−1 light intensity (low light, LL). Multidirectional (three-directional) light illumination considerably enhanced the astaxanthin production (4-fold) compared to unidirectional illumination. Cell count was high at low temperature (25 °C) while astaxanthin content was high at 35 °C in both autotrophic and heterotrophic media. In a heterotrophic medium at low C/N ratio H. pluvialis growth was higher with prolonged vegetative phase, while high C/N ratio favoured early encystment and higher astaxanthin formation.",
"title": ""
},
{
"docid": "5a85c72c5b9898b010f047ee99dba133",
"text": "A method to design arbitrary three-way power dividers with ultra-wideband performance is presented. The proposed devices utilize a broadside-coupled structure, which has three coupled layers. The method assumes general asymmetric coupled layers. The design approach exploits the three fundamental modes of propagation: even-even, odd-odd, and odd-even, and the conformal mapping technique to find the coupling factors between the different layers. The method is used to design 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 three-way power dividers. The designed devices feature a multilayer broadside-coupled microstrip-slot-microstrip configuration using elliptical-shaped structures. The developed power dividers have a compact size with an overall dimension of 20 mm 30 mm. The simulated and measured results of the manufactured devices show an insertion loss equal to the nominated value 1 dB. The return loss for the input/output ports of the devices is better than 17, 18, and 13 dB, whereas the isolation between the output ports is better than 17, 14, and 15 dB for the 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 dividers, respectively, across the 3.1-10.6-GHz band.",
"title": ""
},
{
"docid": "4645d0d7b1dfae80657f75d3751ef72a",
"text": "Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols, learning from weak labels, and interpretation and evaluation of results.",
"title": ""
},
{
"docid": "6e05c3e76e87317db05c43a1f564724a",
"text": "Data science or \"data-driven research\" is a research approach that uses real-life data to gain insight about the behavior of systems. It enables the analysis of small, simple as well as large and more complex systems in order to assess whether they function according to the intended design and as seen in simulation. Data science approaches have been successfully applied to analyze networked interactions in several research areas such as large-scale social networks, advanced business and healthcare processes. Wireless networks can exhibit unpredictable interactions between algorithms from multiple protocol layers, interactions between multiple devices, and hardware specific influences. These interactions can lead to a difference between real-world functioning and design time functioning. Data science methods can help to detect the actual behavior and possibly help to correct it. Data science is increasingly used in wireless research. To support data-driven research in wireless networks, this paper illustrates the step-by-step methodology that has to be applied to extract knowledge from raw data traces. To this end, the paper (i) clarifies when, why and how to use data science in wireless network research; (ii) provides a generic framework for applying data science in wireless networks; (iii) gives an overview of existing research papers that utilized data science approaches in wireless networks; (iv) illustrates the overall knowledge discovery process through an extensive example in which device types are identified based on their traffic patterns; (v) provides the reader the necessary datasets and scripts to go through the tutorial steps themselves.",
"title": ""
},
{
"docid": "9db779a5a77ac483bb1991060dca7c28",
"text": "An Ambient Intelligence (AmI) environment is primary developed using intelligent agents and wireless sensor networks. The intelligent agents could automatically obtain contextual information in real time using Near Field Communication (NFC) technique and wireless ad-hoc networks. In this research, we propose a stock trading and recommendation system with mobile devices (Android platform) interface in the over-the-counter market (OTC) environments. The proposed system could obtain the real-time financial information of stock price through a multi-agent architecture with plenty of useful features. In addition, NFC is used to achieve a context-aware environment allowing for automatic acquisition and transmission of useful trading recommendations and relevant stock information for investors. Finally, AmI techniques are applied to successfully create smart investment spaces, providing investors with useful monitoring tools and investment recommendation.",
"title": ""
},
{
"docid": "cbfdea54abb1e4c1234ca44ca6913220",
"text": "Seeds of chickpea (Cicer arietinum L.) were exposed in batches to static magnetic fields of strength from 0 to 250 mT in steps of 50 mT for 1-4 h in steps of 1 h for all fields. Results showed that magnetic field application enhanced seed performance in terms of laboratory germination, speed of germination, seedling length and seedling dry weight significantly compared to unexposed control. However, the response varied with field strength and duration of exposure without any particular trend. Among the various combinations of field strength and duration, 50 mT for 2 h, 100 mT for 1 h and 150 mT for 2 h exposures gave best results. Exposure of seeds to these three magnetic fields improved seed coat membrane integrity as it reduced the electrical conductivity of seed leachate. In soil, seeds exposed to these three treatments produced significantly increased seedling dry weights of 1-month-old plants. The root characteristics of the plants showed dramatic increase in root length, root surface area and root volume. The improved functional root parameters suggest that magnetically treated chickpea seeds may perform better under rainfed (un-irrigated) conditions where there is a restrictive soil moisture regime.",
"title": ""
},
{
"docid": "55b95e06bdf28ebd0b6a1e39875635e2",
"text": "As the security landscape evolves over time, where thousands of species of malicious codes are seen every day, antivirus vendors strive to detect and classify malware families for efficient and effective responses against malware campaigns. To enrich this effort and by capitalizing on ideas from the social network analysis domain, we build a tool that can help classify malware families using features driven from the graph structure of their system calls. To achieve that, we first construct a system call graph that consists of system calls found in the execution of the individual malware families. To explore distinguishing features of various malware species, we study social network properties as applied to the call graph, including the degree distribution, degree centrality, average distance, clustering coefficient, network density, and component ratio. We utilize features driven from those properties to build a classifier for malware families. Our experimental results show that “influence-based” graph metrics such as the degree centrality are effective for classifying malware, whereas the general structural metrics of malware are less effective for classifying malware. Our experiments demonstrate that the proposed system performs well in detecting and classifying malware families within each malware class with accuracy greater than 96%.",
"title": ""
},
{
"docid": "26f2b200bf22006ab54051c9288420e8",
"text": "Emotion keyword spotting approach can detect emotion well for explicit emotional contents while it obviously cannot compare to supervised learning approaches for detecting emotional contents of particular events. In this paper, we target earthquake situations in Japan as the particular events for emotion analysis because the affected people often show their states and emotions towards the situations via social networking sites. Additionally, tracking crowd emotions in the Internet during the earthquakes can help authorities to quickly decide appropriate assistance policies without paying the cost as the traditional public surveys. Our three main contributions in this paper are: a) the appropriate choice of emotions; b) the novel proposal of two classification methods for determining the earthquake related tweets and automatically identifying the emotions in Twitter; c) tracking crowd emotions during different earthquake situations, a completely new application of emotion analysis research. Our main analysis results show that Twitter users show their Fear and Anxiety right after the earthquakes occurred while Calm and Unpleasantness are not showed clearly during the small earthquakes but in the large tremor.",
"title": ""
},
{
"docid": "417eff5fd6251c70790d69e2b8dae255",
"text": "This paper is a report on the initial trial for its kind in the development of the performance index of the autonomous mobile cleaning robot. The unique characteristic features of the cleaning robot have been identified as autonomous mobility, dust collection, and operation noise. Along with the identification of the performance indices the standardized performance-evaluation methods including the corresponding performance evaluation platform for each indices have been developed as well. The validity of the proposed performance evaluation methods has been demonstrated by applying the proposed evaluation methods on two commercial cleaning robots available in market. The proposed performance evaluation methods can be applied to general-purpose autonomous service robots which will be introduced in the consumer market in near future.",
"title": ""
},
{
"docid": "0f9d6fcd53560c0c0433d64014f2aeb2",
"text": "The task of plagiarism detection entails two main steps, suspicious candidate retrieval and pairwise document similarity analysis also called detailed analysis. In this paper we focus on the second subtask. We will report our monolingual plagiarism detection system which is used to process the Persian plagiarism corpus for the task of pairwise document similarity. To retrieve plagiarised passages a plagiarism detection method based on vector space model, insensitive to context reordering, is presented. We evaluate the performance in terms of precision, recall, granularity and plagdet metrics.",
"title": ""
},
{
"docid": "fa851a3828bf6ebf371c49917bab3b4e",
"text": "Recent research has documented large di!erences among countries in ownership concentration in publicly traded \"rms, in the breadth and depth of capital markets, in dividend policies, and in the access of \"rms to external \"nance. A common element to the explanations of these di!erences is how well investors, both shareholders and creditors, are protected by law from expropriation by the managers and controlling shareholders of \"rms. We describe the di!erences in laws and the e!ectiveness of their enforcement across countries, discuss the possible origins of these di!erences, summarize their consequences, and assess potential strategies of corporate governance reform. We argue that the legal approach is a more fruitful way to understand corporate governance and its reform than the conventional distinction between bank-centered and market-centered \"nancial systems. ( 2000 Elsevier Science S.A. All rights reserved. JEL classixcation: G21; G28; G32",
"title": ""
},
{
"docid": "9655259173f749134723f98585a254c1",
"text": "With the rapid growth of streaming media applications, there has been a strong demand of Quality-of-Experience (QoE) measurement and QoE-driven video delivery technologies. While the new worldwide standard dynamic adaptive streaming over hypertext transfer protocol (DASH) provides an inter-operable solution to overcome the volatile network conditions, its complex characteristic brings new challenges to the objective video QoE measurement models. How streaming activities such as stalling and bitrate switching events affect QoE is still an open question, and is hardly taken into consideration in the traditionally QoE models. More importantly, with an increasing number of objective QoE models proposed, it is important to evaluate the performance of these algorithms in a comparative setting and analyze the strengths and weaknesses of these methods. In this study, we build two subject-rated streaming video databases. The progressive streaming video database is dedicated to investigate the human responses to the combined effect of video compression, initial buffering, and stalling. The adaptive streaming video database is designed to evaluate the performance of adaptive bitrate streaming algorithms and objective QoE models. We also provide useful insights on the improvement of adaptive bitrate streaming algorithms. Furthermore, we propose a novel QoE prediction approach to account for the instantaneous quality degradation due to perceptual video presentation impairment, the playback stalling events, and the instantaneous interactions between them. Twelve QoE algorithms from four categories including signal fidelity-based, network QoS-based, application QoSbased, and hybrid QoE models are assessed in terms of correlation with human perception",
"title": ""
}
] | scidocsrr |
7fbe1e066bf607663234d89602f0666e | A multi-case study on Industry 4.0 for SME's in Brandenburg, Germany | [
{
"docid": "1857eb0d2d592961bd7c1c2f226df616",
"text": "The increasing integration of the Internet of Everything into the industrial value chain has built the foundation for the next industrial revolution called Industrie 4.0. Although Industrie 4.0 is currently a top priority for many companies, research centers, and universities, a generally accepted understanding of the term does not exist. As a result, discussing the topic on an academic level is difficult, and so is implementing Industrie 4.0 scenarios. Based on a quantitative text analysis and a qualitative literature review, the paper identifies design principles of Industrie 4.0. Taking into account these principles, academics may be enabled to further investigate on the topic, while practitioners may find assistance in identifying appropriate scenarios. A case study illustrates how the identified design principles support practitioners in identifying Industrie 4.0 scenarios.",
"title": ""
}
] | [
{
"docid": "7ddc7a3fffc582f7eee1d0c29914ba1a",
"text": "Cyclic neutropenia is an uncommon hematologic disorder characterized by a marked decrease in the number of neutrophils in the peripheral blood occurring at regular intervals. The neutropenic phase is characteristically associated with clinical symptoms such as recurrent fever, malaise, headaches, anorexia, pharyngitis, ulcers of the oral mucous membrane, and gingival inflammation. This case report describes a Japanese girl who has this disease and suffers from periodontitis and oral ulceration. Her case has been followed up for the past 5 years from age 7 to 12. The importance of regular oral hygiene, careful removal of subgingival plaque and calculus, and periodic and thorough professional mechanical tooth cleaning was emphasized to arrest the progress of periodontal breakdown. Local antibiotic application with minocycline ointment in periodontal pockets was beneficial as an ancillary treatment, especially during neutropenic periods.",
"title": ""
},
{
"docid": "d94f4df63ac621d9a8dec1c22b720abb",
"text": "Automatically selecting an appropriate set of materialized views and indexes for SQL databases is a non-trivial task. A judicious choice must be cost-driven and influenced by the workload experienced by the system. Although there has been work in materialized view selection in the context of multidimensional (OLAP) databases, no past work has looked at the problem of building an industry-strength tool for automated selection of materialized views and indexes for SQL workloads. In this paper, we present an end-to-end solution to the problem of selecting materialized views and indexes. We describe results of extensive experimental evaluation that demonstrate the effectiveness of our techniques. Our solution is implemented as part of a tuning wizard that ships with Microsoft SQL Server 2000.",
"title": ""
},
{
"docid": "95bb07e57d9bd2b7e9a9a59c29806b66",
"text": "Breast cancer is one of the most common cancers and the second most responsible for cancer mortality worldwide. In 2014, in Portugal approximately 27,200 people died of cancer, of which 1,791 were women with breast cancer. Flaxseed has been one of the most studied foods, regarding possible relations to breast cancer, though mainly in experimental studies in animals, yet in few clinical trials. It is rich in omega-3 fatty acids, α-linolenic acid, lignan, and fibers. One of the main components of flaxseed is the lignans, of which 95% are made of the predominant secoisolariciresinol diglucoside (SDG). SDG is converted into enterolactone and enterodiol, both with antiestrogen activity and structurally similar to estrogen; they can bind to cell receptors, decreasing cell growth. Some studies have shown that the intake of omega-3 fatty acids is related to the reduction of breast cancer risk. In animal studies, α-linolenic acids have been shown to be able to suppress growth, size, and proliferation of cancer cells and also to promote breast cancer cell death. Other animal studies found that the intake of flaxseed combined with tamoxifen can reduce tumor size to a greater extent than taking tamoxifen alone. Additionally, some clinical trials showed that flaxseed can have an important role in decreasing breast cancer risk, mainly in postmenopausal women. Further studies are needed, specifically clinical trials that may demonstrate the potential benefits of flaxseed in breast cancer.",
"title": ""
},
{
"docid": "c12d27988e70e9b3e6987ca2f0ca8bca",
"text": "In this tutorial, we introduce the basic theory behind Stega nography and Steganalysis, and present some recent algorithms and devel opm nts of these fields. We show how the existing techniques used nowadays are relate d to Image Processing and Computer Vision, point out several trendy applicati ons of Steganography and Steganalysis, and list a few great research opportunities j ust waiting to be addressed.",
"title": ""
},
{
"docid": "ea596b23af4b34fdb6a9986a03730d99",
"text": "In the past few years, recommender systems and semantic web technologies have become main subjects of interest in the research community. In this paper, we present a domain independent semantic similarity measure that can be used in the recommendation process. This semantic similarity is based on the relations between the individuals of an ontology. The assessment can be done offline which allows time to be saved and then, get real-time recommendations. The measure has been experimented on two different domains: movies and research papers. Moreover, the generated recommendations by the semantic similarity have been evaluated by a set of volunteers and the results have been promising.",
"title": ""
},
{
"docid": "0a981597279b2fb1792b5d1a00f0c9ec",
"text": "With billions of people using smartphones and the exponential growth of smartphone apps, it is prohibitive for app marketplaces, such as Google App Store, to thoroughly verify if an app is legitimate or malicious. As a result, mobile users are left to decide for themselves whether an app is safe to use. Even worse, recent studies have shown that over 70% of apps in markets request to collect data irrelevant to the main functions of the apps, which could cause leaking of private information or inefficient use of mobile resources. It is worth mentioning that since resource management mechanism of mobile devices is different from PC machines, existing security solutions in PC malware area are not quite compatible with mobile devices. Therefore, academic researchers and commercial anti-malware companies have proposed many security mechanisms to address the security issues of the Android devices. Considering the mechanisms and techniques which are different in nature and used in proposed works, they can be classified into different categories. In this survey, we discuss the existing Android security threats and existing security enforcements solutions between 2010−2015 and try to classify works and review their functionalities. We review a few works of each class. The survey also reviews the strength and weak points of the solutions.",
"title": ""
},
{
"docid": "5bfc5768cf41643a870e3f3dddbbd741",
"text": "Homomorphic encryption has progressed rapidly in both efficiency and versatility since its emergence in 2009. Meanwhile, a multitude of pressing privacy needs — ranging from cloud computing to healthcare management to the handling of shared databases such as those containing genomics data — call for immediate solutions that apply fully homomorpic encryption (FHE) and somewhat homomorphic encryption (SHE) technologies. Further progress towards these ends requires new ideas for the efficient implementation of algebraic operations on word-based (as opposed to bit-wise) encrypted data. Whereas handling data encrypted at the bit level leads to prohibitively slow algorithms for the arithmetic operations that are essential for cloud computing, the word-based approach hits its bottleneck when operations such as integer comparison are needed. In this work, we tackle this challenging problem, proposing solutions to problems — including comparison and division — in word-based encryption via a leveled FHE scheme. We present concrete performance figures for all proposed primitives.",
"title": ""
},
{
"docid": "ec5095df6250a8f6cdf088f730dfbd5e",
"text": "Canine atopic dermatitis (CAD) is a multifaceted disease associated with exposure to various offending agents such as environmental and food allergens. The diagnosis of this condition is difficult because none of the typical signs are pathognomonic. Sets of criteria have been proposed but are mainly used to include dogs in clinical studies. The goals of the present study were to characterize the clinical features and signs of a large population of dogs with CAD, to identify which of these characteristics could be different in food-induced atopic dermatitis (FIAD) and non-food-induced atopic dermatitis (NFIAD) and to develop criteria for the diagnosis of this condition. Using simulated annealing, selected criteria were tested on a large and geographically widespread population of pruritic dogs. The study first described the signalment, history and clinical features of a large population of CAD dogs, compared FIAD and NFIAD dogs and confirmed that both conditions are clinically indistinguishable. Correlations of numerous clinical features with the diagnosis of CAD are subsequently calculated, and two sets of criteria associated with sensitivity and specificity ranging from 80% to 85% and from 79% to 85%, respectively, are proposed. It is finally demonstrated that these new sets of criteria provide better sensitivity and specificity, when compared to Willemse and Prélaud criteria. These criteria can be applied to both FIAD and NFIAD dogs.",
"title": ""
},
{
"docid": "31add593ce5597c24666d9662b3db89d",
"text": "Estimating the body shape and posture of a dressed human subject in motion represented as a sequence of (possibly incomplete) 3D meshes is important for virtual change rooms and security. To solve this problem, statistical shape spaces encoding human body shape and posture variations are commonly used to constrain the search space for the shape estimate. In this work, we propose a novel method that uses a posture-invariant shape space to model body shape variation combined with a skeleton-based deformation to model posture variation. Our method can estimate the body shape and posture of both static scans and motion sequences of dressed human body scans. In case of motion sequences, our method takes advantage of motion cues to solve for a single body shape estimate along with a sequence of posture estimates. We apply our approach to both static scans and motion sequences and demonstrate that using our method, higher fitting accuracy is achieved than when using a variant of the popular SCAPE model [2, 18] as statistical model.",
"title": ""
},
{
"docid": "ef6160d304908ea87287f2071dea5f6d",
"text": "The diffusion of fake images and videos on social networks is a fast growing problem. Commercial media editing tools allow anyone to remove, add, or clone people and objects, to generate fake images. Many techniques have been proposed to detect such conventional fakes, but new attacks emerge by the day. Image-to-image translation, based on generative adversarial networks (GANs), appears as one of the most dangerous, as it allows one to modify context and semantics of images in a very realistic way. In this paper, we study the performance of several image forgery detectors against image-to-image translation, both in ideal conditions, and in the presence of compression, routinely performed upon uploading on social networks. The study, carried out on a dataset of 36302 images, shows that detection accuracies up to 95% can be achieved by both conventional and deep learning detectors, but only the latter keep providing a high accuracy, up to 89%, on compressed data.",
"title": ""
},
{
"docid": "eb6675c6a37aa6839fa16fe5d5220cfb",
"text": "In this paper, we propose an efficient method to detect the underlying structures in data. The same as RANSAC, we randomly sample MSSs (minimal size samples) and generate hypotheses. Instead of analyzing each hypothesis separately, the consensus information in all hypotheses is naturally fused into a hypergraph, called random consensus graph, with real structures corresponding to its dense subgraphs. The sampling process is essentially a progressive refinement procedure of the random consensus graph. Due to the huge number of hyperedges, it is generally inefficient to detect dense subgraphs on random consensus graphs. To overcome this issue, we construct a pairwise graph which approximately retains the dense subgraphs of the random consensus graph. The underlying structures are then revealed by detecting the dense subgraphs of the pair-wise graph. Since our method fuses information from all hypotheses, it can robustly detect structures even under a small number of MSSs. The graph framework enables our method to simultaneously discover multiple structures. Besides, our method is very efficient, and scales well for large scale problems. Extensive experiments illustrate the superiority of our proposed method over previous approaches, achieving several orders of magnitude speedup along with satisfactory accuracy and robustness.",
"title": ""
},
{
"docid": "bd1a13c94d0e12b4ba9f14fef47d2564",
"text": "Denoising is the problem of removing the inherent noise from an image. The standard noise model is additive white Gaussian noise, where the observed image f is related to the underlying true image u by the degradation model f = u+ η, and η is supposed to be at each pixel independently and identically distributed as a zero-mean Gaussian random variable. Since this is an ill-posed problem, Rudin, Osher and Fatemi introduced the total variation as a regularizing term. It has proved to be quite efficient for regularizing images without smoothing the boundaries of the objects. This paper focuses on the simple description of the theory and on the implementation of Chambolle’s projection algorithm for minimizing the total variation of a grayscale image. Furthermore, we adapt the algorithm to the vectorial total variation for color images. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation. Source Code ANSI C source code to produce the same results as the demo is accessible at the IPOL web page of this article1.",
"title": ""
},
{
"docid": "8c46f24d8e710c5fb4e25be76fc5b060",
"text": "This paper presents the novel design of a wideband circularly polarized (CP) Radio Frequency Identification (RFID) reader microstrip patch antenna for worldwide Ultra High Frequency (UHF) band which covers 840–960 MHz. The proposed antenna, which consists of a microstrip patch with truncated corners and a cross slot, is placed on a foam substrate (εr = 1.06) above a ground plane and is fed through vias through ground plane holes that extend from the quadrature 3 dB branch line hybrid coupler placed below the ground plane. This helps to separate feed network radiation, from the patch antenna and keeping the CP purity. The prototype antenna was fabricated with a total size of 225 × 250 × 12.8 mm3 which shows a measured impedance matching band of 840–1150MHz (31.2%) as well as measured rotating linear based circularly polarized radiation patterns. The simulated and measured 3 dB Axial Ratio (AR) bandwidth is better than 23% from 840–1050 MHz meeting and exceeding the target worldwide RFID UHF band.",
"title": ""
},
{
"docid": "701be9375bb7c019710f7887a0074d15",
"text": "A blockchain powered health information exchange (HIE) can unlock the true value of interoperability and cyber security. This system has the potential to eliminate the friction and costs of current third party intermediaries, when considering population health management. There are promises of improved data integrity, reduced transaction costs, decentralization and disintermediation of trust. Being able to coordinate patient care via a blockchain HIE essentially alleviates unnecessary services and duplicate tests with lowering costs and improvements in efficiencies of the continuum care cycle, while adhering to all HIPAA rules and standards. A patient-centered protocol supported by blockchain technology, Patientory is changing the way healthcare stakeholders manage electronic medical data and interact with clinical care teams.",
"title": ""
},
{
"docid": "3647b5e0185c0120500fff8061265abd",
"text": "Human and machine visual sensing is enhanced when surface properties of objects in scenes, including color, can be reliably estimated despite changes in the ambient lighting conditions. We describe a computational method for estimating surface spectral reflectance when the spectral power distribution of the ambient light is not known.",
"title": ""
},
{
"docid": "dc42ffc3d9a5833f285bac114e8a8b37",
"text": "In this paper, we present a recursive algorithm for extracting classification rules from feedforward neural networks (NNs) that have been trained on data sets having both discrete and continuous attributes. The novelty of this algorithm lies in the conditions of the extracted rules: the rule conditions involving discrete attributes are disjoint from those involving continuous attributes. The algorithm starts by first generating rules with discrete attributes only to explain the classification process of the NN. If the accuracy of a rule with only discrete attributes is not satisfactory, the algorithm refines this rule by recursively generating more rules with discrete attributes not already present in the rule condition, or by generating a hyperplane involving only the continuous attributes. We show that for three real-life credit scoring data sets, the algorithm generates rules that are not only more accurate but also more comprehensible than those generated by other NN rule extraction methods.",
"title": ""
},
{
"docid": "062839e72c6bdc6c6bf2ba1d1041d07b",
"text": "Students’ increasing use of text messaging language has prompted concern that textisms (e.g., 2 for to, dont for don’t, ☺) will intrude into their formal written work. Eighty-six Australian and 150 Canadian undergraduates were asked to rate the appropriateness of textism use in various situations. Students distinguished between the appropriateness of using textisms in different writing modalities and to different recipients, rating textism use as inappropriate in formal exams and assignments, but appropriate in text messages, online chat and emails with friends and siblings. In a second study, we checked the examination papers of a separate sample of 153 Australian undergraduates for the presence of textisms. Only a negligible number were found. We conclude that, overall, university students recognise the different requirements of different recipients and modalities when considering textism use and that students are able to avoid textism use in exams despite media reports to the contrary.",
"title": ""
},
{
"docid": "a458f16b84f40dc0906658a93d4b2efa",
"text": "We investigated the usefulness of Sonazoid contrast-enhanced ultrasonography (Sonazoid-CEUS) in the diagnosis of hepatocellular carcinoma (HCC). The examination was performed by comparing the images during the Kupffer phase of Sonazoid-CEUS with superparamagnetic iron oxide magnetic resonance (SPIO-MRI). The subjects were 48 HCC nodules which were histologically diagnosed (well-differentiated HCC, n = 13; moderately differentiated HCC, n = 30; poorly differentiated HCC, n = 5). We performed Sonazoid-CEUS and SPIO-MRI on all subjects. In the Kupffer phase of Sonazoid-CEUS, the differences in the contrast agent uptake between the tumorous and non-tumorous areas were quantified as the Kupffer phase ratio and compared. In the SPIO-MRI, it was quantified as the SPIO-intensity index. We then compared these results with the histological differentiation of HCCs. The Kupffer phase ratio decreased as the HCCs became less differentiated (P < 0.0001; Kruskal–Wallis test). The SPIO-intensity index also decreased as HCCs became less differentiated (P < 0.0001). A positive correlation was found between the Kupffer phase ratio and the SPIO-MRI index (r = 0.839). In the Kupffer phase of Sonazoid-CEUS, all of the moderately and poorly differentiated HCCs appeared hypoechoic and were detected as a perfusion defect, whereas the majority (9 of 13 cases, 69.2%) of the well-differentiated HCCs had an isoechoic pattern. The Kupffer phase images of Sonazoid-CEUS and SPIO-MRI matched perfectly (100%) in all of the moderately and poorly differentiated HCCs. Sonazoid-CEUS is useful for estimating histological grading of HCCs. It is a modality that could potentially replace SPIO-MRI.",
"title": ""
},
{
"docid": "9f1441bc10d7b0234a3736ce83d5c14b",
"text": "Conservation of genetic diversity, one of the three main forms of biodiversity, is a fundamental concern in conservation biology as it provides the raw material for evolutionary change and thus the potential to adapt to changing environments. By means of meta-analyses, we tested the generality of the hypotheses that habitat fragmentation affects genetic diversity of plant populations and that certain life history and ecological traits of plants can determine differential susceptibility to genetic erosion in fragmented habitats. Additionally, we assessed whether certain methodological approaches used by authors influence the ability to detect fragmentation effects on plant genetic diversity. We found overall large and negative effects of fragmentation on genetic diversity and outcrossing rates but no effects on inbreeding coefficients. Significant increases in inbreeding coefficient in fragmented habitats were only observed in studies analyzing progenies. The mating system and the rarity status of plants explained the highest proportion of variation in the effect sizes among species. The age of the fragment was also decisive in explaining variability among effect sizes: the larger the number of generations elapsed in fragmentation conditions, the larger the negative magnitude of effect sizes on heterozygosity. Our results also suggest that fragmentation is shifting mating patterns towards increased selfing. We conclude that current conservation efforts in fragmented habitats should be focused on common or recently rare species and mainly outcrossing species and outline important issues that need to be addressed in future research on this area.",
"title": ""
}
] | scidocsrr |
3cb19df8a8927abec692de0d2f258b47 | IoT Security Techniques Based on Machine Learning: How Do IoT Devices Use AI to Enhance Security? | [
{
"docid": "c2571afd6f2b9e9856c8f8c4eeb60b81",
"text": "In the Internet of Things, services can be provisioned using centralized architectures, where central entities acquire, process, and provide information. Alternatively, distributed architectures, where entities at the edge of the network exchange information and collaborate with each other in a dynamic way, can also be used. In order to understand the applicability and viability of this distributed approach, it is necessary to know its advantages and disadvantages – not only in terms of features but also in terms of security and privacy challenges. The purpose of this paper is to show that the distributed approach has various challenges that need to be solved, but also various interesting properties and strengths.",
"title": ""
},
{
"docid": "efe74721de3eda130957ce26435375a3",
"text": "Internet of Things (IoT) has been given a lot of emphasis since the 90s when it was first proposed as an idea of interconnecting different electronic devices through a variety of technologies. However, during the past decade IoT has rapidly been developed without appropriate consideration of the profound security goals and challenges involved. This study explores the security aims and goals of IoT and then provides a new classification of different types of attacks and countermeasures on security and privacy. It then discusses future security directions and challenges that need to be addressed to improve security concerns over such networks and aid in the wider adoption of IoT by masses.",
"title": ""
},
{
"docid": "9bafd07082066235a6b99f00e360b0d2",
"text": "Mobile devices have become a significant part of people’s lives, leading to an increasing number of users involved with such technology. The rising number of users invites hackers to generate malicious applications. Besides, the security of sensitive data available on mobile devices is taken lightly. Relying on currently developed approaches is not sufficient, given that intelligent malware keeps modifying rapidly and as a result becomes more difficult to detect. In this paper, we propose an alternative solution to evaluating malware detection using the anomaly-based approach with machine learning classifiers. Among the various network traffic features, the four categories selected are basic information, content based, time based and connection based. The evaluation utilizes two datasets: public (i.e. MalGenome) and private (i.e. self-collected). Based on the evaluation results, both the Bayes network and random forest classifiers produced more accurate readings, with a 99.97 % true-positive rate (TPR) as opposed to the multi-layer perceptron with only 93.03 % on the MalGenome dataset. However, this experiment revealed that the k-nearest neighbor classifier efficiently detected the latest Android malware with an 84.57 % truepositive rate higher than other classifiers. Communicated by V. Loia. F. A. Narudin · A. Gani Mobile Cloud Computing (MCC), University of Malaya, 50603 Kuala Lumpur, Malaysia A. Feizollah (B) · N. B. Anuar Security Research Group (SECReg), Faculty of Computer Science and Information Technology, University of Malaya, 50603 Kuala Lumpur, Malaysia e-mail: ali.feizollah@siswa.um.edu.my",
"title": ""
}
] | [
{
"docid": "048975c29cd23b08f414861d9804e900",
"text": "Diversity is a defining characteristic of global collectives facilitated by the Internet. Though substantial evidence suggests that diversity has profound implications for a variety of outcomes including performance, member engagement, and withdrawal behavior, the effects of diversity have been predominantly investigated in the context of organizational workgroups or virtual teams. We use a diversity lens to study the success of non-traditional virtual work groups exemplified by open source software (OSS) projects. Building on the diversity literature, we propose that three types of diversity (separation, variety and disparity) influence two critical outcomes for OSS projects: community engagement and market success. We draw on the OSS literature to further suggest that the effects of diversity on market success are moderated by the application development stage. We instantiate the operational definitions of three forms of diversity to the unique context of open source projects. Using archival data from 357 projects hosted on SourceForge, we find that disparity diversity, reflecting variation in participants' contribution-based reputation, is positively associated with success. The impact of separation diversity, conceptualized as culture and measured as diversity in the spoken language and country of participants, has a negative impact on community engagement but an unexpected positive effect on market success. Variety diversity, reflected in dispersion in project participant roles, positively influences community engagement and market success. The impact of diversity on market success is conditional on the development stage of the project. We discuss how the study's findings advance the literature on antecedents of OSS success, expand our theoretical understanding of diversity, and present the practical implications of the results for managers of distributed collectives.",
"title": ""
},
{
"docid": "02936143b0da0a789fc1c645e30c7e50",
"text": "We describe a robust accurate domain-independent approach t statistical parsing incorporated into the new release of the ANLT toolkit, and publicly available as a research tool. The system has bee n used to parse many well known corpora in order to produce dat a for lexical acquisition efforts; it has also been used as a component in a open-domain question answering project. The performance of the system is competitive with that of statistical parsers using highl y lexicalised parse selection models. However, we plan to ex end the system to improve parse coverage, depth and accuracy.",
"title": ""
},
{
"docid": "653ca5c9478b1b1487fc24eeea8c1677",
"text": "A fundamental question in information theory and in computer science is how to measure similarity or the amount of shared information between two sequences. We have proposed a metric, based on Kolmogorov complexity, to answer this question and have proven it to be universal. We apply this metric in measuring the amount of shared information between two computer programs, to enable plagiarism detection. We have designed and implemented a practical system SID (Software Integrity Diagnosis system) that approximates this metric by a heuristic compression algorithm. Experimental results demonstrate that SID has clear advantages over other plagiarism detection systems. SID system server is online at http://software.bioinformatics.uwaterloo.ca/SID/.",
"title": ""
},
{
"docid": "874cff80953c4a1e929134ce59cb1fee",
"text": "Automatically detecting controversy on the Web is a useful capability for a search engine to help users review web content with a more balanced and critical view. The current state-of-the art approach is to find K-Nearest-Neighbors in Wikipedia to the document query, and to aggregate their controversy scores that are automatically computed from the Wikipedia edit-history features. In this paper, we discover two major weakness in the prior work and propose modifications. First, the generated single query from document to find KNN Wikipages easily becomes ambiguous. Thus, we propose to generate multiple queries from smaller but more topically coherent paragraph of the document. Second, the automatically computed controversy scores of Wikipedia articles that depend on \"edit war\" features have a drawback that without an edit history, there can be no edit wars. To infer more reliable controversy scores for articles with little edit history, we smooth the original score from the scores of the neighbors with more established edit history. We show that the modified framework is improved by up to 5% for binary controversy classification in a publicly available dataset.",
"title": ""
},
{
"docid": "ab1c7ede012bd20f30bab66fcaec49fa",
"text": "Visual-inertial navigation systems (VINS) have prevailed in various applications, in part because of the complementary sensing capabilities and decreasing costs as well as sizes. While many of the current VINS algorithms undergo inconsistent estimation, in this paper we introduce a new extended Kalman filter (EKF)-based approach towards consistent estimates. To this end, we impose both state-transition and obervability constraints in computing EKF Jacobians so that the resulting linearized system can best approximate the underlying nonlinear system. Specifically, we enforce the propagation Jacobian to obey the semigroup property, thus being an appropriate state-transition matrix. This is achieved by parametrizing the orientation error state in the global, instead of local, frame of reference, and then evaluating the Jacobian at the propagated, instead of the updated, state estimates. Moreover, the EKF linearized system ensures correct observability by projecting the most-accurate measurement Jacobian onto the observable subspace so that no spurious information is gained. The proposed algorithm is validated by both Monte-Carlo simulation and real-world experimental tests.",
"title": ""
},
{
"docid": "b98c34a4be7f86fb9506a6b1620b5d3e",
"text": "A portable civilian GPS spoofer is implemented on a digital signal processor and used to characterize spoofing effects and develop defenses against civilian spoofing. This work is intended to equip GNSS users and receiver manufacturers with authentication methods that are effective against unsophisticated spoofing attacks. The work also serves to refine the civilian spoofing threat assessment by demonstrating the challenges involved in mounting a spoofing attack.",
"title": ""
},
{
"docid": "1fe202e68aa2196f8e739173fa94b657",
"text": "Efficient formulations for the dynamics of continuum robots are necessary to enable accurate modeling of the robot's shape during operation. Previous work in continuum robotics has focused on low-fidelity lumped parameter models, in which actuated segments are modeled as circular arcs, or computationally intensive high-fidelity distributed parameter models, in which continuum robots are modeled as a parameterized spatial curve. In this paper, a novel dynamic modeling methodology is studied that captures curvature variations along a segment using a finite set of kinematic variables. This dynamic model is implemented using the principle of virtual power (also called Kane's method) for a continuum robot. The model is derived to account for inertial, actuation, friction, elastic, and gravitational effects. The model is inherently adaptable for including any type of external force or moment, including dissipative effects and external loading. Three case studies are simulated on a cable-driven continuum robot structure to study the dynamic properties of the numerical model. Cross validation is performed in comparison to both experimental results and finite-element analysis.",
"title": ""
},
{
"docid": "22ef70869ce47993bbdf24b18b6988f5",
"text": "Recent results suggest that it is possible to grasp a variety of singulated objects with high precision using Convolutional Neural Networks (CNNs) trained on synthetic data. This paper considers the task of bin picking, where multiple objects are randomly arranged in a heap and the objective is to sequentially grasp and transport each into a packing box. We model bin picking with a discrete-time Partially Observable Markov Decision Process that specifies states of the heap, point cloud observations, and rewards. We collect synthetic demonstrations of bin picking from an algorithmic supervisor uses full state information to optimize for the most robust collision-free grasp in a forward simulator based on pybullet to model dynamic object-object interactions and robust wrench space analysis from the Dexterity Network (Dex-Net) to model quasi-static contact between the gripper and object. We learn a policy by fine-tuning a Grasp Quality CNN on Dex-Net 2.1 to classify the supervisor’s actions from a dataset of 10,000 rollouts of the supervisor in the simulator with noise injection. In 2,192 physical trials of bin picking with an ABB YuMi on a dataset of 50 novel objects, we find that the resulting policies can achieve 94% success rate and 96% average precision (very few false positives) on heaps of 5-10 objects and can clear heaps of 10 objects in under three minutes. Datasets, experiments, and supplemental material are available at http://berkeleyautomation.github.io/dex-net.",
"title": ""
},
{
"docid": "3c667426c8dcea8e7813e9eef23a1e15",
"text": "Radio spectrum has become a precious resource, and it has long been the dream of wireless communication engineers to maximize the utilization of the radio spectrum. Dynamic Spectrum Access (DSA) and Cognitive Radio (CR) have been considered promising to enhance the efficiency and utilization of the spectrum. In current overlay cognitive radio, spectrum sensing is first performed to detect the spectrum holes for the secondary user to harness. However, in a more sophisticated cognitive radio, the secondary user needs to detect more than just the existence of primary users and spectrum holes. For example, in a hybrid overlay/underlay cognitive radio, the secondary use needs to detect the transmission power and localization of the primary users as well. In this paper, we combine the spectrum sensing and primary user power/localization detection together, and propose to jointly detect not only the existence of primary users but the power and localization of them via compressed sensing. Simulation results including the miss detection probability (MDP), false alarm probability (FAP) and reconstruction probability (RP) confirm the effectiveness and robustness of the proposed method.",
"title": ""
},
{
"docid": "03f98b18392bd178ea68ce19b13589fa",
"text": "Neural network techniques are widely used in network embedding, boosting the result of node classification, link prediction, visualization and other tasks in both aspects of efficiency and quality. All the state of art algorithms put effort on the neighborhood information and try to make full use of it. However, it is hard to recognize core periphery structures simply based on neighborhood. In this paper, we first discuss the influence brought by random-walk based sampling strategies to the embedding results. Theoretical and experimental evidences show that random-walk based sampling strategies fail to fully capture structural equivalence. We present a new method, SNS, that performs network embeddings using structural information (namely graphlets) to enhance its quality. SNS effectively utilizes both neighbor information and local-subgraphs similarity to learn node embeddings. This is the first framework that combines these two aspects as far as we know, positively merging two important areas in graph mining and machine learning. Moreover, we investigate what kinds of local-subgraph features matter the most on the node classification task, which enables us to further improve the embedding quality. Experiments show that our algorithm outperforms other unsupervised and semi-supervised neural network embedding algorithms on several real-world datasets.",
"title": ""
},
{
"docid": "7fa1ebea0989f7a6b8c0396bce54a54d",
"text": "Linear Discriminant Analysis (LDA) is a very common technique for dimensionality reduction problems as a preprocessing step for machine learning and pattern classification applications. At the same time, it is usually used as a black box, but (sometimes) not well understood. The aim of this paper is to build a solid intuition for what is LDA, and how LDA works, thus enabling readers of all levels be able to get a better understanding of the LDA and to know how to apply this technique in different applications. The paper first gave the basic definitions and steps of how LDA technique works supported with visual explanations of these steps. Moreover, the two methods of computing the LDA space, i.e. class-dependent and class-independent methods, were explained in details. Then, in a step-by-step approach, two numerical examples are demonstrated to show how the LDA space can be calculated in case of the class-dependent and class-independent methods. Furthermore, two of the most common LDA problems (i.e. Small Sample Size (SSS) and non-linearity problems) were highlighted and illustrated, and stateof-the-art solutions to these problems were investigated and explained. Finally, a number of experiments was conducted with different datasets to (1) investigate the effect of the eigenvectors that used in the LDA space on the robustness of the extracted feature for the classification accuracy, and (2) to show when the SSS problem occurs and how it can be addressed.",
"title": ""
},
{
"docid": "021243b584395d190e191e0713fe4a5c",
"text": "Convolutional neural networks (CNNs) have achieved remarkable performance in a wide range of computer vision tasks, typically at the cost of massive computational complexity. The low speed of these networks may hinder real-time applications especially when computational resources are limited. In this paper, an efficient and effective approach is proposed to accelerate the test-phase computation of CNNs based on low-rank and group sparse tensor decomposition. Specifically, for each convolutional layer, the kernel tensor is decomposed into the sum of a small number of low multilinear rank tensors. Then we replace the original kernel tensors in all layers with the approximate tensors and fine-tune the whole net with respect to the final classification task using standard backpropagation. \\\\ Comprehensive experiments on ILSVRC-12 demonstrate significant reduction in computational complexity, at the cost of negligible loss in accuracy. For the widely used VGG-16 model, our approach obtains a 6.6$\\times$ speed-up on PC and 5.91$\\times$ speed-up on mobile device of the whole network with less than 1\\% increase on top-5 error.",
"title": ""
},
{
"docid": "74e40c5cb4e980149906495da850d376",
"text": "Universal schema predicts the types of entities and relations in a knowledge base (KB) by jointly embedding the union of all available schema types—not only types from multiple structured databases (such as Freebase or Wikipedia infoboxes), but also types expressed as textual patterns from raw text. This prediction is typically modeled as a matrix completion problem, with one type per column, and either one or two entities per row (in the case of entity types or binary relation types, respectively). Factorizing this sparsely observed matrix yields a learned vector embedding for each row and each column. In this paper we explore the problem of making predictions for entities or entity-pairs unseen at training time (and hence without a pre-learned row embedding). We propose an approach having no per-row parameters at all; rather we produce a row vector on the fly using a learned aggregation function of the vectors of the observed columns for that row. We experiment with various aggregation functions, including neural network attention models. Our approach can be understood as a natural language database, in that questions about KB entities are answered by attending to textual or database evidence. In experiments predicting both relations and entity types, we demonstrate that despite having an order of magnitude fewer parameters than traditional universal schema, we can match the accuracy of the traditional model, and more importantly, we can now make predictions about unseen rows with nearly the same accuracy as rows available at training time.",
"title": ""
},
{
"docid": "ab1b9e358d10fc091e8c7eedf4674a8a",
"text": "An effective and efficient defect inspection system for TFT-LCD polarised films using adaptive thresholds and shape-based image analyses Chung-Ho Noha; Seok-Lyong Leea; Deok-Hwan Kimb; Chin-Wan Chungc; Sang-Hee Kimd a School of Industrial and Management Engineering, Hankuk University of Foreign Studies, Yonginshi, Korea b School of Electronics Engineering, Inha University, Yonghyun-dong, Incheon-shi, Korea c Division of Computer Science, KAIST, Daejeon-shi, Korea d Key Technology Research Center, Agency for Defense Development, Daejeon-shi, Korea",
"title": ""
},
{
"docid": "6cdab4de3682ef027c9daf22a05438e1",
"text": "This paper proposes a new method that combines the intensity and motion information to detect scene changes such as abrupt scene changes and gradual scene changes. Two major features are chosen as the basic dissimilarity measures, and selfand cross-validation mechanisms are employed via a static scene test. We also develop a novel intensity statistics model for detecting gradualscenechanges.Experimental resultsshowthat theproposed algorithms are effective and outperform the previous approaches.",
"title": ""
},
{
"docid": "e5bad6942b0afa06f3a87e3c9347bf13",
"text": "We present a monocular 3D reconstruction algorithm for inextensible deformable surfaces. It uses point correspondences between a single image of the deformed surface taken by a camera with known intrinsic parameters and a template. The main assumption we make is that the surface shape as seen in the template is known. Since the surface is inextensible, its deformations are isometric to the template. We exploit the distance preservation constraints to recover the 3D surface shape as seen in the image. Though the distance preservation constraints have already been investigated in the literature, we propose a new way to handle them. Spatial smoothness priors are easily incorporated, as well as temporal smoothness priors in the case of reconstruction from a video. The reconstruction can be used for 3D augmented reality purposes thanks to a fast implementation. We report results on synthetic and real data. Some of them are compared to stereo-based 3D reconstructions to demonstrate the efficiency of our method.",
"title": ""
},
{
"docid": "ae508747717b9e8e149b5f91bb454c96",
"text": "Social robots are robots that help people as capable partners rather than as tools, are believed to be of greatest use for applications in entertainment, education, and healthcare because of their potential to be perceived as trusting, helpful, reliable, and engaging. This paper explores how the robot's physical presence influences a person's perception of these characteristics. The first study reported here demonstrates the differences between a robot and an animated character in terms a person's engagement and perceptions of the robot and character. The second study shows that this difference is a result of the physical presence of the robot and that a person's reactions would be similar even if the robot is not physically collocated. Implications to the design of socially communicative and interactive robots are discussed.",
"title": ""
},
{
"docid": "e40eb32613ed3077177d61ac14e82413",
"text": "Preamble. Billions of people are using cell phone devices on the planet, essentially in poor posture. The purpose of this study is to assess the forces incrementally seen by the cervical spine as the head is tilted forward, into worsening posture. This data is also necessary for cervical spine surgeons to understand in the reconstruction of the neck.",
"title": ""
},
{
"docid": "7a8a98b91680cbc63594cd898c3052c8",
"text": "Policy-based access control is a technology that achieves separation of concerns through evaluating an externalized policy at each access attempt. While this approach has been well-established for request-response applications, it is not supported for database queries of data-driven applications, especially for attribute-based policies. In particular, search operations for such applications involve poor scalability with regard to the data set size for this approach, because they are influenced by dynamic runtime conditions. This paper proposes a scalable application-level middleware solution that performs runtime injection of the appropriate rules into the original search query, so that the result set of the search includes only items to which the subject is entitled. Our evaluation shows that our method scales far better than current state of practice approach that supports policy-based access control.",
"title": ""
},
{
"docid": "7332ba6aff8c966d76b1c8f451a02ccf",
"text": "A light-emitting diode (LED) driver compatible with fluorescent lamp (FL) ballasts is presented for a lamp-only replacement without rewiring the existing lamp fixture. Ballasts have a common function to regulate the lamp current, despite widely different circuit topologies. In this paper, magnetic and electronic ballasts are modeled as nonideal current sources and a current-sourced boost converter, which is derived from the duality, is adopted for the power conversion from ballasts. A rectifier circuit with capacitor filaments is proposed to interface the converter with the four-wire output of the ballast. A digital controller emulates the high-voltage discharge of the FL and operates adaptively with various ballasts. A prototype 20-W LED driver for retrofitting T8 36-W FL is evaluated with both magnetic and electronic ballasts. In addition to wide compatibility, accurate regulation of the LED current within 0.6% error and high driver efficiency over 89.7% are obtained.",
"title": ""
}
] | scidocsrr |
e93528333487ee373bd5e04bd8f0ff6b | Automatically Mapping and Integrating Multiple Data Entry Forms into a Database | [
{
"docid": "faf4f549186bffc799ce545bbc3d320e",
"text": "In many applications it is important to find a meaningful relationship between the schemas of a source and target database. This relationship is expressed in terms of declarative logical expressions called schema mappings. The more successful previous solutions have relied on inputs such as simple element correspondences between schemas in addition to local schema constraints such as keys and referential integrity. In this paper, we investigate the use of an alternate source of information about schemas, namely the presumed presence of semantics for each table, expressed in terms of a conceptual model (CM) associated with it. Our approach first compiles each CM into a graph and represents each table's semantics as a subtree in it. We then develop algorithms for discovering subgraphs that are plausible connections between those concepts/nodes in the CM graph that have attributes participating in element correspondences. A conceptual mapping candidate is now a pair of source and target subgraphs which are semantically similar. At the end, these are converted to expressions at the database level. We offer experimental results demonstrating that, for test cases of non-trivial mapping expressions involving schemas from a number of domains, the \"semantic\" approach outperforms the traditional technique in terms of recall and especially precision.",
"title": ""
}
] | [
{
"docid": "14551a9e92dc9ce47e2f80a8fc4dd741",
"text": "We model a simple genetic algorithm as a Markov chain. Our method is both complete (selection, mutation, and crossover are incorporated into an explicitly given transition matrix) and exact; no special assumptions are made which restrict populations or population trajectories. We also consider the asymptotics of the steady state distributions as population size increases.",
"title": ""
},
{
"docid": "a49ea9c9f03aa2d926faa49f4df63b7a",
"text": "Deep stacked RNNs are usually hard to train. Recent studies have shown that shortcut connections across different RNN layers bring substantially faster convergence. However, shortcuts increase the computational complexity of the recurrent computations. To reduce the complexity, we propose the shortcut block, which is a refinement of the shortcut LSTM blocks. Our approach is to replace the self-connected parts (ct) with shortcuts (hl−2 t ) in the internal states. We present extensive empirical experiments showing that this design performs better than the original shortcuts. We evaluate our method on CCG supertagging task, obtaining a 8% relatively improvement over current state-of-the-art results.",
"title": ""
},
{
"docid": "e2cd2edc74d932f1632a858ac124f902",
"text": "Large writes are beneficial both on individual disks and on disk arrays, e.g., RAID-5. The presented design enables large writes of internal B-tree nodes and leaves. It supports both in-place updates and large append-only (“log-structured”) write operations within the same storage volume, within the same B-tree, and even at the same time. The essence of the proposal is to make page migration inexpensive, to migrate pages while writing them, and to make such migration optional rather than mandatory as in log-structured file systems. The inexpensive page migration also aids traditional defragmentation as well as consolidation of free space needed for future large writes. These advantages are achieved with a very limited modification to conventional B-trees that also simplifies other B-tree operations, e.g., key range locking and compression. Prior proposals and prototypes implemented transacted B-tree on top of log-structured file systems and added transaction support to log-structured file systems. Instead, the presented design adds techniques and performance characteristics of log-structured file systems to traditional B-trees and their standard transaction support, notably without adding a layer of indirection for locating B-tree nodes on disk. The result retains fine-granularity locking, full transactional ACID guarantees, fast search performance, etc. expected of a modern B-tree implementation, yet adds efficient transacted page relocation and large, high-bandwidth writes.",
"title": ""
},
{
"docid": "daaa048824f1fa8303a2f4ac95301ccc",
"text": "The Internet of Things (IoT) represents a diverse technology and usage with unprecedented business opportunities and risks. The Internet of Things is changing the dynamics of security industry & reshaping it. It allows data to be transferred seamlessly among physical devices to the Internet. The growth of number of intelligent devices will create a network rich with information that allows supply chains to assemble and communicate in new ways. The technology research firm Gartner predicts that there will be 26 billion installed units on the Internet of Things (IoT) by 2020[1]. This paper explains the concept of Internet of Things (IoT), its characteristics, explain security challenges, technology adoption trends & suggests a reference architecture for E-commerce enterprise.",
"title": ""
},
{
"docid": "6c5c6e201e2ae886908aff554866b9ed",
"text": "HDBSCAN: Hierarchical Density-Based Spatial Clustering of Applications with Noise (Campello, Moulavi, and Sander 2013), (Campello et al. 2015). Performs DBSCAN over varying epsilon values and integrates the result to find a clustering that gives the best stability over epsilon. This allows HDBSCAN to find clusters of varying densities (unlike DBSCAN), and be more robust to parameter selection. The library also includes support for Robust Single Linkage clustering (Chaudhuri et al. 2014), (Chaudhuri and Dasgupta 2010), GLOSH outlier detection (Campello et al. 2015), and tools for visualizing and exploring cluster structures. Finally support for prediction and soft clustering is also available.",
"title": ""
},
{
"docid": "ca9f1a955ad033e43d25533d37f50b88",
"text": "Language in social media is extremely dynamic: new words emerge, trend and disappear, while the meaning of existing words can fluctuate over time. This work addresses several important tasks of visualizing and predicting short term text representation shift, i.e. the change in a word’s contextual semantics. We study the relationship between short-term concept drift and representation shift on a large social media corpus – VKontakte collected during the Russia-Ukraine crisis in 2014 – 2015. We visualize short-term representation shift for example keywords and build predictive models to forecast short-term shifts in meaning from previous meaning as well as from concept drift. We show that short-term representation shift can be accurately predicted up to several weeks in advance and that visualization provides insight into meaning change. Our approach can be used to explore and characterize specific aspects of the streaming corpus during crisis events and potentially improve other downstream classification tasks including real-time event forecasting in social media.",
"title": ""
},
{
"docid": "f8fe22b2801a250a52e3d19ae23804e9",
"text": "Human movements contribute to the transmission of malaria on spatial scales that exceed the limits of mosquito dispersal. Identifying the sources and sinks of imported infections due to human travel and locating high-risk sites of parasite importation could greatly improve malaria control programs. Here, we use spatially explicit mobile phone data and malaria prevalence information from Kenya to identify the dynamics of human carriers that drive parasite importation between regions. Our analysis identifies importation routes that contribute to malaria epidemiology on regional spatial scales.",
"title": ""
},
{
"docid": "3240607824a6dace92925e75df92cc09",
"text": "We propose a framework to model general guillotine restrictions in two-dimensional cutting problems formulated as Mixed Integer Linear Programs (MIP). The modeling framework requires a pseudo-polynomial number of variables and constraints, which can be effectively enumerated for medium-size instances. Our modeling of general guillotine cuts is the first one that, once it is implemented within a state-of-the-art MIP solver, can tackle instances of challenging size. We mainly concentrate our analysis on the Guillotine Two Dimensional Knapsack Problem (G2KP), for which a model, and an exact procedure able to significantly improve the computational performance, are given. We also show how the modeling of general guillotine cuts can be extended to other relevant problems such as the Guillotine Two Dimensional Cutting Stock Problem (G2CSP) and the Guillotine Strip Packing Problem (GSPP). Finally, we conclude the paper discussing an extensive set of computational experiments on G2KP and GSPP benchmark instances from the literature.",
"title": ""
},
{
"docid": "c4d0a1cd8a835dc343b456430791035b",
"text": "Social networks offer an invaluable amount of data from which useful information can be obtained on the major issues in society, among which crime stands out. Research about information extraction of criminal events in Social Networks has been done primarily in English language, while in Spanish, the problem has not been addressed. This paper propose a system for extracting spatio-temporally tagged tweets about crime events in Spanish language. In order to do so, it uses a thesaurus of criminality terms and a NER (named entity recognition) system to process the tweets and extract the relevant information. The NER system is based on the implementation OSU Twitter NLP Tools, which has been enhanced for Spanish language. Our results indicate an improved performance in relation to the most relevant tools such as Standford NER and OSU Twitter NLP Tools, achieving 80.95% precision, 59.65% recall and 68.69% F-measure. The end result shows the crime information broken down by place, date and crime committed through a webservice.",
"title": ""
},
{
"docid": "df48f9d3096d8528e9f517783a044df8",
"text": "We propose a novel generative neural network architecture for Dialogue Act classification. Building upon the Recurrent Neural Network framework, our model incorporates a new attentional technique and a label-to-label connection for sequence learning, akin to Hidden Markov Models. Our experiments show that both of these innovations enable our model to outperform strong baselines for dialogue-act classification on the MapTask and Switchboard corpora. In addition, we analyse empirically the effectiveness of each of these innovations.",
"title": ""
},
{
"docid": "bb9f5ab961668b8aac5f786d33fb7e1f",
"text": "The process that resulted in the diagnostic criteria for posttraumatic stress disorder (PTSD) in the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5; American Psychiatric Association; ) was empirically based and rigorous. There was a high threshold for any changes in any DSM-IV diagnostic criterion. The process is described in this article. The rationale is presented that led to the creation of the new chapter, \"Trauma- and Stressor-Related Disorders,\" within the DSM-5 metastructure. Specific issues discussed about the DSM-5 PTSD criteria themselves include a broad versus narrow PTSD construct, the decisions regarding Criterion A, the evidence supporting other PTSD symptom clusters and specifiers, the addition of the dissociative and preschool subtypes, research on the new criteria from both Internet surveys and the DSM-5 field trials, the addition of PTSD subtypes, the noninclusion of complex PTSD, and comparisons between DSM-5 versus the World Health Association's forthcoming International Classification of Diseases (ICD-11) criteria for PTSD. The PTSD construct continues to evolve. In DSM-5, it has moved beyond a narrow fear-based anxiety disorder to include dysphoric/anhedonic and externalizing PTSD phenotypes. The dissociative subtype may open the way to a fresh approach to complex PTSD. The preschool subtype incorporates important developmental factors affecting the expression of PTSD in young children. Finally, the very different approaches taken by DSM-5 and ICD-11 should have a profound effect on future research and practice.",
"title": ""
},
{
"docid": "d8fab661721e70a64fac930343203d20",
"text": "Studies of a range of higher cognitive functions consistently activate a region of anterior cingulate cortex (ACC), typically posterior to the genu and superior to the corpus collosum. In particular, this ACC region appears to be active in task situations where there is a need to override a prepotent response tendency, when responding is underdetermined, and when errors are made. We have hypothesized that the function of this ACC region is to monitor for the presence of crosstalk or competition between incompatible responses. In prior work, we provided initial support for this hypothesis, demonstrating ACC activity in the same region both during error trials and during correct trials in task conditions designed to elicit greater response competition. In the present study, we extend our testing of this hypothesis to task situations involving underdetermined responding. Specifically, 14 healthy control subjects performed a verb-generation task during event-related functional magnetic resonance imaging (fMRI), with the on-line acquisition of overt verbal responses. The results demonstrated that the ACC, and only the ACC, was more active in a series of task conditions that elicited competition among alternative responses. These conditions included a greater ACC response to: (1) Nouns categorized as low vs. high constraint (i.e., during a norming study, multiple verbs were produced with equal frequency vs. a single verb that produced much more frequently than any other); (2) the production of verbs that were weak associates, rather than, strong associates of particular nouns; and (3) the production of verbs that were weak associates for nouns categorized as high constraint. We discuss the implication of these results for understanding the role that the ACC plays in human cognition.",
"title": ""
},
{
"docid": "448f12ead2cae05dbb2a19e3d565a8f5",
"text": "This paper presents a feature extraction technique based on the Hilbert-Huang Transform (HHT) method for emotion recognition from physiological signals. Four kinds of physiological signals were used for analysis: electrocardiogram (ECG), electromyogram (EMG), skin conductivity (SC) and respiration changes (RSP). Each signal is decomposed into a finite set of AM-FM mono components (fission process) by the Empirical Mode Decomposition (EMD) which is the key part of the HHT. The information components of interest are then combined to create feature vectors (fusion process) for the next classification stage. In addition, classification is performed by using Support Vector Machines (SVM). The classification scores show that HHT based methods outperform traditional statistical techniques and provide a promising framework for both analysis and recognition of physiological signals in emotion recognition.",
"title": ""
},
{
"docid": "89d736c68d2befba66a0b7d876e52502",
"text": "The optical properties of human skin, subcutaneous adipose tissue and human mucosa were measured in the wavelength range 400–2000 nm. The measurements were carried out using a commercially available spectrophotometer with an integrating sphere. The inverse adding–doubling method was used to determine the absorption and reduced scattering coefficients from the measurements.",
"title": ""
},
{
"docid": "bed6312dd677fa37c30e72d0383973ed",
"text": " Fig.1にマスタリーラーニングのアウトラインを示す。 初めに教師はカリキュラムや教材をコンセプトやアイディアが重要であるためレビューする必要がある。 次に教師による診断手段や診断プロセスという形式的評価の計画である。また学習エラーを改善するための Corrective Activitiesの計画の主要な援助でもある。 Corrective Activites 矯正活動にはさまざまな形がとられる。Peer Cross-age Tutoring、コンピュータ支援レッスンなど Enrichment Activities 問題解決練習の特別なtutoringであり、刺激的で早熟な学習者に実りのある学習となっている。 Formative Assesment B もしCorrective Activitiesが学習者を改善しているのならばこの2回目の評価では体得を行っている。 この2回目の評価は学習者に改善されていることや良い学習者になっていることを示し、強力なモチベーショ ンのデバイスとなる。最後は累積的試験または評価の開発がある。",
"title": ""
},
{
"docid": "e8d0eab8c5ea4c3186499aa13cc6fc56",
"text": "A new multiple-input dc-dc converter realized from a modified inverse Watkins-Johnson topology is presented and analyzed. Fundamental electrical characteristics are presented and power budget equations are derived. Small signal analysis model of the propose converter is presented and studied. Two possible operation methods to achieve output voltage regulation are presented here. The analysis is verified with simulations and experiments on a prototype circuit.",
"title": ""
},
{
"docid": "0ec7ac1f00fb20854d622982d28f9056",
"text": "The structure of an air-core cylindrical high voltage pulse transformer is relatively simple, but considerable attention is needed to prevent breakdown between transformer windings. Since the thickness of the spiral windings is on the order of sub-millimeter, field enhancement at the edges of the windings is very high. Therefore, it is important to have proper electrical insulations to prevent breakdown at the edges and to make the system compact. Major design parameters of the transformer are primary inductance of 170 nH, and output voltage of about 500 kV. The fabricated transformer is 45 cm in length and 30 cm in diameter. The fabricated transformer is tested up to 450 kV with a Marx generator. In this paper, we will discuss design and fabrication procedures, and preliminary test results of the air-core cylindrical HV pulse transformer",
"title": ""
},
{
"docid": "e56173228f9d5b89e4173bc83e73d3d2",
"text": "The categorization of gender identity variants (GIVs) as \"mental disorders\" in the Diagnostic and Statistical Manual of Mental Disorders (DSM) of the American Psychiatric Association is highly controversial among professionals as well as among persons with GIV. After providing a brief history of GIV categorizations in the DSM, this paper presents some of the major issues of the ongoing debate: GIV as psychopathology versus natural variation; definition of \"impairment\" and \"distress\" for GID; associated psychopathology and its relation to stigma; the stigma impact of the mental-disorder label itself; the unusual character of \"sex reassignment surgery\" as a psychiatric treatment; and the consequences for health and mental-health services if the disorder label is removed. Finally, several categorization options are examined: Retaining the GID category, but possibly modifying its grouping with other syndromes; narrowing the definition to dysphoria and taking \"disorder\" out of the label; categorizing GID as a neurological or medical rather than a psychiatric disorder; removing GID from both the DSM and the International Classification of Diseases (ICD); and creating a special category for GIV in the DSM. I conclude that-as also evident in other DSM categories-the decision on the categorization of GIVs cannot be achieved on a purely scientific basis, and that a consensus for a pragmatic compromise needs to be arrived at that accommodates both scientific considerations and the service needs of persons with GIVs.",
"title": ""
},
{
"docid": "f649a975dcec02ea82bebb95dafd5eab",
"text": "Online games have emerged as popular computer applications and gamer loyalty is vital to game providers, since online gamers frequently switch between games. Online gamers often participate in teams also. This study investigates whether and how team participation improves loyalty. We utilized a cross-sectional design and an online survey, with 546 valid responses from online game subjects. Confirmatory factor analysis was applied to assess measurement reliability and validity directly, and structural equation modeling was utilized to test our hypotheses. The results indicate that participation in teams motivates online gamers to adhere to team norms and satisfies their social needs, also enhancing their loyalty. The contribution of this research is the introduction of social norms to explain online gamer loyalty. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "3de0b9a5d5241893d8a3de4b723e5140",
"text": "One of the emerging networking standards that gap between the physical world and the cyber one is the Internet of Things. In the Internet of Things, smart objects communicate with each other, data are gathered and certain requests of users are satisfied by different queried data. The development of energy efficient schemes for the IoT is a challenging issue as the IoT becomes more complex due to its large scale the current techniques of wireless sensor networks cannot be applied directly to the IoT. To achieve the green networked IoT, this paper addresses energy efficiency issues by proposing a novel deployment scheme. This scheme, introduces: (1) a hierarchical network design; (2) a model for the energy efficient IoT; (3) a minimum energy consumption transmission algorithm to implement the optimal model. The simulation results show that the new scheme is more energy efficient and flexible than traditional WSN schemes and consequently it can be implemented for efficient communication in the IoT.",
"title": ""
}
] | scidocsrr |
4eec4732e5cfa7dc0ec716a7e2475a23 | Time delay deep neural network-based universal background models for speaker recognition | [
{
"docid": "b7597e1f8c8ae4b40f5d7d1fe1f76a38",
"text": "In this paper we present a Time-Delay Neural Network (TDNN) approach to phoneme recognition which is characterized by two important properties. 1) Using a 3 layer arrangement of simple computing units, a hierarchy can be constructed that allows for the formation of arbitrary nonlinear decision surfaces. The TDNN learns these decision surfaces automatically using error backpropagation 111. 2) The time-delay arrangement enables the network to discover acoustic-phonetic features and the temporal relationships between them independent of position in time and hence not blurred by temporal shifts",
"title": ""
},
{
"docid": "86f478fbf4e38ce1f1d0119a3175adfe",
"text": "We introduce recurrent neural networks (RNNs) for acoustic modeling which are unfolded in time for a fixed number of time steps. The proposed models are feedforward networks with the property that the unfolded layers which correspond to the recurrent layer have time-shifted inputs and tied weight matrices. Besides the temporal depth due to unfolding, hierarchical processing depth is added by means of several non-recurrent hidden layers inserted between the unfolded layers and the output layer. The training of these models: (a) has a complexity that is comparable to deep neural networks (DNNs) with the same number of layers; (b) can be done on frame-randomized minibatches; (c) can be implemented efficiently through matrix-matrix operations on GPU architectures which makes it scalable for large tasks. Experimental results on the Switchboard 300 hours English conversational telephony task show a 5% relative improvement in word error rate over state-of-the-art DNNs trained on FMLLR features with i-vector speaker adaptation and hessianfree sequence discriminative training.",
"title": ""
}
] | [
{
"docid": "80d920f1f886b81e167d33d5059b8afe",
"text": "Agriculture is one of the most important aspects of human civilization. The usages of information and communication technologies (ICT) have significantly contributed in the area in last two decades. Internet of things (IOT) is a technology, where real life physical objects (e.g. sensor nodes) can work collaboratively to create an information based and technology driven system to maximize the benefits (e.g. improved agricultural production) with minimized risks (e.g. environmental impact). Implementation of IOT based solutions, at each phase of the area, could be a game changer for whole agricultural landscape, i.e. from seeding to selling and beyond. This article presents a technical review of IOT based application scenarios for agriculture sector. The article presents a brief introduction to IOT, IOT framework for agricultural applications and discusses various agriculture specific application scenarios, e.g. farming resource optimization, decision support system, environment monitoring and control systems. The article concludes with the future research directions in this area.",
"title": ""
},
{
"docid": "7e07856be3374b4eed585e430d236ebc",
"text": "Exploiting the complex structure of relational data enables to build better models by taking into account the additional information provided by the links between objects. We extend this idea to the Semantic Web by introducing our novel SPARQL-ML approach to perform data mining for Semantic Web data. Our approach is based on traditional SPARQL and statistical relational learning methods, such as Relational Probability Trees and Relational Bayesian Classifiers. We analyze our approach thoroughly conducting three sets of experiments on synthetic as well as real-world data sets. Our analytical results show that our approach can be used for any Semantic Web data set to perform instance-based learning and classification. A comparison to kernel methods used in Support Vector Machines shows that our approach is superior in terms of classification accuracy. Adding Data Mining Support to SPARQL via Statistical Relational Learning Methods Christoph Kiefer, Abraham Bernstein, and André Locher Department of Informatics, University of Zurich, Switzerland {kiefer,bernstein}@ifi.uzh.ch, andre@outerlimits.ch Abstract. Exploiting the complex structure of relational data enables to build better models by taking into account the additional information provided by the links between objects. We extend this idea to the Semantic Web by introducing our novel SPARQL-ML approach to perform data mining for Semantic Web data. Our approach is based on traditional SPARQL and statistical relational learning methods, such as Relational Probability Trees and Relational Bayesian Classifiers. We analyze our approach thoroughly conducting three sets of experiments on synthetic as well as real-world data sets. Our analytical results show that our approach can be used for any Semantic Web data set to perform instance-based learning and classification. A comparison to kernel methods used in Support Vector Machines shows that our approach is superior in terms of classification accuracy. Exploiting the complex structure of relational data enables to build better models by taking into account the additional information provided by the links between objects. We extend this idea to the Semantic Web by introducing our novel SPARQL-ML approach to perform data mining for Semantic Web data. Our approach is based on traditional SPARQL and statistical relational learning methods, such as Relational Probability Trees and Relational Bayesian Classifiers. We analyze our approach thoroughly conducting three sets of experiments on synthetic as well as real-world data sets. Our analytical results show that our approach can be used for any Semantic Web data set to perform instance-based learning and classification. A comparison to kernel methods used in Support Vector Machines shows that our approach is superior in terms of classification accuracy.",
"title": ""
},
{
"docid": "e52bac5b665aae5cf020538ab37356bc",
"text": "The greater decrease of conduction velocity in sensory than in motor fibres of the peroneal, median and ulnar nerves (particularly in the digital segments) found in patients with chronic carbon disulphide poisoning, permitted the diagnosis of polyneuropathy to be made in the subclinical stage, even while the conduction in motor fibres was still within normal limits. A process of axonal degeneration is presumed to underlie occurrence of neuropathy consequent to carbon disulphide poisoning.",
"title": ""
},
{
"docid": "c47c2f7c7958843d67d19837ba081b16",
"text": "Research produced through international collaboration is often more highly cited than other work, but is it also more novel? Using measures of conventionality and novelty developed by Uzzi et al. (2013) and replicated by Boyack and Klavans (2013), we test for novelty and conventionality in international research collaboration. Many studies have shown that international collaboration is more highly cited than national or sole-authored papers. Others have found that coauthored papers are more novel. Scholars have suggested that diverse groups have a greater chance of producing creative work. As such, we expected to find that international collaboration is also more novel. Using data from Web of Science and Scopus in 2005, we failed to show that international collaboration tends to produce more novel articles. In fact, international collaboration appeared to produce less novel and more conventional research. Transaction costs and the limits of global communication may be suppressing novelty, while an “audience effect” may be responsible for higher citation rates. Closer examination across the sciences, social sciences, and arts and humanities, as well as examination of six scientific specialties further illuminates the interplay of conventionality and novelty in work produced by international research teams.",
"title": ""
},
{
"docid": "208b4cb4dc4cee74b9357a5ebb2f739c",
"text": "We report improved AMR parsing results by adding a new action to a transitionbased AMR parser to infer abstract concepts and by incorporating richer features produced by auxiliary analyzers such as a semantic role labeler and a coreference resolver. We report final AMR parsing results that show an improvement of 7% absolute in F1 score over the best previously reported result. Our parser is available at: https://github.com/ Juicechuan/AMRParsing",
"title": ""
},
{
"docid": "24a117cf0e59591514dd8630bcd45065",
"text": "This work presents a coarse-grained distributed genetic algorithm (GA) for RNA secondary structure prediction. This research builds on previous work and contains two new thermodynamic models, INN and INN-HB, which add stacking-energies using base pair adjacencies. Comparison tests were performed against the original serial GA on known structures that are 122, 543, and 784 nucleotides in length on a wide variety of parameter settings. The effects of the new models are investigated, the predicted structures are compared to known structures and the GA is compared against a serial GA with identical models. Both algorithms perform well and are able to predict structures with high accuracy for short sequences.",
"title": ""
},
{
"docid": "e20f6ef6524a422c80544eaf590e326d",
"text": "Computing the semantic similarity/relatedness between terms is an important research area for several disciplines, including artificial intelligence, cognitive science, linguistics, psychology, biomedicine and information retrieval. These measures exploit knowledge bases to express the semantics of concepts. Some approaches, such as the information theoretical approaches, rely on knowledge structure, while others, such as the gloss-based approaches, use knowledge content. Firstly, based on structure, we propose a new intrinsic Information Content (IC) computing method which is based on the quantification of the subgraph formed by the ancestors of the target concept. Taxonomic measures including the IC-based ones consume the topological parameters that must be extracted from taxonomies considered as Directed Acyclic Graphs (DAGs). Accordingly, we propose a routine of graph algorithms that are able to provide some basic parameters, such as depth, ancestors, descendents, Lowest Common Subsumer (LCS). The IC-computing method is assessed using several knowledge structures which are: the noun and verb WordNet “is a” taxonomies, Wikipedia Category Graph (WCG), and MeSH taxonomy. We also propose an aggregation schema that exploits the WordNet “is a” taxonomy and WCG in a complementary way through the IC-based measures to improve coverage capacity. Secondly, taking content into consideration, we propose a gloss-based semantic similarity measure that operates based on the noun weighting mechanism using our IC-computing method, as well as on the WordNet, Wiktionary and Wikipedia resources. Further evaluation is performed on various items, including nouns, verbs, multiword expressions and biomedical datasets, using well-recognized benchmarks. The results indicate an improvement in terms of similarity and relatedness assessment accuracy.",
"title": ""
},
{
"docid": "350dc562863b8702208bfb41c6ceda6a",
"text": "THE use of formal devices for assessing function is becoming standard in agencies serving the elderly. In the Gerontological Society's recent contract study on functional assessment (Howell, 1968), a large assortment of rating scales, checklists, and other techniques in use in applied settings was easily assembled. The present state of the trade seems to be one in which each investigator or practitioner feels an inner compusion to make his own scale and to cry that other existent scales cannot possibly fit his own setting. The authors join this company in presenting two scales first standardized on their own population (Lawton, 1969). They take some comfort, however, in the fact that one scale, the Physical Self-Maintenance Scale (PSMS), is largely a scale developed and used by other investigators (Lowenthal, 1964), which was adapted for use in our own institution. The second of the scales, the Instrumental Activities of Daily Living Scale (IADL), taps a level of functioning heretofore inadequately represented in attempts to assess everyday functional competence. Both of the scales have been tested further for their usefulness in a variety of types of institutions and other facilities serving community-resident older people. Before describing in detail the behavior measured by these two scales, we shall briefly describe the schema of competence into which these behaviors fit (Lawton, 1969). Human behavior is viewed as varying in the degree of complexity required for functioning in a variety of tasks. The lowest level is called life maintenance, followed by the successively more complex levels of func-",
"title": ""
},
{
"docid": "cb0b7879f61630b467aa595d961bfcef",
"text": "UNLABELLED\nGlucagon-like peptide 1 (GLP-1[7-36 amide]) is an incretin hormone primarily synthesized in the lower gut (ileum, colon/rectum). Nevertheless, there is an early increment in plasma GLP-1 immediately after ingesting glucose or mixed meals, before nutrients have entered GLP-1 rich intestinal regions. The responsible signalling pathway between the upper and lower gut is not clear. It was the aim of this study to see, whether small intestinal resection or colonectomy changes GLP-1[7-36 amide] release after oral glucose. In eight healthy controls, in seven patients with inactive Crohn's disease (no surgery), in nine patients each after primarily jejunal or ileal small intestinal resections, and in six colonectomized patients not different in age (p = 0.10), body-mass-index (p = 0.24), waist-hip-ratio (p = 0.43), and HbA1c (p = 0.22), oral glucose tolerance tests (75 g) were performed in the fasting state. GLP-1[7-36 amide], insulin C-peptide, GIP and glucagon (specific (RIAs) were measured over 240 min.\n\n\nSTATISTICS\nRepeated measures ANOVA, t-test (significance: p < 0.05). A clear and early (peak: 15-30 min) GLP-1[7-36 amide] response was observed in all subjects, without any significant difference between gut-resected and control groups (p = 0.95). There were no significant differences in oral glucose tolerance (p = 0.21) or in the suppression of pancreatic glucagon (p = 0.36). Colonectomized patients had a higher insulin (p = 0.011) and C-peptide (p = 0.0023) response in comparison to all other groups. GIP responses also were higher in the colonectomized patients (p = 0.0005). Inactive Crohn's disease and resections of the small intestine as well as proctocolectomy did not change overall GLP-1[7-36 amide] responses and especially not the early increment after oral glucose. This may indicate release of GLP-1[7-36 amide] after oral glucose from the small number of GLP-1[7-36 amide] producing L-cells in the upper gut rather than from the main source in the ileum, colon and rectum. Colonectomized patients are characterized by insulin hypersecretion, which in combination with their normal oral glucose tolerance possibly indicates a reduced insulin sensitivity in this patient group. GIP may play a role in mediating insulin hypersecretion in these patients.",
"title": ""
},
{
"docid": "13a4dccde0ae401fc39b50469a0646b6",
"text": "The stability theorem for persistent homology is a central result in topological data analysis. While the original formulation of the result concerns the persistence barcodes of R-valued functions, the result was later cast in a more general algebraic form, in the language of persistence modules and interleavings. In this paper, we establish an analogue of this algebraic stability theorem for zigzag persistence modules. To do so, we functorially extend each zigzag persistence module to a two-dimensional persistence module, and establish an algebraic stability theorem for these extensions. One part of our argument yields a stability result for free two-dimensional persistence modules. As an application of our main theorem, we strengthen a result of Bauer et al. on the stability of the persistent homology of Reeb graphs. Our main result also yields an alternative proof of the stability theorem for level set persistent homology of Carlsson et al.",
"title": ""
},
{
"docid": "349a9374e3ff6c068f26c0a1b0dfe3a2",
"text": "Heart failure (HF) is a growing healthcare burden and one of the leading causes of hospitalizations and readmission. Preventing readmissions for HF patients is an increasing priority for clinicians, researchers, and various stakeholders. The following review will discuss the interventions found to reduce readmissions for patients and improve hospital performance on the 30-day readmission process measure. While evidence-based therapies for HF management have proliferated, the consistent implementation of these therapies and development of new strategies to more effectively prevent readmissions remain areas for continued improvement.",
"title": ""
},
{
"docid": "097912a74fbc55ba7909b6e0622c0b42",
"text": "Many ubiquitous computing applications involve human activity recognition based on wearable sensors. Although this problem has been studied for a decade, there are a limited number of publicly available datasets to use as standard benchmarks to compare the performance of activity models and recognition algorithms. In this paper, we describe the freely available USC human activity dataset (USC-HAD), consisting of well-defined low-level daily activities intended as a benchmark for algorithm comparison particularly for healthcare scenarios. We briefly review some existing publicly available datasets and compare them with USC-HAD. We describe the wearable sensors used and details of dataset construction. We use high-precision well-calibrated sensing hardware such that the collected data is accurate, reliable, and easy to interpret. The goal is to make the dataset and research based on it repeatable and extendible by others.",
"title": ""
},
{
"docid": "9c5d3f89d5207b42d7e2c8803b29994c",
"text": "With the advent of data mining, machine learning has come of age and is now a critical technology in many businesses. However, machine learning evolved in a different research context to that in which it now finds itself employed. A particularly important problem in the data mining world is working effectively with large data sets. However, most machine learning research has been conducted in the context of learning from very small data sets. To date most approaches to scaling up machine learning to large data sets have attempted to modify existing algorithms to deal with large data sets in a more computationally efficient and effective manner. But is this necessarily the best method? This paper explores the possibility of designing algorithms specifically for large data sets. Specifically, the paper looks at how increasing data set size affects bias and variance error decompositions for classification algorithms. Preliminary results of experiments to determine these effects are presented, showing that, as hypothesised variance can be expected to decrease as training set size increases. No clear effect of training set size on bias was observed. These results have profound implications for data mining from large data sets, indicating that developing effective learning algorithms for large data sets is not simply a matter of finding computationally efficient variants of existing learning algorithms.",
"title": ""
},
{
"docid": "223a7496c24dcf121408ac3bba3ad4e5",
"text": "Process control and SCADA systems, with their reliance on proprietary networks and hardware, have long been considered immune to the network attacks that have wreaked so much havoc on corporate information systems. Unfortunately, new research indicates this complacency is misplaced – the move to open standards such as Ethernet, TCP/IP and web technologies is letting hackers take advantage of the control industry’s ignorance. This paper summarizes the incident information collected in the BCIT Industrial Security Incident Database (ISID), describes a number of events that directly impacted process control systems and identifies the lessons that can be learned from these security events.",
"title": ""
},
{
"docid": "e8edb58537ada97ee5da365fa096ae2d",
"text": "In this paper, we present a novel semi-supervised learning framework based on `1 graph. The `1 graph is motivated by that each datum can be reconstructed by the sparse linear superposition of the training data. The sparse reconstruction coefficients, used to deduce the weights of the directed `1 graph, are derived by solving an `1 optimization problem on sparse representation. Different from conventional graph construction processes which are generally divided into two independent steps, i.e., adjacency searching and weight selection, the graph adjacency structure as well as the graph weights of the `1 graph is derived simultaneously and in a parameter-free manner. Illuminated by the validated discriminating power of sparse representation in [16], we propose a semi-supervised learning framework based on `1 graph to utilize both labeled and unlabeled data for inference on a graph. Extensive experiments on semi-supervised face recognition and image classification demonstrate the superiority of our proposed semi-supervised learning framework based on `1 graph over the counterparts based on traditional graphs.",
"title": ""
},
{
"docid": "74ccb28a31d5a861bea1adfaab2e9bf1",
"text": "For many decades CMOS devices have been successfully scaled down to achieve higher speed and increased performance of integrated circuits at lower cost. Today’s charge-based CMOS electronics encounters two major challenges: power dissipation and variability. Spintronics is a rapidly evolving research and development field, which offers a potential solution to these issues by introducing novel ‘more than Moore’ devices. Spin-based magnetoresistive random-access memory (MRAM) is already recognized as one of the most promising candidates for future universal memory. Magnetic tunnel junctions, the main elements of MRAM cells, can also be used to build logic-in-memory circuits with non-volatile storage elements on top of CMOS logic circuits, as well as versatile compact on-chip oscillators with low power consumption. We give an overview of CMOS-compatible spintronics applications. First, we present a brief introduction to the physical background considering such effects as magnetoresistance, spin-transfer torque (STT), spin Hall effect, and magnetoelectric effects. We continue with a comprehensive review of the state-of-the-art spintronic devices for memory applications (STT-MRAM, domain wallmotion MRAM, and spin–orbit torque MRAM), oscillators (spin torque oscillators and spin Hall nano-oscillators), logic (logic-in-memory, all-spin logic, and buffered magnetic logic gate grid), sensors, and random number generators. Devices with different types of resistivity switching are analyzed and compared, with their advantages highlighted and challenges revealed. CMOScompatible spintronic devices are demonstrated beginning with predictive simulations, proceeding to their experimental confirmation and realization, and finalized by the current status of application in modern integrated systems and circuits. We conclude the review with an outlook, where we share our vision on the future applications of the prospective devices in the area.",
"title": ""
},
{
"docid": "1278d0b3ea3f06f52b2ec6b20205f8d0",
"text": "The future global Internet is going to have to cater to users that will be largely mobile. Mobility is one of the main factors affecting the design and performance of wireless networks. Mobility modeling has been an active field for the past decade, mostly focusing on matching a specific mobility or encounter metric with little focus on matching protocol performance. This study investigates the adequacy of existing mobility models in capturing various aspects of human mobility behavior (including communal behavior), as well as network protocol performance. This is achieved systematically through the introduction of a framework that includes a multi-dimensional mobility metric space. We then introduce COBRA, a new mobility model capable of spanning the mobility metric space to match realistic traces. A methodical analysis using a range of protocol (epidemic, spraywait, Prophet, and Bubble Rap) dependent and independent metrics (modularity) of various mobility models (SMOOTH and TVC) and traces (university campuses, and theme parks) is done. Our results indicate significant gaps in several metric dimensions between real traces and existing mobility models. Our findings show that COBRA matches communal aspect and realistic protocol performance, reducing the overhead gap (w.r.t existing models) from 80% to less than 12%, showing the efficacy of our framework.",
"title": ""
},
{
"docid": "98cd53e6bf758a382653cb7252169d22",
"text": "We introduce a novel malware detection algorithm based on the analysis of graphs constructed from dynamically collected instruction traces of the target executable. These graphs represent Markov chains, where the vertices are the instructions and the transition probabilities are estimated by the data contained in the trace. We use a combination of graph kernels to create a similarity matrix between the instruction trace graphs. The resulting graph kernel measures similarity between graphs on both local and global levels. Finally, the similarity matrix is sent to a support vector machine to perform classification. Our method is particularly appealing because we do not base our classifications on the raw n-gram data, but rather use our data representation to perform classification in graph space. We demonstrate the performance of our algorithm on two classification problems: benign software versus malware, and the Netbull virus with different packers versus other classes of viruses. Our results show a statistically significant improvement over signature-based and other machine learning-based detection methods.",
"title": ""
},
{
"docid": "41e9dac7301e00793c6e4891e07b53fa",
"text": "We present an intriguing property of visual data that we observe in our attempt to isolate the influence of data for learning a visual representation. We observe that we can get better performance than existing model by just conditioning the existing representation on a million unlabeled images without any extra knowledge. As a by-product of this study, we achieve results better than prior state-of-theart for surface normal estimation on NYU-v2 depth dataset, and improved results for semantic segmentation using a selfsupervised representation on PASCAL-VOC 2012 dataset.",
"title": ""
},
{
"docid": "ce9238236040aed852b1c8f255088b61",
"text": "This paper proposes a high efficiency LLC resonant inverter for induction heating applications by using asymmetrical voltage cancellation control. The proposed control method is implemented in a full-bridge topology for induction heating application. The operating frequency is automatically adjusted to maintain a small constant lagging phase angle under load parameter variation. The output power is controlled using the asymmetrical voltage cancellation technique. The LLC resonant tank is designed without the use of output transformer. This results in an increase of the net efficiency of the induction heating system. The validity of the proposed method is verified through computer simulation and hardware experiment at the operating frequency of 93 to 96 kHz.",
"title": ""
}
] | scidocsrr |
64369bf5f3f3924cce8fb7f37cc9b129 | Understanding symmetries in deep networks | [
{
"docid": "10c357d046dbf27cab92b1c3f91affb1",
"text": "We propose a novel deep architecture, SegNet, for semantic pixel wise image labelling 1. SegNet has several attractive properties; (i) it only requires forward evaluation of a fully learnt function to obtain smooth label predictions, (ii) with increasing depth, a larger context is considered for pixel labelling which improves accuracy, and (iii) it is easy to visualise the effect of feature activation(s) in the pixel label space at any depth. SegNet is composed of a stack of encoders followed by a corresponding decoder stack which feeds into a soft-max classification layer. The decoders help map low resolution feature maps at the output of the encoder stack to full input image size feature maps. This addresses an important drawback of recent deep learning approaches which have adopted networks designed for object categorization for pixel wise labelling. These methods lack a mechanism to map deep layer feature maps to input dimensions. They resort to ad hoc methods to upsample features, e.g. by replication. This results in noisy predictions and also restricts the number of pooling layers in order to avoid too much upsampling and thus reduces spatial context. SegNet overcomes these problems by learning to map encoder outputs to image pixel labels. We test the performance of SegNet on outdoor RGB scenes from CamVid, KITTI and indoor scenes from the NYU dataset. Our results show that SegNet achieves state-of-the-art performance even without use of additional cues such as depth, video frames or post-processing with CRF models.",
"title": ""
},
{
"docid": "60ea2144687d867bb4f6b21e792a8441",
"text": "Stochastic gradient descent is a simple approach to find the local minima of a cost function whose evaluations are corrupted by noise. In this paper, we develop a procedure extending stochastic gradient descent algorithms to the case where the function is defined on a Riemannian manifold. We prove that, as in the Euclidian case, the gradient descent algorithm converges to a critical point of the cost function. The algorithm has numerous potential applications, and is illustrated here by four examples. In particular a novel gossip algorithm on the set of covariance matrices is derived and tested numerically.",
"title": ""
}
] | [
{
"docid": "48168ed93d710d3b85b7015f2c238094",
"text": "ion and hierarchical information processing are hallmarks of human and animal intelligence underlying the unrivaled flexibility of behavior in biological systems. Achieving such flexibility in artificial systems is challenging, even with more and more computational power. Here, we investigate the hypothesis that abstraction and hierarchical information processing might in fact be the consequence of limitations in information-processing power. In particular, we study an information-theoretic framework of bounded rational decision-making that trades off utility maximization against information-processing costs. We apply the basic principle of this framework to perception-action systems with multiple information-processing nodes and derive bounded-optimal solutions. We show how the formation of abstractions and decision-making hierarchies depends on information-processing costs. We illustrate the theoretical ideas with example simulations and conclude by formalizing a mathematically unifying optimization principle that could potentially be extended to more complex systems.",
"title": ""
},
{
"docid": "abb748541b980385e4b8bc477c5adc0e",
"text": "Spin–orbit torque, a torque brought about by in-plane current via the spin–orbit interactions in heavy-metal/ferromagnet nanostructures, provides a new pathway to switch the magnetization direction. Although there are many recent studies, they all build on one of two structures that have the easy axis of a nanomagnet lying orthogonal to the current, that is, along the z or y axes. Here, we present a new structure with the third geometry, that is, with the easy axis collinear with the current (along the x axis). We fabricate a three-terminal device with a Ta/CoFeB/MgO-based stack and demonstrate the switching operation driven by the spin–orbit torque due to Ta with a negative spin Hall angle. Comparisons with different geometries highlight the previously unknown mechanisms of spin–orbit torque switching. Our work offers a new avenue for exploring the physics of spin–orbit torque switching and its application to spintronics devices.",
"title": ""
},
{
"docid": "0b1baa3190abb39284f33b8e73bcad1d",
"text": "Despite significant advances in machine learning and perception over the past few decades, perception algorithms can still be unreliable when deployed in challenging time-varying environments. When these systems are used for autonomous decision-making, such as in self-driving vehicles, the impact of their mistakes can be catastrophic. As such, it is important to characterize the performance of the system and predict when and where it may fail in order to take appropriate action. While similar in spirit to the idea of introspection, this work introduces a new paradigm for predicting the likely performance of a robot’s perception system based on past experience in the same workspace. In particular, we propose two models that probabilistically predict perception performance from observations gathered over time. While both approaches are place-specific, the second approach additionally considers appearance similarity when incorporating past observations. We evaluate our method in a classical decision-making scenario in which the robot must choose when and where to drive autonomously in 60 km of driving data from an urban environment. Results demonstrate that both approaches lead to fewer false decisions (in terms of incorrectly offering or denying autonomy) for two different detector models, and show that leveraging visual appearance within a state-of-the-art navigation framework increases the accuracy of our performance predictions.",
"title": ""
},
{
"docid": "4b057d86825e346291d675e0c1285fad",
"text": "We describe theclipmap, a dynamic texture representation that efficiently caches textures of arbitrarily large size in a finite amount of physical memory for rendering at real-time rates. Further, we describe a software system for managing clipmaps that supports integration into demanding real-time applications. We show the scale and robustness of this integrated hardware/software architecture by reviewing an application virtualizing a 170 gigabyte texture at 60 Hertz. Finally, we suggest ways that other rendering systems may exploit the concepts underlying clipmaps to solve related problems. CR",
"title": ""
},
{
"docid": "9e50093d32e0a8c6ab40b1eb2c063a04",
"text": "Credit card fraud detection is a very challenging problem because of the specific nature of transaction data and the labeling process. The transaction data are peculiar because they are obtained in a streaming fashion, and they are strongly imbalanced and prone to non-stationarity. The labeling is the outcome of an active learning process, as every day human investigators contact only a small number of cardholders (associated with the riskiest transactions) and obtain the class (fraud or genuine) of the related transactions. An adequate selection of the set of cardholders is therefore crucial for an efficient fraud detection process. In this paper, we present a number of active learning strategies and we investigate their fraud detection accuracies. We compare different criteria (supervised, semi-supervised and unsupervised) to query unlabeled transactions. Finally, we highlight the existence of an exploitation/exploration trade-off for active learning in the context of fraud detection, which has so far been overlooked in the literature.",
"title": ""
},
{
"docid": "c86fbf52aecb41ce4f3d806f62965c50",
"text": "Multi-core end-systems use Receive Side Scaling (RSS) to parallelize protocol processing. RSS uses a hash function on the standard flow descriptors and an indirection table to assign incoming packets to receive queues which are pinned to specific cores. This ensures flow affinity in that the interrupt processing of all packets belonging to a specific flow is processed by the same core. A key limitation of standard RSS is that it does not consider the application process that consumes the incoming data in determining the flow affinity. In this paper, we carry out a detailed experimental analysis of the performance impact of the application affinity in a 40 Gbps testbed network with a dual hexa-core end-system. We show, contrary to conventional wisdom, that when the application process and the flow are affinitized to the same core, the performance (measured in terms of end-to-end TCP throughput) is significantly lower than the line rate. Near line rate performance is observed when the flow and the application process are affinitized to different cores belonging to the same socket. Furthermore, affinitizing the application and the flow to cores on different sockets results in significantly lower throughput than the line rate. These results arise due to the memory bottleneck, which is demonstrated using preliminary correlational data on the cache hit rate in the core that services the application process.",
"title": ""
},
{
"docid": "69eceabd9967260cbdec56d02bcafd83",
"text": "A modified Vivaldi antenna is proposed in this paper especially for the millimeter-wave application. The metal support frame is used to fix the structured substrate and increased the front-to-back ratio as well as the radiation gain. Detailed design process are presented, following which one sample is designed with its working frequency band from 75GHz to 150 GHz. The sample is also fabricated and measured. Good agreements between simulated results and measured results are obtained.",
"title": ""
},
{
"docid": "db5865f8f8701e949a9bb2f41eb97244",
"text": "This paper proposes a method for constructing local image descriptors which efficiently encode texture information and are suitable for histogram based representation of image regions. The method computes a binary code for each pixel by linearly projecting local image patches onto a subspace, whose basis vectors are learnt from natural images via independent component analysis, and by binarizing the coordinates in this basis via thresholding. The length of the binary code string is determined by the number of basis vectors. Image regions can be conveniently represented by histograms of pixels' binary codes. Our method is inspired by other descriptors which produce binary codes, such as local binary pattern and local phase quantization. However, instead of heuristic code constructions, the proposed approach is based on statistics of natural images and this improves its modeling capacity. The experimental results show that our method improves accuracy in texture recognition tasks compared to the state-of-the-art.",
"title": ""
},
{
"docid": "682686007186f8af85f2eb27b49a2df5",
"text": "In the last few years, deep learning has lead to very good performance on a variety of problems, such as object recognition, speech recognition and natural language processing. Among different types of deep neural networks, convolutional neural networks have been most extensively studied. Due to the lack of training data and computing power in early days, it is hard to train a large high-capacity convolutional neural network without overfitting. Recently, with the rapid growth of data size and the increasing power of graphics processor unit, many researchers have improved the convolutional neural networks and achieved state-of-the-art results on various tasks. In this paper, we provide a broad survey of the recent advances in convolutional neural networks. Besides, we also introduce some applications of convolutional neural networks in computer vision.",
"title": ""
},
{
"docid": "82b8f70d4705caae5d334b721a8e2c7e",
"text": "This paper presents the design concept, models, and open-loop control of a particular form of a variablereluctance spherical motor (VRSM), referred here as a spherical wheel motor (SWM). Unlike existing spherical motors where design focuses have been on controlling the three degrees of freedom (DOF) angular displacements, the SWM offers a means to control the orientation of a continuously rotating shaft in an open-loop (OL) fashion. We provide a formula for deriving different switching sequences (full step and fractional step) for a specified current magnitude and pole configurations. The concept feasibility of an OL controlled SWM has been experimentally demonstrated on a prototype that has 8 rotor permanent-magnet (PM) pole-pairs and 10 stator electromagnet (EM) pole-pairs.",
"title": ""
},
{
"docid": "712d292b38a262a8c37679c9549a631d",
"text": "Addresses for correspondence: Dr Sara de Freitas, London Knowledge Lab, Birkbeck College, University of London, 23–29 Emerald Street, London WC1N 3QS. UK. Tel: +44(0)20 7763 2117; fax: +44(0)20 7242 2754; email: sara@lkl.ac.uk. Steve Jarvis, Vega Group PLC, 2 Falcon Way, Shire Park, Welwyn Garden City, Herts AL7 1TW, UK. Tel: +44 (0)1707 362602; Fax: +44 (0)1707 393909; email: steve.jarvis@vega.co.uk",
"title": ""
},
{
"docid": "4681e8f07225e305adfc66cd1b48deb8",
"text": "Collaborative work among students, while an important topic of inquiry, needs further treatment as we still lack the knowledge regarding obstacles that students face, the strategies they apply, and the relations among personal and group aspects. This article presents a diary study of 54 master’s students conducting group projects across four semesters. A total of 332 diary entries were analysed using the C5 model of collaboration that incorporates elements of communication, contribution, coordination, cooperation and collaboration. Quantitative and qualitative analyses show how these elements relate to one another for students working on collaborative projects. It was found that face-to-face communication related positively with satisfaction and group dynamics, whereas online chat correlated positively with feedback and closing the gap. Managing scope was perceived to be the most common challenge. The findings suggest the varying affordances and drawbacks of different methods of communication, collaborative work styles and the strategies of group members.",
"title": ""
},
{
"docid": "59323291555a82ef99013bd4510b3020",
"text": "This paper aims to classify and analyze recent as well as classic image registration techniques. Image registration is the process of super imposing images of the same scene taken at different times, location and by different sensors. It is a key enabling technology in medical image analysis for integrating and analyzing information from various modalities. Basically image registration finds temporal correspondences between the set of images and uses transformation model to infer features from these correspondences.The approaches for image registration can beclassified according to their nature vizarea-based and feature-based and dimensionalityvizspatial domain and frequency domain. The procedure of image registration by intensity based model, spatial domain transform, Rigid transform and Non rigid transform based on the above mentioned classification has been performed and the eminence of image is measured by the three quality parameters such as SNR, PSNR and MSE. The techniques have been implemented and inferred thatthe non-rigid transform exhibit higher perceptual quality and offer visually sharper image than other techniques.Problematic issues of image registration techniques and outlook for the future research are discussed. This work may be one of the comprehensive reference sources for the researchers involved in image registration.",
"title": ""
},
{
"docid": "26fad325410424982d29577e49797159",
"text": "How do the statements made by people in online political discussions affect other people's willingness to express their own opinions, or argue for them? And how does group interaction ultimately shape individual opinions? We examine carefully whether and how patterns of group discussion shape (a) individuals' expressive behavior within those discussions and (b) changes in personal opinions. This research proposes that the argumentative \"climate\" of group opinion indeed affects postdiscussion opinions, and that a primary mechanism responsible for this effect is an intermediate influence on individual participants' own expressions during the online discussions. We find support for these propositions in data from a series of 60 online group discussions, involving ordinary citizens, about the tax plans offered by rival U.S. presidential candidates George W. Bush and Al Gore in 2000. This journal article is available at ScholarlyCommons: http://repository.upenn.edu/asc_papers/99 Normative and Informational Influences in Online Political Discussions Vincent Price, Lilach Nir, & Joseph N. Cappella 1 Annenberg School for Communication, University of Pennsylvania, Philadelphia, PA 19104 6220 2 Department of Communication and the Department of Political Science, Hebrew University of Jerusalem, Jerusalem, Israel, 91905 How do the statements made by people in online political discussions affect other peo ple’s willingness to express their own opinions, or argue for them? And how does group interaction ultimately shape individual opinions? We examine carefully whether and how patterns of group discussion shape (a) individuals’ expressive behavior within those discussions and (b) changes in personal opinions. This research proposes that the argu mentative ‘‘climate’’ of group opinion indeed affects postdiscussion opinions, and that a primary mechanism responsible for this effect is an intermediate influence on individ ual participants’ own expressions during the online discussions. We find support for these propositions in data from a series of 60 online group discussions, involving ordi nary citizens, about the tax plans offered by rival U.S. presidential candidates George W. Bush and Al Gore in 2000. Investigations of social influence and public opinion go hand in hand. Opinions may exist as psychological phenomena in individual minds, but the processes that shape these opinions—at least, public opinions—are inherently social–psychological. The notion that group interaction can influence individual opinions is widely accepted. Indeed, according to many participatory theories of democracy, lively exchanges among citizens are deemed central to the formation of sound or ‘‘true’’ public opinion, which is forged in the fire of group discussion. This truly public opinion is commonly contrasted with mass or ‘‘pseudo’’-opinion developed in isolation by disconnected media consumers responding individually to the news (e.g., Blumer, 1946; Fishkin, 1991, 1995; Graber, 1982). Although discussion is celebrated in democratic theory as a critical element of proper opinion formation, it also brings with it a variety of potential downsides. 
These include a possible tyranny of the majority (e.g., de Tocqueville, 1835/1945), distorted expression of opinions resulting from fear of social isolation (Noelle-Neumann, 1984), or shifts of opinion to more extreme positions than most individuals might actually prefer (see, e.g., Janis, 1972, on dangerous forms of ‘‘group think,’’ or more recently Sunstein, 2001, on the polarizing effects of ‘‘enclave’’ communication on the Web). The problem of how to foster productive social interaction while avoiding potential dysfunctions of group influence has occupied a large place in normative writings on public opinion and democracy. Modern democracies guarantee freedom of association and public expression; they also employ systems and procedures aimed at protecting collective decision making from untoward social pressure, including not only the use of secret ballots in elections but also more generally republican legislatures and executive and judicial offices that by design are insulated from too much democracy, that is, from direct popular control (e.g., Madison, 1788/1966). However, steady advances in popular education and growth of communication media have enlarged expectations of the ordinary citizen and brought calls for more direct, popular participation in government. In particular, dramatic technological changes over the past several decades—and especially the rise of interactive forms of electronic communication enabled by the Internet and World Wide Web—have fueled hopes for new, expansive, and energized forms of ‘‘teledemocracy’’ (e.g., Arterton, 1987). Online political discussion is thus of considerable interest to students of public opinion and political communication. It has been credited with creating vital spaces for public conversation, opening in a new ‘‘public sphere’’ of the sort envisioned by Habermas (1962/1989), (see, e.g., Papacharissi, 2004; Poor, 2005; Poster, 1997). Though still not a routine experience for citizens, it has been steadily growing in prevalence and likely import for popular opinion formation. Recent surveys indicate that close to a third of Internet users regularly engage with groups online, with nearly 10% reporting that they joined online discussions about the 2004 presidential election (Pew Research Center, 2005). Online political discussion offers new and potentially quite powerful modes of scientific observation as well. Despite continuous methodological improvements, the mainstay of public opinion research, the general-population survey, has always consisted of randomly sampled, one-on-one, respondent-to-interviewer ‘‘conversations’’ aimed at extracting precoded responses or short verbal answers to structured questionnaires. Web-based technologies, however, may now permit randomly constituted respondent-withrespondent group conversations. The conceptual fit between such conversations and the phenomenon of public opinion, itself grounded in popular discussion, renders it quite appealing. Developments in electronic data storage and retrieval, and telecommunication networks of increasing channel capacity, now make possible an integration of general-population survey techniques and more qualitative research approaches, such as focus group methods, that have become popular in large part owing to the sense that they offer a more refined understanding of popular thought than might be gained from structured surveys (e.g., Morgan, 1997). Perhaps most important, the study of online discussion opens new theoretical avenues for public opinion research. 
Understanding online citizen interactions calls for bringing together several strands of theory in social psychology, smallgroup decision making, and political communication that have heretofore been disconnected (Price, 1992). Social influence in opinion formation Certainly, the most prominent theory of social influence in public opinion research has been Noelle-Neumann’s (1984) spiral of silence. Citing early research on group conformity processes, such as that of Asch (1956), Noelle-Neumann argued that media depictions of the normative ‘‘climate of opinion’’ have a silencing effect on those who hold minority viewpoints. The reticence of minorities to express their views contributes to the appearance of a solid majority opinion, which, in turn, produces a spiral of silence that successively emboldens the majority and enervates the minority. Meta-analytic evaluations of research on the hypothetical silencing effect of the mediated climate of opinion suggest that such effects, if they indeed exist, appear to be fairly small (Glynn, Hayes, & Shanahan, 1997); nevertheless, the theory has garnered considerable empirical attention and remains influential. In experimental social psychology, group influence has been the object of systematic study for over half a century. Although no single theoretical framework is available for explaining how social influence operates, some important organizing principles and concepts have emerged over time (Price & Oshagan, 1995). One of the most useful heuristics, proposed by Deutsch and Gerard (1955), distinguishes two broad forms of social influence (see also Kaplan & Miller, 1987). Normative social influence occurs when someone is motivated by a desire to conform to the positive expectations of other people. Motivations for meeting these normative expectations lie in the various rewards that might accrue (self-esteem or feelings of social approval) or possible negative sanctions that might result from deviant behavior (alienation, excommunication, or social isolation). Normative social influence is clearly the basis of Noelle-Neumann’s (1984) theorizing about minorities silencing themselves in the face of majority pressure. Informational social influence, in contrast, occurs when people accept the words, opinions, and deeds of others as valid evidence about reality. People learn about the world, in part, from discovering that they disagree (e.g., Burnstein & Vinokur, 1977; Vinokur & Burnstein, 1974). They are influenced by groups not only because of group norms, but also because of arguments that arise in groups, through a comparison of their views to those expressed by others (see also the distinction between normative and comparative functions of reference groups in sociology, e.g., Hyman & Singer, 1968; Kelley, 1952). Although the distinction between informational and normative influence has proven useful and historically important in small-group research, it can become cloudy in many instances. This is so because normative pressure and persuasive information operate in similar ways within groups, and often with similar effects. For example, the tendency of groups to polarize—that is, to move following discussion to extreme positions in the direction that group members were initially inc",
"title": ""
},
{
"docid": "9b942a1342eb3c4fd2b528601fa42522",
"text": "Peer and self-assessment offer an opportunity to scale both assessment and learning to global classrooms. This article reports our experiences with two iterations of the first large online class to use peer and self-assessment. In this class, peer grades correlated highly with staff-assigned grades. The second iteration had 42.9% of students’ grades within 5% of the staff grade, and 65.5% within 10%. On average, students assessed their work 7% higher than staff did. Students also rated peers’ work from their own country 3.6% higher than those from elsewhere. We performed three experiments to improve grading accuracy. We found that giving students feedback about their grading bias increased subsequent accuracy. We introduce short, customizable feedback snippets that cover common issues with assignments, providing students more qualitative peer feedback. Finally, we introduce a data-driven approach that highlights high-variance items for improvement. We find that rubrics that use a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance. After revising rubrics, median grading error decreased from 12.4% to 9.9%.",
"title": ""
},
{
"docid": "e52f5174a9d5161e18eced6e2eb36684",
"text": "The clinical use of ivabradine has and continues to evolve along channels that are predicated on its mechanism of action. It selectively inhibits the funny current (If) in sinoatrial nodal tissue, resulting in a decrease in the rate of diastolic depolarization and, consequently, the heart rate, a mechanism that is distinct from those of other negative chronotropic agents. Thus, it has been evaluated and is used in select patients with systolic heart failure and chronic stable angina without clinically significant adverse effects. Although not approved for other indications, ivabradine has also shown promise in the management of inappropriate sinus tachycardia. Here, the authors review the mechanism of action of ivabradine and salient studies that have led to its current clinical indications and use.",
"title": ""
},
{
"docid": "2e66317dfe4005c069ceac2d4f9e3877",
"text": "The Semantic Web presents the vision of a distributed, dynamically growing knowledge base founded on formal logic. Common users, however, seem to have problems even with the simplest Boolean expression. As queries from web search engines show, the great majority of users simply do not use Boolean expressions. So how can we help users to query a web of logic that they do not seem to understand? We address this problem by presenting Ginseng, a quasi natural language guided query interface to the Semantic Web. Ginseng relies on a simple question grammar which gets dynamically extended by the structure of an ontology to guide users in formulating queries in a language seemingly akin to English. Based on the grammar Ginseng then translates the queries into a Semantic Web query language (RDQL), which allows their execution. Our evaluation with 20 users shows that Ginseng is extremely simple to use without any training (as opposed to any logic-based querying approach) resulting in very good query performance (precision = 92.8%, recall = 98.4%). We, furthermore, found that even with its simple grammar/approach Ginseng could process over 40% of questions from a query corpus without modification.",
"title": ""
},
{
"docid": "41b712d0d485c65a8dff32725c215f97",
"text": "In this article, we present a novel, multi-user, virtual reality environment for the interactive, collaborative 3D analysis of large 3D scans and the technical advancements that were necessary to build it: a multi-view rendering system for large 3D point clouds, a suitable display infrastructure, and a suite of collaborative 3D interaction techniques. The cultural heritage site of Valcamonica in Italy with its large collection of prehistoric rock-art served as an exemplary use case for evaluation. The results show that our output-sensitive level-of-detail rendering system is capable of visualizing a 3D dataset with an aggregate size of more than 14 billion points at interactive frame rates. The system design in this exemplar application results from close exchange with a small group of potential users: archaeologists with expertise in rockart. The system allows them to explore the prehistoric art and its spatial context with highly realistic appearance. A set of dedicated interaction techniques was developed to facilitate collaborative visual analysis. A multi-display workspace supports the immediate comparison of geographically distributed artifacts. An expert review of the final demonstrator confirmed the potential for added value in rock-art research and the usability of our collaborative interaction techniques.",
"title": ""
},
{
"docid": "34f7497eaae4a6b56089889781935263",
"text": "The research on two-wheeled inverted pendulum (T-WIP) mobile robots or commonly known as balancing robots have gained momentum over the last decade in a number of robotic laboratories around the world (Solerno & Angeles, 2003;Grasser et al., 2002; Solerno & Angeles, 2007;Koyanagi, Lida & Yuta, 1992;Ha & Yuta, 1996; Kim, Kim & Kwak, 2003). This chapter describes the hardware design of such a robot. The objective of the design is to develop a T-WIP mobile robot as well as MATLABTM interfacing configuration to be used as flexible platform which comprises of embedded unstable linear plant intended for research and teaching purposes. Issues such as selection of actuators and sensors, signal processing units, MATLABTM Real Time Workshop coding, modeling and control scheme is addressed and discussed. The system is then tested using a well-known state feedback controller to verify its functionality.",
"title": ""
}
] | scidocsrr |
095438f5ab742de58bfbc27df8cef909 | Topology-independent 3D garment fitting for virtual clothing | [
{
"docid": "5b8c43561c322a6e85fcffc6e4ca08db",
"text": "In this paper we address the problem of rapid distance computation between rigid objects and highly deformable objects, which is important in the context of physically based modeling of e.g hair or clothing. Our method is in particular useful when modeling deformable objects with particle systems—the most common approach to simulate such objects. We combine some already known techniques about distance fields into an algorithm for rapid collision detection. Only the rigid objects of an environment are represented by distance fields. In the context of proximity queries, which are essential for proper collision detection, this representation has two main advantages: First, any given boundary representation can be approximated quite easily, no high-degree polynomials or complicated approximation algorithms are needed. Second, the evaluation of distances and normals needed for collision response is extremely fast and independent of the complexity of the object. In the course of the paper we propose a simple, but fast algorithm for partial distance field computation. The sources are triangular meshes. Then, we present our approach for collision detection in detail. Examples from an interactive cloth animation system show the advantages of our approach in practice. We conclude that our method allows real-time animations of complex deformable objects in non-trivial environments on standard PC hardware.",
"title": ""
}
] | [
{
"docid": "962bc645d5a6a6644d28599948a18df0",
"text": "The demand for computer-assisted game study in sports is growing dramatically. This paper presents a practical video analysis system to facilitate semantic content understanding. A physics-based algorithm is designed for ball tracking and 3D trajectory reconstruction in basketball videos and shooting location statistics can be obtained. The 2D-to-3D inference is intrinsically a challenging problem due to the loss of 3D information in projection to 2D frames. One significant contribution of the proposed system lies in the integrated scheme incorporating domain knowledge and physical characteristics of ball motion into object tracking to overcome the problem of 2D-to-3D inference. With the 2D trajectory extracted and the camera parameters calibrated, physical characteristics of ball motion are involved to reconstruct the 3D trajectories and estimate the shooting locations. Our experiments on broadcast basketball videos show promising results. We believe the proposed system will greatly assist intelligence collection and statistics analysis in basketball games. 2008 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "fbc8d5a6a4299eaf0bf13d7d8c580bd1",
"text": "The lexical semantic system is an important component of human language and cognitive processing. One approach to modeling semantic knowledge makes use of hand-constructed networks or trees of interconnected word senses (Miller, Beckwith, Fellbaum, Gross, & Miller, 1990; Jarmasz & Szpakowicz, 2003). An alternative approach seeks to model word meanings as high-dimensional vectors, which are derived from the cooccurrence of words in unlabeled text corpora (Landauer & Dumais, 1997; Burgess & Lund, 1997a). This paper introduces a new vector-space method for deriving word-meanings from large corpora that was inspired by the HAL and LSA models, but which achieves better and more consistent results in predicting human similarity judgments. We explain the new model, known as COALS, and how it relates to prior methods, and then evaluate the various models on a range of tasks, including a novel set of semantic similarity ratings involving both semantically and morphologically related terms.",
"title": ""
},
{
"docid": "e35b4a46ccd73aa79246f09b86e01c24",
"text": "Emotion detection can considerably enhance our understanding of users’ emotional states. Understanding users’ emotions especially in a real-time setting can be pivotal in improving user interactions and understanding their preferences. In this paper, we propose a constraint optimization framework to discover emotions from social media content of the users. Our framework employs several novel constraints such as emotion bindings, topic correlations, along with specialized features proposed by prior work and well-established emotion lexicons. We propose an efficient inference algorithm and report promising empirical results on three diverse datasets.",
"title": ""
},
{
"docid": "6fd1e9896fc1aaa79c769bd600d9eac3",
"text": "In future planetary exploration missions, rovers will be required to autonomously traverse challenging environments. Much of the previous work in robot motion planning cannot be successfully applied to the rough-terrain planning problem. A model-based planning method is presented in this paper that is computationally efficient and takes into account uncertainty in the robot model, terrain model, range sensor data, and rover pathfollowing errors. It is based on rapid path planning through the visible terrain map with a simple graph-search algorithm, followed by a physics-based evaluation of the path with a rover model. Simulation results are presented which demonstrate the method’s effectiveness.",
"title": ""
},
{
"docid": "07f7a4fe69f6c4a1180cc3ca444a363a",
"text": "With the popularization of IoT (Internet of Things) devices and the continuous development of machine learning algorithms, learning-based IoT malicious traffic detection technologies have gradually matured. However, learning-based IoT traffic detection models are usually very vulnerable to adversarial samples. There is a great need for an automated testing framework to help security analysts to detect errors in learning-based IoT traffic detection systems. At present, most methods for generating adversarial samples require training parameters of known models and are only applicable to image data. To address the challenge, we propose a testing framework for learning-based IoT traffic detection systems, TLTD. By introducing genetic algorithms and some technical improvements, TLTD can generate adversarial samples for IoT traffic detection systems and can perform a black-box test on the systems.",
"title": ""
},
{
"docid": "540a6dd82c7764eedf99608359776e66",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/aea.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.",
"title": ""
},
{
"docid": "5a5b30b63944b92b168de7c17d5cdc5e",
"text": "We introduce the Densely Segmented Supermarket (D2S) dataset, a novel benchmark for instance-aware semantic segmentation in an industrial domain. It contains 21 000 high-resolution images with pixel-wise labels of all object instances. The objects comprise groceries and everyday products from 60 categories. The benchmark is designed such that it resembles the real-world setting of an automatic checkout, inventory, or warehouse system. The training images only contain objects of a single class on a homogeneous background, while the validation and test sets are much more complex and diverse. To further benchmark the robustness of instance segmentation methods, the scenes are acquired with different lightings, rotations, and backgrounds. We ensure that there are no ambiguities in the labels and that every instance is labeled comprehensively. The annotations are pixel-precise and allow using crops of single instances for articial data augmentation. The dataset covers several challenges highly relevant in the field, such as a limited amount of training data and a high diversity in the test and validation sets. The evaluation of state-of-the-art object detection and instance segmentation methods on D2S reveals significant room for improvement.",
"title": ""
},
{
"docid": "4b1bb1a79d755ea8ccd6f80a8e827b40",
"text": "This paper analyzes the problem of Gaussian process (GP) bandits with deterministic observations. The analysis uses a branch and bound algorithm that is related to the UCB algorithm of (Srinivas et al., 2010). For GPs with Gaussian observation noise, with variance strictly greater than zero, (Srinivas et al., 2010) proved that the regret vanishes at the approximate rate of O ( 1 √ t ) , where t is the number of observations. To complement their result, we attack the deterministic case and attain a much faster exponential convergence rate. Under some regularity assumptions, we show that the regret decreases asymptotically according to O ( e − τt (ln t)d/4 ) with high probability. Here, d is the dimension of the search space and τ is a constant that depends on the behaviour of the objective function near its global maximum.",
"title": ""
},
{
"docid": "902ca8c9a7cd8384143654ee302eca82",
"text": "The Paper presents the outlines of the Field Programmable Gate Array (FPGA) implementation of Real Time speech enhancement by Spectral Subtraction of acoustic noise using Dynamic Moving Average Method. It describes an stand alone algorithm for Speech Enhancement and presents a architecture for the implementation. The traditional Spectral Subtraction method can only suppress stationary acoustic noise from speech by subtracting the spectral noise bias calculated during non-speech activity, while adding the unique option of dynamic moving averaging to it, it can now periodically upgrade the estimation and cope up with changes in noise level. Signal to Noise Ratio (SNR) has been tested at different noisy environment and the improvement in SNR certifies the effectiveness of the algorithm. The FPGA implementation presented in this paper, works on streaming speech signals and can be used in factories, bus terminals, Cellular Phones, or in outdoor conferences where a large number of people have gathered. The Table in the Experimental Result section consolidates our claim of optimum resouce usage.",
"title": ""
},
{
"docid": "06d146f0f44775e05161a90a95f4eca9",
"text": "The authors discuss various filling agents currently available that can be used to augment the lips, correct perioral rhytides, and enhance overall lip appearance. Fillers are compared and information provided about choosing the appropriate agent based on the needs of each patient to achieve the much coveted \"pouty\" look while avoiding hypercorrection. The authors posit that the goal for the upper lip is to create a form that harmonizes with the patient's unique features, taking into account age and ethnicity; the goal for the lower lip is to create bulk, greater prominence, and projection of the vermillion.",
"title": ""
},
{
"docid": "87396c917dd760eddc2d16e27a71e81d",
"text": "We begin by distinguishing computationalism from a number of other theses that are sometimes conflated with it. We also distinguish between several important kinds of computation: computation in a generic sense, digital computation, and analog computation. Then, we defend a weak version of computationalism-neural processes are computations in the generic sense. After that, we reject on empirical grounds the common assimilation of neural computation to either analog or digital computation, concluding that neural computation is sui generis. Analog computation requires continuous signals; digital computation requires strings of digits. But current neuroscientific evidence indicates that typical neural signals, such as spike trains, are graded like continuous signals but are constituted by discrete functional elements (spikes); thus, typical neural signals are neither continuous signals nor strings of digits. It follows that neural computation is sui generis. Finally, we highlight three important consequences of a proper understanding of neural computation for the theory of cognition. First, understanding neural computation requires a specially designed mathematical theory (or theories) rather than the mathematical theories of analog or digital computation. Second, several popular views about neural computation turn out to be incorrect. Third, computational theories of cognition that rely on non-neural notions of computation ought to be replaced or reinterpreted in terms of neural computation.",
"title": ""
},
{
"docid": "3738d3c5d5bf4a3de55aa638adac07bb",
"text": "The term malware stands for malicious software. It is a program installed on a system without the knowledge of owner of the system. It is basically installed by the third party with the intention to steal some private data from the system or simply just to play pranks. This in turn threatens the computer’s security, wherein computer are used by one’s in day-to-day life as to deal with various necessities like education, communication, hospitals, banking, entertainment etc. Different traditional techniques are used to detect and defend these malwares like Antivirus Scanner (AVS), firewalls, etc. But today malware writers are one step forward towards then Malware detectors. Day-by-day they write new malwares, which become a great challenge for malware detectors. This paper focuses on basis study of malwares and various detection techniques which can be used to detect malwares.",
"title": ""
},
{
"docid": "0851caf6599f97bbeaf68b57e49b4da5",
"text": "Improving the quality of end-of-life care for hospitalized patients is a priority for healthcare organizations. Studies have shown that physicians tend to over-estimate prognoses, which in combination with treatment inertia results in a mismatch between patients wishes and actual care at the end of life. We describe a method to address this problem using Deep Learning and Electronic Health Record (EHR) data, which is currently being piloted, with Institutional Review Board approval, at an academic medical center. The EHR data of admitted patients are automatically evaluated by an algorithm, which brings patients who are likely to benefit from palliative care services to the attention of the Palliative Care team. The algorithm is a Deep Neural Network trained on the EHR data from previous years, to predict all-cause 3–12 month mortality of patients as a proxy for patients that could benefit from palliative care. Our predictions enable the Palliative Care team to take a proactive approach in reaching out to such patients, rather than relying on referrals from treating physicians, or conduct time consuming chart reviews of all patients. We also present a novel interpretation technique which we use to provide explanations of the model's predictions.",
"title": ""
},
{
"docid": "69460a225a498b96b2119f07beebbd29",
"text": "To eliminate the problems related to the Internet Protocol (IP) network, Multi-Protocol Label Switching (MPLS) networks the packets, they use label switching technology at the IP core routers to improve the routing mechanism and to make it more efficient. The developed protocol configure the data packet with fixed labels at the start and the at the end of the MPLS domain, it also allows a service provider to provide value added services like Virtual Private Network (VPNs), MPLS is faster than the standard method of routing and switching packets of the data. MPLS traffic engineering (MPLS TE) provides better utilization of network recourses, while MPLS offers VPN implementation and interconnected with other networks to gain secure and reliable communication, MPLS was improved to support routing functionality on conventional service provider IP Network. MPLS permit service providers to provide customer support services, and It naturally supports Quality of Service (QoS) by providing classification and marked package, avoid congestion, congestion management, Improve traffic, and Signaling. MPLS is not complex at all, and there is no need to any changed in the network structure because it uses one Unified Network Infrastructure. Also, no need to run Border Gateway Protocol (BGP) in the core of MPLS network, this will increase the efficiency of Internet Service Provider (ISP). Therefore MPLS provide the reliability of communication while reducing the delays and supporting the speed of the packet transfer. .",
"title": ""
},
{
"docid": "936cdd4b58881275485739518ccb4f85",
"text": "Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems — BN’s error increases rapidly when the batch size becomes smaller, caused by inaccurate batch statistics estimation. This limits BN’s usage for training larger models and transferring features to computer vision tasks including detection, segmentation, and video, which require small batches constrained by memory consumption. In this paper, we present Group Normalization (GN) as a simple alternative to BN. GN divides the channels into groups and computes within each group the mean and variance for normalization. GN’s computation is independent of batch sizes, and its accuracy is stable in a wide range of batch sizes. On ResNet-50 trained in ImageNet, GN has 10.6% lower error than its BN counterpart when using a batch size of 2; when using typical batch sizes, GN is comparably good with BN and outperforms other normalization variants. Moreover, GN can be naturally transferred from pre-training to fine-tuning. GN can outperform its BN-based counterparts for object detection and segmentation in COCO, and for video classification in Kinetics, showing that GN can effectively replace the powerful BN in a variety of tasks. GN can be easily implemented by a few lines of code.",
"title": ""
},
{
"docid": "23c71e8893fceed8c13bf2fc64452bc2",
"text": "Variable stiffness actuators (VSAs) are complex mechatronic devices that are developed to build passively compliant, robust, and dexterous robots. Numerous different hardware designs have been developed in the past two decades to address various demands on their functionality. This review paper gives a guide to the design process from the analysis of the desired tasks identifying the relevant attributes and their influence on the selection of different components such as motors, sensors, and springs. The influence on the performance of different principles to generate the passive compliance and the variation of the stiffness are investigated. Furthermore, the design contradictions during the engineering process are explained in order to find the best suiting solution for the given purpose. With this in mind, the topics of output power, potential energy capacity, stiffness range, efficiency, and accuracy are discussed. Finally, the dependencies of control, models, sensor setup, and sensor quality are addressed.",
"title": ""
},
{
"docid": "7b68933da1bedbc89ebe1fb8b1ca96c4",
"text": "PATIENT\nMale, 0 FINAL DIAGNOSIS: Pallister-Killian syndrome Symptoms: Decidious tooth • flattened nasal bridge • frontal bossing • grooved palate • low-set ears • mid-facial hypoplasia • nuchal fold thickening • right inquinal testis • shortened upper extremities • undescended left intraabdominal testis • widely spaced nipples\n\n\nMEDICATION\n- Clinical Procedure: - Specialty: Pediatrics and Neonatology.\n\n\nOBJECTIVE\nCongenital defects/diseases.\n\n\nBACKGROUND\nPallister-Killian syndrome (PKS) is a rare, sporadic, polydysmorphic condition that often has highly distinctive features. The clinical features are highly variable, ranging from mild to severe intellectual disability and birth defects. We here report the first case of PKS diagnosed at our institution in a patient in the second trimester of pregnancy.\n\n\nCASE REPORT\nA pregnant 43-year-old woman presented for genetic counseling secondary to advanced maternal age and an increased risk for Down syndrome. Ultrasound showed increased fetal nuchal fold thickness, short limbs, polyhydramnios, and a small stomach. The ultrasound evaluation was compromised due to the patient's body habitus. The patient subsequently underwent amniocentesis and the karyotype revealed the presence of an isochromosome in the short arm of chromosome 12 consistent with the diagnosis of Pallister-Killian syndrome. Postnatally, the infant showed frontal bossing, a flattened nasal bridge, mid-facial hypoplasia, low-set ears, a right upper deciduous tooth, grooved palate, nuchal fold thickening, widely spaced nipples, left ulnar polydactyly, simian creases, flexion contractures of the right middle finger, shortened upper extremities, undescended left intraabdominal testis, and right inguinal testis.\n\n\nCONCLUSIONS\nThe occurrence of PKS is sporadic in nature, but prenatal diagnosis is possible.",
"title": ""
},
{
"docid": "c4b4c647e13d0300845bed2b85c13a3c",
"text": "Several end-to-end deep learning approaches have been recently presented which extract either audio or visual features from the input images or audio signals and perform speech recognition. However, research on end-to-end audiovisual models is very limited. In this work, we present an end-to-end audiovisual model based on residual networks and Bidirectional Gated Recurrent Units (BGRUs). To the best of our knowledge, this is the first audiovisual fusion model which simultaneously learns to extract features directly from the image pixels and audio waveforms and performs within-context word recognition on a large publicly available dataset (LRW). The model consists of two streams, one for each modality, which extract features directly from mouth regions and raw waveforms. The temporal dynamics in each stream/modality are modeled by a 2-layer BGRU and the fusion of multiple streams/modalities takes place via another 2-layer BGRU. A slight improvement in the classification rate over an end-to-end audio-only and MFCC-based model is reported in clean audio conditions and low levels of noise. In presence of high levels of noise, the end-to-end audiovisual model significantly outperforms both audio-only models.",
"title": ""
},
{
"docid": "8d2b28892efc5cf4ab228fc599f5e91f",
"text": "Will reading habit influence your life? Many say yes. Reading cooperative control of distributed multi agent systems is a good habit; you can develop this habit to be such interesting way. Yeah, reading habit will not only make you have any favourite activity. It will be one of guidance of your life. When reading has become a habit, you will not make it as disturbing activities or as boring activity. You can gain many benefits and importances of reading.",
"title": ""
},
{
"docid": "93f89a636828df50dfe48ffa3e868ea6",
"text": "The reparameterization trick enables the optimization of large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack continuous reparameterizations due to the discontinuous nature of discrete states. In this work we introduce concrete random variables – continuous relaxations of discrete random variables. The concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-likelihood of latent stochastic nodes) on the corresponding discrete graph. We demonstrate their effectiveness on density estimation and structured prediction tasks using neural networks.",
"title": ""
}
] | scidocsrr |
931d129c91a8a84ef68653fc27a5f21d | Named entity recognition in query | [
{
"docid": "419c721c2d0a269c65fae59c1bdb273c",
"text": "Previous work on understanding user web search behavior has focused on how people search and what they are searching for, but not why they are searching. In this paper, we describe a framework for understanding the underlying goals of user searches, and our experience in using the framework to manually classify queries from a web search engine. Our analysis suggests that so-called navigational\" searches are less prevalent than generally believed while a previously unexplored \"resource-seeking\" goal may account for a large fraction of web searches. We also illustrate how this knowledge of user search goals might be used to improve future web search engines.",
"title": ""
}
] | [
{
"docid": "758a922ccba0fc70574af94de5a4c2d9",
"text": "We study unsupervised learning by developing a generative model built from progressively learned deep convolutional neural networks. The resulting generator is additionally a discriminator, capable of \"introspection\" in a sense — being able to self-evaluate the difference between its generated samples and the given training data. Through repeated discriminative learning, desirable properties of modern discriminative classifiers are directly inherited by the generator. Specifically, our model learns a sequence of CNN classifiers using a synthesis-by-classification algorithm. In the experiments, we observe encouraging results on a number of applications including texture modeling, artistic style transferring, face modeling, and unsupervised feature learning.",
"title": ""
},
{
"docid": "36e3489f2d144be867fa4f2ff05324d4",
"text": "Sentiment classification of Twitter data has been successfully applied in finding predictions in a variety of domains. However, using sentiment classification to predict stock market variables is still challenging and ongoing research. The main objective of this study is to compare the overall accuracy of two machine learning techniques (logistic regression and neural network) with respect to providing a positive, negative and neutral sentiment for stock-related tweets. Both classifiers are compared using Bigram term frequency (TF) and Unigram term frequency - inverse document term frequency (TF-IDF) weighting schemes. Classifiers are trained using a dataset that contains 42,000 automatically annotated tweets. The training dataset forms positive, negative and neutral tweets covering four technology-related stocks (Twitter, Google, Facebook, and Tesla) collected using Twitter Search API. Classifiers give the same results in terms of overall accuracy (58%). However, empirical experiments show that using Unigram TF-IDF outperforms TF.",
"title": ""
},
{
"docid": "d0c8e58e06037d065944fc59b0bd7a74",
"text": "We propose a new discrete choice model that generalizes the random utility model (RUM). We show that this model, called the Generalized Stochastic Preference (GSP) model can explain several choice phenomena that can’t be represented by a RUM. In particular, the model can easily (and also exactly) replicate some well known examples that are not RUM, as well as controlled choice experiments carried out since 1980’s that possess strong regularity violations. One of such regularity violation is the decoy effect in which the probability of choosing a product increases when a similar, but inferior product is added to the choice set. An appealing feature of the GSP is that it is non-parametric and therefore it has very high flexibility. The model has also a simple description and interpretation: it builds upon the well known representation of RUM as a stochastic preference, by allowing some additional consumer types to be non-rational.",
"title": ""
},
{
"docid": "33b405dbbe291f6ba004fa6192501861",
"text": "A quasi-static analysis of an open-ended coaxial line terminated by a semi-infinite medium on ground plane is presented in this paper. The analysis is based on a vtiriation formulation of the problem. A comparison of results obtained by this method with the experimental and the other theoretical approaches shows an excellent agreement. This analysis is expected to be helpful in the inverse problem of calculating the pertnittivity of materials in oico for a given iuput impedance of the coaxial line.",
"title": ""
},
{
"docid": "369cdea246738d5504669e2f9581ae70",
"text": "Content Security Policy (CSP) is an emerging W3C standard introduced to mitigate the impact of content injection vulnerabilities on websites. We perform a systematic, large-scale analysis of four key aspects that impact on the effectiveness of CSP: browser support, website adoption, correct configuration and constant maintenance. While browser support is largely satisfactory, with the exception of few notable issues, our analysis unveils several shortcomings relative to the other three aspects. CSP appears to have a rather limited deployment as yet and, more crucially, existing policies exhibit a number of weaknesses and misconfiguration errors. Moreover, content security policies are not regularly updated to ban insecure practices and remove unintended security violations. We argue that many of these problems can be fixed by better exploiting the monitoring facilities of CSP, while other issues deserve additional research, being more rooted into the CSP design.",
"title": ""
},
{
"docid": "c776fccb35d9aa43e965604573156c6a",
"text": "BACKGROUND\nMalnutrition in children is a major public health concern. This study aimed to determine the association between dietary diversity and stunting, underweight, wasting, and diarrhea and that between consumption of each specific food group and these nutritional and health outcomes among children.\n\n\nMETHODS\nA nationally representative household survey of 6209 children aged 12 to 59 months was conducted in Cambodia. We examined the consumption of food in the 24 hours before the survey and stunting, underweight, wasting, and diarrhea that had occurred in the preceding 2 weeks. A food variety score (ranging from 0 to 9) was calculated to represent dietary diversity.\n\n\nRESULTS\nStunting was negatively associated with dietary diversity (adjusted odd ratios [ORadj] 0.95, 95% confident interval [CI] 0.91-0.99, P = 0.01) after adjusting for socioeconomic and geographical factors. Consumption of animal source foods was associated with reduced risk of stunting (ORadj 0.69, 95% CI 0.54-0.89, P < 0.01) and underweight (ORadj 0.74, 95% CI 0.57-0.96, P = 0.03). On the other hand, the higher risk of diarrhea was significantly associated with consumption of milk products (ORadj 1.46, 95% CI 1.10-1.92, P = 0.02) and it was significantly pronounced among children from the poorer households (ORadj 1.85, 95% CI 1.17-2.93, P < 0.01).\n\n\nCONCLUSIONS\nConsumption of a diverse diet was associated with a reduction in stunting. In addition to dietary diversity, animal source food was a protective factor of stunting and underweight. Consumption of milk products was associated with an increase in the risk of diarrhea, particularly among the poorer households. Both dietary diversity and specific food types are important considerations of dietary recommendation.",
"title": ""
},
{
"docid": "aab5aaf24c421cc75fce9b657a886ab4",
"text": "This study aimed to identify the similarities and differences among half-marathon runners in relation to their performance level. Forty-eight male runners were classified into 4 groups according to their performance level in a half-marathon (min): Group 1 (n = 11, < 70 min), Group 2 (n = 13, < 80 min), Group 3 (n = 13, < 90 min), Group 4 (n = 11, < 105 min). In two separate sessions, training-related, anthropometric, physiological, foot strike pattern and spatio-temporal variables were recorded. Significant differences (p<0.05) between groups (ES = 0.55-3.16) and correlations with performance were obtained (r = 0.34-0.92) in training-related (experience and running distance per week), anthropometric (mass, body mass index and sum of 6 skinfolds), physiological (VO2max, RCT and running economy), foot strike pattern and spatio-temporal variables (contact time, step rate and length). At standardized submaximal speeds (11, 13 and 15 km·h-1), no significant differences between groups were observed in step rate and length, neither in contact time when foot strike pattern was taken into account. In conclusion, apart from training-related, anthropometric and physiological variables, foot strike pattern and step length were the only biomechanical variables sensitive to half-marathon performance, which are essential to achieve high running speeds. However, when foot strike pattern and running speeds were controlled (submaximal test), the spatio-temporal variables were similar. This indicates that foot strike pattern and running speed are responsible for spatio-temporal differences among runners of different performance level.",
"title": ""
},
{
"docid": "0946b5cb25e69f86b074ba6d736cd50f",
"text": "Increase of malware and advanced cyber-attacks are now becoming a serious problem. Unknown malware which has not determined by security vendors is often used in these attacks, and it is becoming difficult to protect terminals from their infection. Therefore, a countermeasure for after infection is required. There are some malware infection detection methods which focus on the traffic data comes from malware. However, it is difficult to perfectly detect infection only using traffic data because it imitates benign traffic. In this paper, we propose malware process detection method based on process behavior in possible infected terminals. In proposal, we investigated stepwise application of Deep Neural Networks to classify malware process. First, we train the Recurrent Neural Network (RNN) to extract features of process behavior. Second, we train the Convolutional Neural Network (CNN) to classify feature images which are generated by the extracted features from the trained RNN. The evaluation result in several image size by comparing the AUC of obtained ROC curves and we obtained AUC= 0:96 in best case.",
"title": ""
},
{
"docid": "4874f55e577bea77deed2750a9a73b30",
"text": "Best practice exemplars suggest that digital platforms play a critical role in managing supply chain activities and partnerships that generate perjormance gains for firms. However, there is Umited academic investigation on how and why information technology can create performance gains for firms in a supply chain management (SCM) context. Grant's (1996) theoretical notion of higher-order capabilities and a hierarchy of capabilities has been used in recent information systems research by Barua et al. (2004). Sambamurthy et al. (2003), and Mithas et al. (2004) to reframe the conversation from the direct performance impacts of IT resources and investments to how and why IT shapes higher-order proeess capabilities that ereate performance gains for firms. We draw on the emerging IT-enabled organizational capabilities perspective to suggest that firms that develop IT infrastrueture integration for SCM and leverage it to create a higher-order supply chain integration capability generate significant and sustainable performance gains. A research model is developed to investigate the hierarchy oflT-related capabilities and their impaet on firm performance. Data were collected from } 10 supply chain and logisties managers in manufacturing and retail organizations. Our results suggest that integrated IT infrastructures enable firms to develop the higher-order capability of supply chain process integration. This eapability enables firms to unbundle information flows from physical flows, and to share information with their supply chain partners to create information-based approaches for superior demand planning, for the staging and movement of physical products, and for streamlining voluminous and complex financial work processes. Furthermore. IT-enabled supply chain integration capability results in significant and sustained firm performance gains, especially in operational excellence and revenue growth. Managerial",
"title": ""
},
{
"docid": "ba3e9746291c2a355321125093b41c88",
"text": "Sentiment analysis of microblogs such as Twitter has recently gained a fair amount of attention. One of the simplest sentiment analysis approaches compares the words of a posting against a labeled word list, where each word has been scored for valence, — a “sentiment lexicon” or “affective word lists”. There exist several affective word lists, e.g., ANEW (Affective Norms for English Words) developed before the advent of microblogging and sentiment analysis. I wanted to examine how well ANEW and other word lists performs for the detection of sentiment strength in microblog posts in comparison with a new word list specifically constructed for microblogs. I used manually labeled postings from Twitter scored for sentiment. Using a simple word matching I show that the new word list may perform better than ANEW, though not as good as the more elaborate approach found in SentiStrength.",
"title": ""
},
{
"docid": "f119b0ee9a237ab1e9acdae19664df0f",
"text": "Recent editorials in this journal have defended the right of eminent biologist James Watson to raise the unpopular hypothesis that people of sub-Saharan African descent score lower, on average, than people of European or East Asian descent on tests of general intelligence. As those editorials imply, the scientific evidence is substantial in showing a genetic contribution to these differences. The unjustified ill treatment meted out to Watson therefore requires setting the record straight about the current state of the evidence on intelligence, race, and genetics. In this paper, we summarize our own previous reviews based on 10 categories of evidence: The worldwide distribution of test scores; the g factor of mental ability; heritability differences; brain size differences; trans-racial adoption studies; racial admixture studies; regression-to-the-mean effects; related life-history traits; human origins research; and the poverty of predictions from culture-only explanations. The preponderance of evidence demonstrates that in intelligence, brain size, and other life-history variables, East Asians average a higher IQ and larger brain than Europeans who average a higher IQ and larger brain than Africans. Further, these group differences are 50–80% heritable. These are facts, not opinions and science must be governed by data. There is no place for the ‘‘moralistic fallacy’’ that reality must conform to our social, political, or ethical desires. !c 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "afec537d95b185d8bda4e0e48799bfd3",
"text": "We propose a method for optimizing an acoustic feature extractor for anomalous sound detection (ASD). Most ASD systems adopt outlier-detection techniques because it is difficult to collect a massive amount of anomalous sound data. To improve the performance of such outlier-detection-based ASD, it is essential to extract a set of efficient acoustic features that is suitable for identifying anomalous sounds. However, the ideal property of a set of acoustic features that maximizes ASD performance has not been clarified. By considering outlier-detection-based ASD as a statistical hypothesis test, we defined optimality as an objective function that adopts Neyman-Pearson lemma; the acoustic feature extractor is optimized to extract a set of acoustic features which maximize the true positive rate under an arbitrary false positive rate. The variational auto-encoder is applied as an acoustic feature extractor and optimized to maximize the objective function. We confirmed that the proposed method improved the F-measure score from 0.02 to 0.06 points compared to those of conventional methods, and ASD results of a stereolithography 3D-printer in a real-environment show that the proposed method is effective in identifying anomalous sounds.",
"title": ""
},
{
"docid": "ab4cada23ae2142e52c98a271c128c58",
"text": "We introduce an interactive technique for manipulating simple 3D shapes based on extracting them from a single photograph. Such extraction requires understanding of the components of the shape, their projections, and relations. These simple cognitive tasks for humans are particularly difficult for automatic algorithms. Thus, our approach combines the cognitive abilities of humans with the computational accuracy of the machine to solve this problem. Our technique provides the user the means to quickly create editable 3D parts---human assistance implicitly segments a complex object into its components, and positions them in space. In our interface, three strokes are used to generate a 3D component that snaps to the shape's outline in the photograph, where each stroke defines one dimension of the component. The computer reshapes the component to fit the image of the object in the photograph as well as to satisfy various inferred geometric constraints imposed by its global 3D structure. We show that with this intelligent interactive modeling tool, the daunting task of object extraction is made simple. Once the 3D object has been extracted, it can be quickly edited and placed back into photos or 3D scenes, permitting object-driven photo editing tasks which are impossible to perform in image-space. We show several examples and present a user study illustrating the usefulness of our technique.",
"title": ""
},
{
"docid": "f27547cfee95505fe8a2f44f845ddaed",
"text": "High-performance, two-dimensional arrays of parallel-addressed InGaN blue micro-light-emitting diodes (LEDs) with individual element diameters of 8, 12, and 20 /spl mu/m, respectively, and overall dimensions 490 /spl times/490 /spl mu/m, have been fabricated. In order to overcome the difficulty of interconnecting multiple device elements with sufficient step-height coverage for contact metallization, a novel scheme involving the etching of sloped-sidewalls has been developed. The devices have current-voltage (I-V) characteristics approaching those of broad-area reference LEDs fabricated from the same wafer, and give comparable (3-mW) light output in the forward direction to the reference LEDs, despite much lower active area. The external efficiencies of the micro-LED arrays improve as the dimensions of the individual elements are scaled down. This is attributed to scattering at the etched sidewalls of in-plane propagating photons into the forward direction.",
"title": ""
},
{
"docid": "f0f88be4a2b7619f6fb5cdcca1741d1f",
"text": "BACKGROUND\nThere is no evidence from randomized trials to support a strategy of lowering systolic blood pressure below 135 to 140 mm Hg in persons with type 2 diabetes mellitus. We investigated whether therapy targeting normal systolic pressure (i.e., <120 mm Hg) reduces major cardiovascular events in participants with type 2 diabetes at high risk for cardiovascular events.\n\n\nMETHODS\nA total of 4733 participants with type 2 diabetes were randomly assigned to intensive therapy, targeting a systolic pressure of less than 120 mm Hg, or standard therapy, targeting a systolic pressure of less than 140 mm Hg. The primary composite outcome was nonfatal myocardial infarction, nonfatal stroke, or death from cardiovascular causes. The mean follow-up was 4.7 years.\n\n\nRESULTS\nAfter 1 year, the mean systolic blood pressure was 119.3 mm Hg in the intensive-therapy group and 133.5 mm Hg in the standard-therapy group. The annual rate of the primary outcome was 1.87% in the intensive-therapy group and 2.09% in the standard-therapy group (hazard ratio with intensive therapy, 0.88; 95% confidence interval [CI], 0.73 to 1.06; P=0.20). The annual rates of death from any cause were 1.28% and 1.19% in the two groups, respectively (hazard ratio, 1.07; 95% CI, 0.85 to 1.35; P=0.55). The annual rates of stroke, a prespecified secondary outcome, were 0.32% and 0.53% in the two groups, respectively (hazard ratio, 0.59; 95% CI, 0.39 to 0.89; P=0.01). Serious adverse events attributed to antihypertensive treatment occurred in 77 of the 2362 participants in the intensive-therapy group (3.3%) and 30 of the 2371 participants in the standard-therapy group (1.3%) (P<0.001).\n\n\nCONCLUSIONS\nIn patients with type 2 diabetes at high risk for cardiovascular events, targeting a systolic blood pressure of less than 120 mm Hg, as compared with less than 140 mm Hg, did not reduce the rate of a composite outcome of fatal and nonfatal major cardiovascular events. (ClinicalTrials.gov number, NCT00000620.)",
"title": ""
},
{
"docid": "66127055aff890d3f3f9d40bd1875980",
"text": "A simple, but comprehensive model of heat transfer and solidification of the continuous casting of steel slabs is described, including phenomena in the mold and spray regions. The model includes a one-dimensional (1-D) transient finite-difference calculation of heat conduction within the solidifying steel shell coupled with two-dimensional (2-D) steady-state heat conduction within the mold wall. The model features a detailed treatment of the interfacial gap between the shell and mold, including mass and momentum balances on the solid and liquid interfacial slag layers, and the effect of oscillation marks. The model predicts the shell thickness, temperature distributions in the mold and shell, thickness of the resolidified and liquid powder layers, heat-flux profiles down the wide and narrow faces, mold water temperature rise, ideal taper of the mold walls, and other related phenomena. The important effect of the nonuniform distribution of superheat is incorporated using the results from previous threedimensional (3-D) turbulent fluid-flow calculations within the liquid pool. The FORTRAN program CONID has a user-friendly interface and executes in less than 1 minute on a personal computer. Calibration of the model with several different experimental measurements on operating slab casters is presented along with several example applications. In particular, the model demonstrates that the increase in heat flux throughout the mold at higher casting speeds is caused by two combined effects: a thinner interfacial gap near the top of the mold and a thinner shell toward the bottom. This modeling tool can be applied to a wide range of practical problems in continuous casters.",
"title": ""
},
{
"docid": "5491c265a1eb7166bb174097b49d258e",
"text": "The importance of service quality for business performance has been recognized in the literature through the direct effect on customer satisfaction and the indirect effect on customer loyalty. The main objective of the study was to measure hotels' service quality performance from the customer perspective. To do so, a performance-only measurement scale (SERVPERF) was administered to customers stayed in three, four and five star hotels in Aqaba and Petra. Although the importance of service quality and service quality measurement has been recognized, there has been limited research that has addressed the structure and antecedents of the concept for the hotel industry. The clarification of the dimensions is important for managers in the hotel industry as it identifies the bundles of service attributes consumers find important. The results of the study demonstrate that SERVPERF is a reliable and valid tool to measure service quality in the hotel industry. The instrument consists of five dimensions, namely \"tangibles\", \"responsiveness\", \"empathy\", \"assurance\" and \"reliability\". Hotel customers are expecting more improved services from the hotels in all service quality dimensions. However, hotel customers have the lowest perception scores on empathy and tangibles. In the light of the results, possible managerial implications are discussed and future research subjects are recommended.",
"title": ""
},
{
"docid": "e2de8284e14cb3abbd6e3fbcfb5bc091",
"text": "In this paper, novel 2 one-dimensional (1D) Haar-like filtering techniques are proposed as a new and low calculation cost feature extraction method suitable for 3D acceleration signals based human activity recognition. Proposed filtering method is a simple difference filter with variable filter parameters. Our method holds a strong adaptability to various classification problems which no previously studied features (mean, standard deviation, etc.) possessed. In our experiment on human activity recognition, the proposed method achieved both the highest recognition accuracy of 93.91% while reducing calculation cost to 21.22% compared to previous method.",
"title": ""
},
{
"docid": "9415adaa3ec2f7873a23cc2017a2f1ee",
"text": "In this paper we introduce a new unsupervised reinforcement learning method for discovering the set of intrinsic options available to an agent. This set is learned by maximizing the number of different states an agent can reliably reach, as measured by the mutual information between the set of options and option termination states. To this end, we instantiate two policy gradient based algorithms, one that creates an explicit embedding space of options and one that represents options implicitly. The algorithms also provide an explicit measure of empowerment in a given state that can be used by an empowerment maximizing agent. The algorithm scales well with function approximation and we demonstrate the applicability of the algorithm on a range of tasks.",
"title": ""
},
{
"docid": "bf00f7d7cdcbdc3e9d082bf92eec075c",
"text": "Network software is a critical component of any distributed system. Because of its complexity, network software is commonly layered into a hierarchy of protocols, or more generally, into a protocol graph. Typical protocol graphs—including those standardized in the ISO and TCP/IP network architectures—share three important properties; the protocol graph is simple, the nodes of the graph (protocols) encapsulate complex functionality, and the topology of the graph is relatively static. This paper describes a new way to organize network software that differs from conventional architectures in all three of these properties. In our approach, the protocol graph is complex, individual protocols encapsulate a single function, and the topology of the graph is dynamic. The main contribution of this paper is to describe the ideas behind our new architecture, illustrate the advantages of using the architecture, and demonstrate that the architecture results in efficient network software.",
"title": ""
}
] | scidocsrr |
66288ac8ed76e5a13886c97d89aba672 | Diversity, Serendipity, Novelty, and Coverage: A Survey and Empirical Analysis of Beyond-Accuracy Objectives in Recommender Systems | [
{
"docid": "7d4fa882673f142c4faa8a4ff3c2a205",
"text": "This paper presents a different perspective on diversity in search results: diversity by proportionality. We consider a result list most diverse, with respect to some set of topics related to the query, when the number of documents it provides on each topic is proportional to the topic's popularity. Consequently, we propose a framework for optimizing proportionality for search result diversification, which is motivated by the problem of assigning seats to members of competing political parties. Our technique iteratively determines, for each position in the result ranked list, the topic that best maintains the overall proportionality. It then selects the best document on this topic for this position. We demonstrate empirically that our method significantly outperforms the top performing approach in the literature not only on our proposed metric for proportionality, but also on several standard diversity measures. This result indicates that promoting proportionality naturally leads to minimal redundancy, which is a goal of the current diversity approaches.",
"title": ""
},
{
"docid": "b796a957545aa046bad14d44c4578700",
"text": "Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at k of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method both outperforms several baseline methods and, in comparison to them, is faster and consumes less memory. We also demonstrate how our method learns an interpretable model, where annotations with alternate spellings or even languages are close in the embedding space. Hence, even when our model does not predict the exact annotation given by a human labeler, it often predicts similar annotations, a fact that we try to quantify by measuring the newly introduced “sibling” precision metric, where our method also obtains excellent results.",
"title": ""
},
{
"docid": "1a968e8cf7c35cc6ed36de0a8cccd9f0",
"text": "Random walks have been successfully used to measure user or object similarities in collaborative filtering (CF) recommender systems, which is of high accuracy but low diversity. A key challenge of a CF system is that the reliably accurate results are obtained with the help of peers' recommendation, but the most useful individual recommendations are hard to be found among diverse niche objects. In this paper we investigate the direction effect of the random walk on user similarity measurements and find that the user similarity, calculated by directed random walks, is reverse to the initial node's degree. Since the ratio of small-degree users to large-degree users is very large in real data sets, the large-degree users' selections are recommended extensively by traditional CF algorithms. By tuning the user similarity direction from neighbors to the target user, we introduce a new algorithm specifically to address the challenge of diversity of CF and show how it can be used to solve the accuracy-diversity dilemma. Without relying on any context-specific information, we are able to obtain accurate and diverse recommendations, which outperforms the state-of-the-art CF methods. This work suggests that the random-walk direction is an important factor to improve the personalized recommendation performance.",
"title": ""
}
] | [
{
"docid": "79c7bf1036877ca867da7595e8cef6e2",
"text": "A two-process theory of human information processing is proposed and applied to detection, search, and attention phenomena. Automatic processing is activation of a learned sequence of elements in long-term memory that is initiated by appropriate inputs and then proceeds automatically—without subject control, without stressing the capacity limitations of the system, and without necessarily demanding attention. Controlled processing is a temporary activation of a sequence of elements that can be set up quickly and easily but requires attention, is capacity-limited (usually serial in nature), and is controlled by the subject. A series of studies using both reaction time and accuracy measures is presented, which traces these concepts in the form of automatic detection and controlled, search through the areas of detection, search, and attention. Results in these areas are shown to arise from common mechanisms. Automatic detection is shown to develop following consistent mapping of stimuli to responses over trials. Controlled search is utilized in varied-mapping paradigms, and in our studies, it takes the form of serial, terminating search. The approach resolves a number of apparent conflicts in the literature.",
"title": ""
},
{
"docid": "ebc107147884d89da4ef04eba2d53a73",
"text": "Twitter sentiment analysis (TSA) has become a hot research topic in recent years. The goal of this task is to discover the attitude or opinion of the tweets, which is typically formulated as a machine learning based text classification problem. Some methods use manually labeled data to train fully supervised models, while others use some noisy labels, such as emoticons and hashtags, for model training. In general, we can only get a limited number of training data for the fully supervised models because it is very labor-intensive and time-consuming to manually label the tweets. As for the models with noisy labels, it is hard for them to achieve satisfactory performance due to the noise in the labels although it is easy to get a large amount of data for training. Hence, the best strategy is to utilize both manually labeled data and noisy labeled data for training. However, how to seamlessly integrate these two different kinds of data into the same learning framework is still a challenge. In this paper, we present a novel model, called emoticon smoothed language model (ESLAM), to handle this challenge. The basic idea is to train a language model based on the manually labeled data, and then use the noisy emoticon data for smoothing. Experiments on real data sets demonstrate that ESLAM can effectively integrate both kinds of data to outperform those methods using only one of them.",
"title": ""
},
{
"docid": "cd71e990546785bd9ba0c89620beb8d2",
"text": "Crime is one of the most predominant and alarming aspects in our society and its prevention is a vital task. Crime analysis is a systematic way of detecting and investigating patterns and trends in crime. In this work, we use various clustering approaches of data mining to analyse the crime data of Tamilnadu. The crime data is extracted from National Crime Records Bureau (NCRB) of India. It consists of crime information about six cities namely Chennai, Coimbatore, Salem, Madurai, Thirunelvelli and Thiruchirapalli from the year 2000–2014 with 1760 instances and 9 attributes to represent the instances. K-Means clustering, Agglomerative clustering and Density Based Spatial Clustering with Noise (DBSCAN) algorithms are used to cluster crime activities based on some predefined cases and the results of these clustering are compared to find the best suitable clustering algorithm for crime detection. The result of K-Means clustering algorithm is visualized using Google Map for interactive and easy understanding. The K-Nearest Neighbor (KNN) classification is used for crime prediction. The performance of each clustering algorithms are evaluated using the metrics such as precision, recall and F-measure, and the results are compared. This work helps the law enforcement agencies to predict and detect crimes in Tamilnadu with improved accuracy and thus reduces the crime rate.",
"title": ""
},
{
"docid": "a531694dba7fc479b43d0725bc68de15",
"text": "This paper gives an introduction to the essential challenges of software engineering and requirements that software has to fulfill in the domain of automation. Besides, the functional characteristics, specific constraints and circumstances are considered for deriving requirements concerning usability, the technical process, the automation functions, used platform and the well-established models, which are described in detail. On the other hand, challenges result from the circumstances at different points in the single phases of the life cycle of the automated system. The requirements for life-cycle-management, tools and the changeability during runtime are described in detail.",
"title": ""
},
{
"docid": "27745116e5c05802bda2bc6dc548cce6",
"text": "Recently, many researchers have attempted to classify Facial Attributes (FAs) by representing characteristics of FAs such as attractiveness, age, smiling and so on. In this context, recent studies have demonstrated that visual FAs are a strong background for many applications such as face verification, face search and so on. However, Facial Attribute Classification (FAC) in a wide range of attributes based on the regression representation -predicting of FAs as real-valued labelsis still a significant challenge in computer vision and psychology. In this paper, a regression model formulation is proposed for FAC in a wide range of FAs (e.g. 73 FAs). The proposed method accommodates real-valued scores to the probability of what percentage of the given FAs is present in the input image. To this end, two simultaneous dictionary learning methods are proposed to learn the regression and identity feature dictionaries simultaneously. Accordingly, a multi-level feature extraction is proposed for FAC. Then, four regression classification methods are proposed using a regression model formulated based on dictionary learning, SRC and CRC. Convincing results are",
"title": ""
},
{
"docid": "370054a58b8f50719106508b138bd095",
"text": "In-network aggregation has been proposed as one method for reducing energy consumption in sensor networks. In this paper, we explore two ideas related to further reducing energy consumption in the context of in-network aggregation. The first is by influencing the construction of the routing trees for sensor networks with the goal of reducing the size of transmitted data. To this end, we propose a group-aware network configuration method that “clusters” along the same path sensor nodes that belong to the same group. The second idea involves imposing a hierarchy of output filters on the sensor network with the goal of both reducing the size of transmitted data and minimizing the number of transmitted messages. More specifically, we propose a framework to use temporal coherency tolerances in conjunction with in-network aggregation to save energy at the sensor nodes while maintaining specified quality of data. These tolerances are based on user preferences or can be dictated by the network in cases where the network cannot support the current tolerance level. Our framework, called TiNA, works on top of existing in-network aggregation schemes. We evaluate experimentally our proposed schemes in the context of existing in-network aggregation schemes. We present experimental results measuring energy consumption, response time, and quality of data for Group-By queries. Overall, our schemes provide significant energy savings with respect to communication and a negligible drop in quality of data.",
"title": ""
},
{
"docid": "6be2ecf9323b04c5e93276c9a4ca4b96",
"text": "A printed wide-slot antenna for wideband applications is proposed and experimentally investigated in this communication. A modified L-shaped microstrip line is used to excite the square slot. It consists of a horizontal line, a square patch, and a vertical line. For comparison, a simple L-shaped feed structure with the same line width is used as a reference geometry. The reference antenna exhibits dual resonance (lower resonant frequency <i>f</i><sub>1</sub>, upper resonant frequency <i>f</i><sub>2</sub>). When the square patch is embedded in the middle of the L-shaped line, <i>f</i><sub>1</sub> decreases, <i>f</i><sub>2</sub> remains unchanged, and a new resonance mode is formed between <i>f</i><sub>1</sub> and <i>f</i><sub>2</sub> . Moreover, if the size of the square patch is increased, an additional (fourth) resonance mode is formed above <i>f</i><sub>2</sub>. Thus, the bandwidth of a slot antenna is easily enhanced. The measured results indicate that this structure possesses a wide impedance bandwidth of 118.4%, which is nearly three times that of the reference antenna. Also, a stable radiation pattern is observed inside the operating bandwidth. The gain variation is found to be less than 1.7 dB.",
"title": ""
},
{
"docid": "0a97c254e5218637235a7e23597f572b",
"text": "We investigate the design of a reputation system for decentralized unstructured P2P networks like Gnutella. Having reliable reputation information about peers can form the basis of an incentive system and can guide peers in their decision making (e.g., who to download a file from). The reputation system uses objective criteria to track each peer's contribution in the system and allows peers to store their reputations locally. Reputation are computed using either of the two schemes, debit-credit reputation computation (DCRC) and credit-only reputation computation (CORC). Using a reputation computation agent (RCA), we design a public key based mechanism that periodically updates the peer reputations in a secure, light-weight, and partially distributed manner. We evaluate using simulations the performance tradeoffs inherent in the design of our system.",
"title": ""
},
{
"docid": "328d2b9a5786729245f18195f36ca75c",
"text": "As CMOS technology is scaled down and adopted for many RF and millimeter-wave radio systems, design of T/R switches in CMOS has received considerable attention. Many T/R switches designed in 0.5 ¿m 65 nm CMOS processes have been reported. Table 4 summarizes these T/R switches. Some of them have become great candidates for WLAN and UWB radios. However, none of them met the requirements of mobile cellular and WPAN 60-GHz radios. CMOS device innovations and novel ideas such as artificial dielectric strips and bandgap structures may provide a comprehensive solution to the challenges of design of T/R switches for mobile cellular and 60-GHz radios.",
"title": ""
},
{
"docid": "896dc1862adba0ad504116ba5a0de0b9",
"text": "We present the SnapNet, a system that provides accurate real-time map matching for cellular-based trajectories. Such coarse-grained trajectories introduce new challenges to map matching including (1) input locations that are far from the actual road segment (errors in the orders of kilometers), (2) back-and-forth transitions, and (3) highly sparse input data. SnapNet addresses these challenges by applying extensive preprocessing steps to remove the noisy locations and to handle the data sparseness. At the core of SnapNet is a novel incremental HMM algorithm that combines digital map hints and a number of heuristics to reduce the noise and provide real-time estimation. Evaluation of SnapNet in different cities covering more than 100km distance shows that it can achieve more than 90% accuracy under noisy coarse-grained input location estimates. This maps to over 97% and 34% enhancement in precision and recall respectively when compared to traditional HMM map matching algorithms. Moreover, SnapNet has a low latency of 1.2ms per location estimate.",
"title": ""
},
{
"docid": "30938389f71443136d036a95e465f0ac",
"text": "With the development of autonomous driving, offline testing remains an important process allowing low-cost and efficient validation of vehicle performance and vehicle control algorithms in multiple virtual scenarios. This paper aims to propose a novel simulation platform with hardware in the loop (HIL). This platform comprises of four layers: the vehicle simulation layer, the virtual sensors layer, the virtual environment layer and the Electronic Control Unit (ECU) layer for hardware control. Our platform has attained multiple capabilities: (1) it enables the construction and simulation of kinematic car models, various sensors and virtual testing fields; (2) it performs a closed-loop evaluation of scene perception, path planning, decision-making and vehicle control algorithms, whilst also having multi-agent interaction system; (3) it further enables rapid migrations of control and decision-making algorithms from the virtual environment to real self-driving cars. In order to verify the effectiveness of our simulation platform, several experiments have been performed with self-defined car models in virtual scenarios of a public road and an open parking lot and the results are substantial.",
"title": ""
},
{
"docid": "1ff4d4588826459f1d8d200d658b9907",
"text": "BACKGROUND\nHealth promotion organizations are increasingly embracing social media technologies to engage end users in a more interactive way and to widely disseminate their messages with the aim of improving health outcomes. However, such technologies are still in their early stages of development and, thus, evidence of their efficacy is limited.\n\n\nOBJECTIVE\nThe study aimed to provide a current overview of the evidence surrounding consumer-use social media and mobile software apps for health promotion interventions, with a particular focus on the Australian context and on health promotion targeted toward an Indigenous audience. Specifically, our research questions were: (1) What is the peer-reviewed evidence of benefit for social media and mobile technologies used in health promotion, intervention, self-management, and health service delivery, with regard to smoking cessation, sexual health, and otitis media? and (2) What social media and mobile software have been used in Indigenous-focused health promotion interventions in Australia with respect to smoking cessation, sexual health, or otitis media, and what is the evidence of their effectiveness and benefit?\n\n\nMETHODS\nWe conducted a scoping study of peer-reviewed evidence for the effectiveness of social media and mobile technologies in health promotion (globally) with respect to smoking cessation, sexual health, and otitis media. A scoping review was also conducted for Australian uses of social media to reach Indigenous Australians and mobile apps produced by Australian health bodies, again with respect to these three areas.\n\n\nRESULTS\nThe review identified 17 intervention studies and seven systematic reviews that met inclusion criteria, which showed limited evidence of benefit from these interventions. We also found five Australian projects with significant social media health components targeting the Indigenous Australian population for health promotion purposes, and four mobile software apps that met inclusion criteria. No evidence of benefit was found for these projects.\n\n\nCONCLUSIONS\nAlthough social media technologies have the unique capacity to reach Indigenous Australians as well as other underserved populations because of their wide and instant disseminability, evidence of their capacity to do so is limited. Current interventions are neither evidence-based nor widely adopted. Health promotion organizations need to gain a more thorough understanding of their technologies, who engages with them, why they engage with them, and how, in order to be able to create successful social media projects.",
"title": ""
},
{
"docid": "40cf1e5ecb0e79f466c65f8eaff77cb2",
"text": "Spiral patterns on the surface of a sphere have been seen in laboratory experiments and in numerical simulations of reaction–diffusion equations and convection. We classify the possible symmetries of spirals on spheres, which are quite different from the planar case since spirals typically have tips at opposite points on the sphere. We concentrate on the case where the system has an additional sign-change symmetry, in which case the resulting spiral patterns do not rotate. Spiral patterns arise through a mode interaction between spherical harmonics degree l and l+1. Using the methods of equivariant bifurcation theory, possible symmetry types are determined for each l. For small values of l, the centre manifold equations are constructed and spiral solutions are found explicitly. Bifurcation diagrams are obtained showing how spiral states can appear at secondary bifurcations from primary solutions, or tertiary bifurcations. The results are consistent with numerical simulations of a model pattern-forming system.",
"title": ""
},
{
"docid": "59ba2709e4f3653dcbd3a4c0126ceae1",
"text": "Processing-in-memory (PIM) is a promising solution to address the \"memory wall\" challenges for future computer systems. Prior proposed PIM architectures put additional computation logic in or near memory. The emerging metal-oxide resistive random access memory (ReRAM) has showed its potential to be used for main memory. Moreover, with its crossbar array structure, ReRAM can perform matrix-vector multiplication efficiently, and has been widely studied to accelerate neural network (NN) applications. In this work, we propose a novel PIM architecture, called PRIME, to accelerate NN applications in ReRAM based main memory. In PRIME, a portion of ReRAM crossbar arrays can be configured as accelerators for NN applications or as normal memory for a larger memory space. We provide microarchitecture and circuit designs to enable the morphable functions with an insignificant area overhead. We also design a software/hardware interface for software developers to implement various NNs on PRIME. Benefiting from both the PIM architecture and the efficiency of using ReRAM for NN computation, PRIME distinguishes itself from prior work on NN acceleration, with significant performance improvement and energy saving. Our experimental results show that, compared with a state-of-the-art neural processing unit design, PRIME improves the performance by ~2360× and the energy consumption by ~895×, across the evaluated machine learning benchmarks.",
"title": ""
},
{
"docid": "3ef36b8675faf131da6cbc4d94f0067e",
"text": "The staggering amount of streaming time series coming from the real world calls for more efficient and effective online modeling solution. For time series modeling, most existing works make some unrealistic assumptions such as the input data is of fixed length or well aligned, which requires extra effort on segmentation or normalization of the raw streaming data. Although some literature claim their approaches to be invariant to data length and misalignment, they are too time-consuming to model a streaming time series in an online manner. We propose a novel and more practical online modeling and classification scheme, DDE-MGM, which does not make any assumptions on the time series while maintaining high efficiency and state-of-the-art performance. The derivative delay embedding (DDE) is developed to incrementally transform time series to the embedding space, where the intrinsic characteristics of data is preserved as recursive patterns regardless of the stream length and misalignment. Then, a non-parametric Markov geographic model (MGM) is proposed to both model and classify the pattern in an online manner. Experimental results demonstrate the effectiveness and superior classification accuracy of the proposed DDE-MGM in an online setting as compared to the state-of-the-art.",
"title": ""
},
{
"docid": "2b101f1d43f2e2c657b50054b7188e99",
"text": "Programs that use animations or visualizations attract student interest and offer feedback that can enhance different learning styles as students work to master programming and problem solving. In this paper we report on several CS 1 assignments we have used successfully at Duke University to introduce or reinforce control constructs, elementary data structures, and object-based programming. All the assignments involve either animations by which we mean graphical displays that evolve over time, or visualizations which include static display of graphical images. The animations do not require extensive programming by students since students use classes and code that we provide to hide much of the complexity that drives the animations. In addition to generating enthusiasm, we believe the animations assist with mastering the debugging process.",
"title": ""
},
{
"docid": "61c73842d25b54f24ff974b439d55c64",
"text": "Many electrical vehicles have been developed recently, and one of them is the vehicle type with the self-balancing capability. Portability also one of issue related to the development of electric vehicles. This paper presents one wheeled self-balancing electric vehicle namely PENS-Wheel. Since it only consists of one motor as its actuator, it becomes more portable than any other self-balancing vehicle types. This paper discusses on the implementation of Kalman filter for filtering the tilt sensor used by the self-balancing controller, mechanical design, and fabrication of the vehicle. The vehicle is designed based on the principle of the inverted pendulum by utilizing motor's torque on the wheel to maintain its upright position. The sensor system uses IMU which combine accelerometer and gyroscope data to get the accurate pitch angle of the vehicle. The paper presents the effects of Kalman filter parameters including noise variance of the accelerometer, noise variance of the gyroscope, and the measurement noise to the response of the sensor output. Finally, we present the result of the proposed filter and compare it with proprietary filter algorithm from InvenSense, Inc. running on Digital Motion Processor (DMP) inside the MPU6050 chip. The result of the filter algorithm implemented in the vehicle shows that it is capable in delivering comparable performance with the proprietary one.",
"title": ""
},
{
"docid": "3888dd754c9f7607d7a4cc2f4a436aac",
"text": "We propose a distributed algorithm to estimate the 3D trajectories of multiple cooperative robots from relative pose measurements. Our approach leverages recent results [1] which show that the maximum likelihood trajectory is well approximated by a sequence of two quadratic subproblems. The main contribution of the present work is to show that these subproblems can be solved in a distributed manner, using the distributed Gauss-Seidel (DGS) algorithm. Our approach has several advantages. It requires minimal information exchange, which is beneficial in presence of communication and privacy constraints. It has an anytime flavor: after few iterations the trajectory estimates are already accurate, and they asymptotically convergence to the centralized estimate. The DGS approach scales well to large teams, and it has a straightforward implementation. We test the approach in simulations and field tests, demonstrating its advantages over related techniques.",
"title": ""
},
{
"docid": "d9160f2cc337de729af34562d77a042e",
"text": "Ontologies proliferate with the progress of the Semantic Web. Ontology matching is an important way of establishing interoperability between (Semantic) Web applications that use different but related ontologies. Due to their sizes and monolithic nature, large ontologies regarding real world domains bring a new challenge to the state of the art ontology matching technology. In this paper, we propose a divide-and-conquer approach to matching large ontologies. We develop a structure-based partitioning algorithm, which partitions entities of each ontology into a set of small clusters and constructs blocks by assigning RDF Sentences to those clusters. Then, the blocks from different ontologies are matched based on precalculated anchors, and the block mappings holding high similarities are selected. Finally, two powerful matchers, V-DOC and GMO, are employed to discover alignments in the block mappings. Comprehensive evaluation on both synthetic and real world data sets demonstrates that our approach both solves the scalability problem and achieves good precision and recall with significant reduction of execution time. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "05a93bfe8e245edbe2438a0dc7025301",
"text": "Statistical machine translation (SMT) treats the translation of natural language as a machine learning problem. By examining many samples of human-produced translation, SMT algorithms automatically learn how to translate. SMT has made tremendous strides in less than two decades, and many popular techniques have only emerged within the last few years. This survey presents a tutorial overview of state-of-the-art SMT at the beginning of 2007. We begin with the context of the current research, and then move to a formal problem description and an overview of the four main subproblems: translational equivalence modeling, mathematical modeling, parameter estimation, and decoding. Along the way, we present a taxonomy of some different approaches within these areas. We conclude with an overview of evaluation and notes on future directions. This is a revised draft of a paper currently under review. The contents may change in later drafts. Please send any comments, questions, or corrections to alopez@cs.umd.edu. Feel free to cite as University of Maryland technical report UMIACS-TR-2006-47. The support of this research by the GALE program of the Defense Advanced Research Projects Agency, Contract No. HR0011-06-2-0001, ONR MURI Contract FCPO.810548265, and Department of Defense contract RD-02-5700 is acknowledged.",
"title": ""
}
] | scidocsrr |
e270440b45d2810de5d62df97acdea83 | Subjective and Objective Quality-of-Experience of Adaptive Video Streaming | [
{
"docid": "a4f3bb1e91fb996858ff438487476217",
"text": "Digital video data, stored in video databases and distributed through communication networks, is subject to various kinds of distortions during acquisition, compression, processing, transmission, and reproduction. For example, lossy video compression techniques, which are almost always used to reduce the bandwidth needed to store or transmit video data, may degrade the quality during the quantization process. For another instance, the digital video bitstreams delivered over error-prone channels, such as wireless channels, may be received imperfectly due to the impairment occurred during transmission. Package-switched communication networks, such as the Internet, can cause loss or severe delay of received data packages, depending on the network conditions and the quality of services. All these transmission errors may result in distortions in the received video data. It is therefore imperative for a video service system to be able to realize and quantify the video quality degradations that occur in the system, so that it can maintain, control and possibly enhance the quality of the video data. An effective image and video quality metric is crucial for this purpose.",
"title": ""
}
] | [
{
"docid": "9ce3f1a67d23425e3920670ac5a1f9b4",
"text": "We examine the limits of consistency in highly available and fault-tolerant distributed storage systems. We introduce a new property—convergence—to explore the these limits in a useful manner. Like consistency and availability, convergence formalizes a fundamental requirement of a storage system: writes by one correct node must eventually become observable to other connected correct nodes. Using convergence as our driving force, we make two additional contributions. First, we close the gap between what is known to be impossible (i.e. the consistency, availability, and partition-tolerance theorem) and known systems that are highly-available but that provide weaker consistency such as causal. Specifically, in an asynchronous system, we show that natural causal consistency, a strengthening of causal consistency that respects the real-time ordering of operations, provides a tight bound on consistency semantics that can be enforced without compromising availability and convergence. In an asynchronous system with Byzantine-failures, we show that it is impossible to implement many of the recently introduced forking-based consistency semantics without sacrificing either availability or convergence. Finally, we show that it is not necessary to compromise availability or convergence by showing that there exist practically useful semantics that are enforceable by available, convergent, and Byzantine-fault tolerant systems.",
"title": ""
},
{
"docid": "ad868d09ec203c2080e0f8458daccf91",
"text": "We present empirical measurements of the packet delivery performance of the latest sensor platforms: Micaz and Telos motes. In this article, we present observations that have implications to a set of common assumptions protocol designers make while designing sensornet protocols—specifically—the MAC and network layer protocols. We first distill these common assumptions in to a conceptual model and show how our observations support or dispute these assumptions. We also present case studies of protocols that do not make these assumptions. Understanding the implications of these observations to the conceptual model can improve future protocol designs.",
"title": ""
},
{
"docid": "f8330ca9f2f4c05c26d679906f65de04",
"text": "In recent years, VDSL2 standard has been gaining popularity as a high speed network access technology to deliver triple play services of video, voice and data. These services require strict quality-of-experience (QoE) and quality-of-services (QoS) on DSL systems operating in an impulse noise environment. The DSL systems, in-turn, are affected severely in the presence of impulse noise in the telephone line. Therefore to improve upon the requirements of IPTV under the impulse noise conditions the standard body has been evaluating various proposals to mitigate and reduce the error rates. This paper lists and qualitatively compares various initiatives that have been suggested in the VDSL2 standard body to improve the protection of VDSL2 services against impulse noise.",
"title": ""
},
{
"docid": "c6c4edf88c38275e82aa73a11ef3a006",
"text": "In this paper, we propose a new concept for understanding the role of algorithms in daily life: algorithmic authority. Algorithmic authority is the legitimate power of algorithms to direct human action and to impact which information is considered true. We use this concept to examine the culture of users of Bit coin, a crypto-currency and payment platform. Through Bit coin, we explore what it means to trust in algorithms. Our study utilizes interview and survey data. We found that Bit coin users prefer algorithmic authority to the authority of conventional institutions, which they see as untrustworthy. However, we argue that Bit coin users do not have blind faith in algorithms, rather, they acknowledge the need for mediating algorithmic authority with human judgment. We examine the tension between members of the Bit coin community who would prefer to integrate Bit coin with existing institutions and those who would prefer to resist integration.",
"title": ""
},
{
"docid": "72eceddfa08e73739022df7c0dc89a3a",
"text": "The empirical mode decomposition (EMD) proposed by Huang et al. in 1998 shows remarkably effective in analyzing nonlinear signals. It adaptively represents nonstationary signals as sums of zero-mean amplitude modulation-frequency modulation (AM-FM) components by iteratively conducting the sifting process. How to determine the boundary conditions of the cubic spline when constructing the envelopes of data is the critical issue of the sifting process. A simple bound hit process technique is presented in this paper which constructs two periodic series from the original data by even and odd extension and then builds the envelopes using cubic spline with periodic boundary condition. The EMD is conducted fluently without any assumptions of the processed data by this approach. An example is presented to pick out the weak modulation of internal waves from an Envisat ASAR image by EMD with the boundary process technique",
"title": ""
},
{
"docid": "535934dc80c666e0d10651f024560d12",
"text": "The following individuals read and discussed the thesis submitted by student Mindy Elizabeth Bennett, and they also evaluated her presentation and response to questions during the final oral examination. They found that the student passed the final oral examination, and that the thesis was satisfactory for a master's degree and ready for any final modifications that they explicitly required. iii ACKNOWLEDGEMENTS During my time of study at Boise State University, I have received an enormous amount of academic support and guidance from a number of different individuals. I would like to take this opportunity to thank everyone who has been instrumental in the completion of this degree. Without the continued support and guidance of these individuals, this accomplishment would not have been possible. I would also like to thank the following individuals for generously giving their time to provide me with the help and support needed to complete this study. Without them, the completion of this study would not have been possible. Breast hypertrophy is a common medical condition whose morbidity has increased over recent decades. Symptoms of breast hypertrophy often include musculoskeletal pain in the neck, back and shoulders, and numerous psychosocial health burdens. To date, reduction mammaplasty (RM) is the only treatment shown to significantly reduce the severity of the symptoms associated with breast hypertrophy. However, due to a lack of scientific evidence in the medical literature justifying the medical necessity of RM, insurance companies often deny requests for coverage of this procedure. Therefore, the purpose of this study is to investigate biomechanical differences in the upper body of women with larger breast sizes in order to provide scientific evidence of the musculoskeletal burdens of breast hypertrophy to the medical community Twenty-two female subjects (average age 25.90, ± 5.47 years) who had never undergone or been approved for breast augmentation surgery, were recruited to participate in this study. Kinematic data of the head, thorax, pelvis and scapula was collected during static trials and during each of four different tasks of daily living. Surface electromyography (sEMG) data from the Midcervical (C-4) Paraspinal, Upper Trapezius, Lower Trapezius, Serratus Anterior, and Erector Spinae muscles were recorded in the same activities. Maximum voluntary contractions (MVC) were used to normalize the sEMG data, and %MVC during each task in the protocol was analyzed. Kinematic data from the tasks of daily living were normalized to average static posture data for each subject. Subjects were …",
"title": ""
},
{
"docid": "76d22feb7da3dbc14688b0d999631169",
"text": "Guilt proneness is a personality trait indicative of a predisposition to experience negative feelings about personal wrongdoing, even when the wrongdoing is private. It is characterized by the anticipation of feeling bad about committing transgressions rather than by guilty feelings in a particular moment or generalized guilty feelings that occur without an eliciting event. Our research has revealed that guilt proneness is an important character trait because knowing a person’s level of guilt proneness helps us to predict the likelihood that they will behave unethically. For example, online studies of adults across the U.S. have shown that people who score high in guilt proneness (compared to low scorers) make fewer unethical business decisions, commit fewer delinquent behaviors, and behave more honestly when they make economic decisions. In the workplace, guilt-prone employees are less likely to engage in counterproductive behaviors that harm their organization.",
"title": ""
},
{
"docid": "61615f5aefb0aa6de2dd1ab207a966d5",
"text": "Wikipedia provides an enormous amount of background knowledge to reason about the semantic relatedness between two entities. We propose Wikipedia-based Distributional Semantics for Entity Relatedness (DiSER), which represents the semantics of an entity by its distribution in the high dimensional concept space derived from Wikipedia. DiSER measures the semantic relatedness between two entities by quantifying the distance between the corresponding high-dimensional vectors. DiSER builds the model by taking the annotated entities only, therefore it improves over existing approaches, which do not distinguish between an entity and its surface form. We evaluate the approach on a benchmark that contains the relative entity relatedness scores for 420 entity pairs. Our approach improves the accuracy by 12% on state of the art methods for computing entity relatedness. We also show an evaluation of DiSER in the Entity Disambiguation task on a dataset of 50 sentences with highly ambiguous entity mentions. It shows an improvement of 10% in precision over the best performing methods. In order to provide the resource that can be used to find out all the related entities for a given entity, a graph is constructed, where the nodes represent Wikipedia entities and the relatedness scores are reflected by the edges. Wikipedia contains more than 4.1 millions entities, which required efficient computation of the relatedness scores between the corresponding 17 trillions of entity-pairs.",
"title": ""
},
{
"docid": "b6dbccc6b04c282ca366eddea77d0107",
"text": "Current methods for annotating and interpreting human genetic variation tend to exploit a single information type (for example, conservation) and/or are restricted in scope (for example, to missense changes). Here we describe Combined Annotation–Dependent Depletion (CADD), a method for objectively integrating many diverse annotations into a single measure (C score) for each variant. We implement CADD as a support vector machine trained to differentiate 14.7 million high-frequency human-derived alleles from 14.7 million simulated variants. We precompute C scores for all 8.6 billion possible human single-nucleotide variants and enable scoring of short insertions-deletions. C scores correlate with allelic diversity, annotations of functionality, pathogenicity, disease severity, experimentally measured regulatory effects and complex trait associations, and they highly rank known pathogenic variants within individual genomes. The ability of CADD to prioritize functional, deleterious and pathogenic variants across many functional categories, effect sizes and genetic architectures is unmatched by any current single-annotation method.",
"title": ""
},
{
"docid": "424239765383edd8079d90f63b3fde1d",
"text": "The availability of huge amounts of medical data leads to the need for powerful data analysis tools to extract useful knowledge. Researchers have long been concerned with applying statistical and data mining tools to improve data analysis on large data sets. Disease diagnosis is one of the applications where data mining tools are proving successful results. Heart disease is the leading cause of death all over the world in the past ten years. Several researchers are using statistical and data mining tools to help health care professionals in the diagnosis of heart disease. Using single data mining technique in the diagnosis of heart disease has been comprehensively investigated showing acceptable levels of accuracy. Recently, researchers have been investigating the effect of hybridizing more than one technique showing enhanced results in the diagnosis of heart disease. However, using data mining techniques to identify a suitable treatment for heart disease patients has received less attention. This paper identifies gaps in the research on heart disease diagnosis and treatment and proposes a model to systematically close those gaps to discover if applying data mining techniques to heart disease treatment data can provide as reliable performance as that achieved in diagnosing heart disease.",
"title": ""
},
{
"docid": "bb49674d0a1f36e318d27525b693e51d",
"text": "prevent attackers from gaining control of the system using well established techniques such as; perimeter-based fire walls, redundancy and replications, and encryption. However, given sufficient time and resources, all these methods can be defeated. Moving Target Defense (MTD), is a defensive strategy that aims to reduce the need to continuously fight against attacks by disrupting attackers gain-loss balance. We present Mayflies, a bio-inspired generic MTD framework for distributed systems on virtualized cloud platforms. The framework enables systems designed to defend against attacks for their entire runtime to systems that avoid attacks in time intervals. We discuss the design, algorithms and the implementation of the framework prototype. We illustrate the prototype with a quorum-based Byzantime Fault Tolerant system and report the preliminary results.",
"title": ""
},
{
"docid": "b847446c0babb9e8ebb8e8d4c50a7023",
"text": "This paper introduces a general technique, called LABurst, for identifying key moments, or moments of high impact, in social media streams without the need for domain-specific information or seed keywords. We leverage machine learning to model temporal patterns around bursts in Twitter's unfiltered public sample stream and build a classifier to identify tokens experiencing these bursts. We show LABurst performs competitively with existing burst detection techniques while simultaneously providing insight into and detection of unanticipated moments. To demonstrate our approach's potential, we compare two baseline event-detection algorithms with our language-agnostic algorithm to detect key moments across three major sporting competitions: 2013 World Series, 2014 Super Bowl, and 2014 World Cup. Our results show LABurst outperforms a time series analysis baseline and is competitive with a domain-specific baseline even though we operate without any domain knowledge. We then go further by transferring LABurst's models learned in the sports domain to the task of identifying earthquakes in Japan and show our method detects large spikes in earthquake-related tokens within two minutes of the actual event.",
"title": ""
},
{
"docid": "d3214d24911a5e42855fd1a53516d30b",
"text": "This paper extends the face detection framework proposed by Viola and Jones 2001 to handle profile views and rotated faces. As in the work of Rowley et al 1998. and Schneiderman et al. 2000, we build different detectors for different views of the face. A decision tree is then trained to determine the viewpoint class (such as right profile or rotated 60 degrees) for a given window of the image being examined. This is similar to the approach of Rowley et al. 1998. The appropriate detector for that viewpoint can then be run instead of running all detectors on all windows. This technique yields good results and maintains the speed advantage of the Viola-Jones detector. Shown as a demo at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 18, 2003 This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c ©Mitsubishi Electric Research Laboratories, Inc., 2003 201 Broadway, Cambridge, Massachusetts 02139 Publication History:– 1. First printing, TR2003-96, July 2003 Fast Multi-view Face Detection Michael J. Jones Paul Viola mjones@merl.com viola@microsoft.com Mitsubishi Electric Research Laboratory Microsoft Research 201 Broadway One Microsoft Way Cambridge, MA 02139 Redmond, WA 98052",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "c726dc2218fa4d286aa10d827b427871",
"text": "Acquisition of the intestinal microbiota begins at birth, and a stable microbial community develops from a succession of key organisms. Disruption of the microbiota during maturation by low-dose antibiotic exposure can alter host metabolism and adiposity. We now show that low-dose penicillin (LDP), delivered from birth, induces metabolic alterations and affects ileal expression of genes involved in immunity. LDP that is limited to early life transiently perturbs the microbiota, which is sufficient to induce sustained effects on body composition, indicating that microbiota interactions in infancy may be critical determinants of long-term host metabolic effects. In addition, LDP enhances the effect of high-fat diet induced obesity. The growth promotion phenotype is transferrable to germ-free hosts by LDP-selected microbiota, showing that the altered microbiota, not antibiotics per se, play a causal role. These studies characterize important variables in early-life microbe-host metabolic interaction and identify several taxa consistently linked with metabolic alterations. PAPERCLIP:",
"title": ""
},
{
"docid": "a4b123705dda7ae3ac7e9e88a50bd64a",
"text": "We present a novel approach to video segmentation using multiple object proposals. The problem is formulated as a minimization of a novel energy function defined over a fully connected graph of object proposals. Our model combines appearance with long-range point tracks, which is key to ensure robustness with respect to fast motion and occlusions over longer video sequences. As opposed to previous approaches based on object proposals, we do not seek the best per-frame object hypotheses to perform the segmentation. Instead, we combine multiple, potentially imperfect proposals to improve overall segmentation accuracy and ensure robustness to outliers. Overall, the basic algorithm consists of three steps. First, we generate a very large number of object proposals for each video frame using existing techniques. Next, we perform an SVM-based pruning step to retain only high quality proposals with sufficiently discriminative power. Finally, we determine the fore-and background classification by solving for the maximum a posteriori of a fully connected conditional random field, defined using our novel energy function. Experimental results on a well established dataset demonstrate that our method compares favorably to several recent state-of-the-art approaches.",
"title": ""
},
{
"docid": "9b7ff8a7dec29de5334f3de8d1a70cc3",
"text": "The paper introduces a complete offline programming toolbox for remote laser welding (RLW) which provides a semi-automated method for computing close-to-optimal robot programs. A workflow is proposed for the complete planning process, and new models and algorithms are presented for solving the optimization problems related to each step of the workflow: the sequencing of the welding tasks, path planning, workpiece placement, calculation of inverse kinematics and the robot trajectory, as well as for generating the robot program code. The paper summarizes the results of an industrial case study on the assembly of a car door using RLW technology, which illustrates the feasibility and the efficiency of the proposed approach.",
"title": ""
},
{
"docid": "178ba744f5e9df6c5a7a704949ad8ac1",
"text": "This software paper describes ‘Stylometry with R’ (stylo), a flexible R package for the highlevel analysis of writing style in stylometry. Stylometry (computational stylistics) is concerned with the quantitative study of writing style, e.g. authorship verification, an application which has considerable potential in forensic contexts, as well as historical research. In this paper we introduce the possibilities of stylo for computational text analysis, via a number of dummy case studies from English and French literature. We demonstrate how the package is particularly useful in the exploratory statistical analysis of texts, e.g. with respect to authorial writing style. Because stylo provides an attractive graphical user interface for high-level exploratory analyses, it is especially suited for an audience of novices, without programming skills (e.g. from the Digital Humanities). More experienced users can benefit from our implementation of a series of standard pipelines for text processing, as well as a number of similarity metrics.",
"title": ""
},
{
"docid": "81bbacc372c1f67e218895bcb046651d",
"text": "Sensor-based activity recognition seeks the profound high-level knowledge about human activities from multitudes of low-level sensor readings. Conventional pattern recognition approaches have made tremendous progress in the past years. However, those methods often heavily rely on heuristic hand-crafted feature extraction, which could hinder their generalization performance. Additionally, existing methods are undermined for unsupervised and incremental learning tasks. Recently, the recent advancement of deep learning makes it possible to perform automatic high-level feature extraction thus achieves promising performance in many areas. Since then, deep learning based methods have been widely adopted for the sensor-based activity recognition tasks. This paper surveys the recent advance of deep learning based sensor-based activity recognition. We summarize existing literature from three aspects: sensor modality, deep model, and application. We also present detailed insights on existing work and propose grand challenges for future research.",
"title": ""
},
{
"docid": "e658507a3ed6c52d27c5db618f9fa8cb",
"text": "Accident prediction is one of the most critical aspects of road safety, whereby an accident can be predicted before it actually occurs and precautionary measures taken to avoid it. For this purpose, accident prediction models are popular in road safety analysis. Artificial intelligence (AI) is used in many real world applications, especially where outcomes and data are not same all the time and are influenced by occurrence of random changes. This paper presents a study on the existing approaches for the detection of unsafe driving patterns of a vehicle used to predict accidents. The literature covered in this paper is from the past 10 years, from 2004 to 2014. AI techniques are surveyed for the detection of unsafe driving style and crash prediction. A number of statistical methods which are used to predict the accidents by using different vehicle and driving features are also covered in this paper. The approaches studied in this paper are compared in terms of datasets and prediction performance. We also provide a list of datasets and simulators available for the scientific community to conduct research in the subject domain. The paper also identifies some of the critical open questions that need to be addressed for road safety using AI techniques.",
"title": ""
}
] | scidocsrr |
931d7404b9114918be2c0087b6cb38c0 | Reliable, Consistent, and Efficient Data Sync for Mobile Apps | [
{
"docid": "64a48cd3af7b029c331921618d05c9ad",
"text": "Cloud-based file synchronization services have become enormously popular in recent years, both for their ability to synchronize files across multiple clients and for the automatic cloud backups they provide. However, despite the excellent reliability that the cloud back-end provides, the loose coupling of these services and the local file system makes synchronized data more vulnerable than users might believe. Local corruption may be propagated to the cloud, polluting all copies on other devices, and a crash or untimely shutdown may lead to inconsistency between a local file and its cloud copy. Even without these failures, these services cannot provide causal consistency. To address these problems, we present ViewBox, an integrated synchronization service and local file system that provides freedom from data corruption and inconsistency. ViewBox detects these problems using ext4-cksum, a modified version of ext4, and recovers from them using a user-level daemon, cloud helper, to fetch correct data from the cloud. To provide a stable basis for recovery,ViewBox employs the view manager on top of ext4-cksum. The view manager creates and exposes views, consistent inmemory snapshots of the file system, which the synchronization client then uploads. Our experiments show that ViewBox detects and recovers from both corruption and inconsistency, while incurring minimal overhead.",
"title": ""
}
] | [
{
"docid": "98356590ae18e09c04be6386559f9946",
"text": "BACKGROUND AND PURPOSE\nInformation has been sparse on the comparison of pulse pressure (PP) and mean arterial pressure (MAP) in relation to ischemic stroke among patients with uncontrolled hypertension. The present study examined the relation among PP, MAP, and ischemic stroke in uncontrolled hypertensive subjects in China.\n\n\nMETHODS\nA total of 6104 uncontrolled hypertensive subjects aged > or = 35 years were screened with a stratified cluster multistage sampling scheme in Fuxin county of Liaoning province of China, of which 317 had ischemic stroke.\n\n\nRESULTS\nAfter multivariable adjustment for age, gender, and other confounders, individuals with the highest quartile of PP and MAP had ORs for ischemic stroke of 1.479 (95% CI: 1.027 to 2.130) and 2.000 (95% CI: 1.373 to 2.914) with the lowest quartile as the reference. Adjusted ORs for ischemic stroke were 1.306 for MAP and 1.118 for PP with an increment of 1 SD, respectively. Ischemic stroke prediction of PP was annihilated when PP and MAP were entered in a single model. In patients aged < 65 years, on a continuous scale using receive operating characteristics curve, ischemic stroke was predicted by PP (P=0.001) and MAP (P<0.001). The area under the curve of PP (0.570, 95% CI: 0.531 to 0.609) differed from the area under the curve of MAP (0.633, 95% CI: 0.597 to 0.669; P<0.05). Among patients aged > or = 65 years, presence of ischemic stroke was only predicted by MAP.\n\n\nCONCLUSIONS\nPP and MAP were both associated with ischemic stroke. Ischemic stroke prediction of PP depended on MAP. On a continuous scale, MAP better predicted ischemic stroke than PP did in diagnostic accuracy.",
"title": ""
},
{
"docid": "543348825e8157926761b2f6a7981de2",
"text": "With the aim of developing a fast yet accurate algorithm for compressive sensing (CS) reconstruction of natural images, we combine in this paper the merits of two existing categories of CS methods: the structure insights of traditional optimization-based methods and the speed of recent network-based ones. Specifically, we propose a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general $$ norm CS reconstruction model. To cast ISTA into deep network form, we develop an effective strategy to solve the proximal mapping associated with the sparsity-inducing regularizer using nonlinear transforms. All the parameters in ISTA-Net (e.g. nonlinear transforms, shrinkage thresholds, step sizes, etc.) are learned end-to-end, rather than being hand-crafted. Moreover, considering that the residuals of natural images are more compressible, an enhanced version of ISTA-Net in the residual domain, dubbed ISTA-Net+, is derived to further improve CS reconstruction. Extensive CS experiments demonstrate that the proposed ISTA-Nets outperform existing state-of-the-art optimization-based and network-based CS methods by large margins, while maintaining fast computational speed. Our source codes are available: http://jianzhang.tech/projects/ISTA-Net.",
"title": ""
},
{
"docid": "c976fcbe0c095a4b7cfd6e3968964c55",
"text": "The introduction of Network Functions Virtualization (NFV) enables service providers to offer software-defined network functions with elasticity and flexibility. Its core technique, dynamic allocation procedure of NFV components onto cloud resources requires rapid response to changes on-demand to remain cost and QoS effective. In this paper, Markov Decision Process (MDP) is applied to the NP-hard problem to dynamically allocate cloud resources for NFV components. In addition, Bayesian learning method is applied to monitor the historical resource usage in order to predict future resource reliability. Experimental results show that our proposed strategy outperforms related approaches.",
"title": ""
},
{
"docid": "8f21eee8a4320baebe0fe40364f6580e",
"text": "The dup system related subjects others recvfrom and user access methods. The minimal facilities they make up. A product before tackling 'the design, decisions they probably should definitely. Multiplexer'' interprocess communication in earlier addison wesley has the important features a tutorial. Since some operating system unstructured devices a process init see. At berkeley software in earlier authoritative technical information on write operations. The lowest unused multiprocessor support for, use this determination. No name dot spelled with the system. Later it a file several, reasons often single user interfacesis excluded except.",
"title": ""
},
{
"docid": "c86b44aef6e23d4a61e6a062a7a50883",
"text": "In this paper we investigate the applications of Elo ratings (originally designed for 2-player chess) to a heterogeneous nonlinear multiagent system to determine an agent’s overall impact on its team’s performance. Measuring this impact has been attempted in many different ways, including reward shaping; the generation of heirarchies, holarchies, and teams; mechanism design; and the creation of subgoals. We show that in a multiagent system, an Elo rating will accurately reflect the an agent’s ability to contribute positively to a team’s success with no need for any other feedback than a repeated binary win/loss signal. The Elo rating not only measures “personal” success, but simultaneously success in assisting other agents to perform favorably.",
"title": ""
},
{
"docid": "7e68ac0eee3ab3610b7c68b69c27f3b6",
"text": "When digitizing a document into an image, it is common to include a surrounding border region to visually indicate that the entire document is present in the image. However, this border should be removed prior to automated processing. In this work, we present a deep learning system, PageNet, which identifies the main page region in an image in order to segment content from both textual and non-textual border noise. In PageNet, a Fully Convolutional Network obtains a pixel-wise segmentation which is post-processed into a quadrilateral region. We evaluate PageNet on 4 collections of historical handwritten documents and obtain over 94% mean intersection over union on all datasets and approach human performance on 2 collections. Additionally, we show that PageNet can segment documents that are overlayed on top of other documents.",
"title": ""
},
{
"docid": "1b69388c83a0883b3eeddc47ce44b82a",
"text": "1 Lawrence E. Whitman, Wichita State University, Industrial & Manufacturing Engineering Department, 120G Engineering Building, Wichita, KS 672600035 larry.whitman@wichita.edu 2 Tonya A. Witherspoon, Wichita State University, College of Education, 156C Corbin Education Center, Wichita, KS 67260-0131 tonya.witherspoon@wichita.edu Abstract Wichita State University is actively using LEGOs to encourage science math engineering and technology (SMET). There are two major thrusts in our efforts. The college of engineering uses LEGO blocks to simulate a factory environment in the building of LEGO airplanes. This participative demonstration has been used at middle school, high school, and college classes. LEGOs are used to present four manufacturing scenarios of traditional, cellular, pull, and single piece flow manufacturing. The demonstration presents to students how the design of a factory has significant impact on the success of the company. It also encourages students to pursue engineering careers. The college of education uses robotics as a vehicle to integrate technology and engineering into math and science preservice and inservice teacher education.. The purpose is to develop technologically astute and competent teachers who are capable of integrating technology into their curriculum to improve the teaching and learning of their students. This paper will discuss each effort, the collaboration between the two, and provide examples of success.",
"title": ""
},
{
"docid": "22241857a42ffcad817356900f52df66",
"text": "Most of the intensive care units (ICU) are equipped with commercial pulse oximeters for monitoring arterial blood oxygen saturation (SpO2) and pulse rate (PR). Photoplethysmographic (PPG) data recorded from pulse oximeters usually corrupted by motion artifacts (MA), resulting in unreliable and inaccurate estimated measures of SpO2. In this paper, a simple and efficient MA reduction method based on Ensemble Empirical Mode Decomposition (E2MD) is proposed for the estimation of SpO2 from processed PPGs. Performance analysis of the proposed E2MD is evaluated by computing the statistical and quality measures indicating the signal reconstruction like SNR and NRMSE. Intentionally created MAs (Horizontal MA, Vertical MA and Bending MA) in the recorded PPGs are effectively reduced by the proposed one and proved to be the best suitable method for reliable and accurate SpO2 estimation from the processed PPGs.",
"title": ""
},
{
"docid": "a286f9f594ef563ba082fb454eddc8bc",
"text": "The visual inspection of Mura defects is still a challenging task in the quality control of panel displays because of the intrinsically nonuniform brightness and blurry contours of these defects. The current methods cannot detect all Mura defect types simultaneously, especially small defects. In this paper, we introduce an accurate Mura defect visual inspection (AMVI) method for the fast simultaneous inspection of various Mura defect types. The method consists of two parts: an outlier-prejudging-based image background construction (OPBC) algorithm is proposed to quickly reduce the influence of image backgrounds with uneven brightness and to coarsely estimate the candidate regions of Mura defects. Then, a novel region-gradient-based level set (RGLS) algorithm is applied only to these candidate regions to quickly and accurately segment the contours of the Mura defects. To demonstrate the performance of AMVI, several experiments are conducted to compare AMVI with other popular visual inspection methods are conducted. The experimental results show that AMVI tends to achieve better inspection performance and can quickly and accurately inspect a greater number of Mura defect types, especially for small and large Mura defects with uneven backlight. Note to Practitioners—The traditional Mura visual inspection method can address only medium-sized Mura defects, such as region Mura, cluster Mura, and vertical-band Mura, and is not suitable for small Mura defects, for example, spot Mura. The proposed accurate Mura defect visual inspection (AMVI) method can accurately and simultaneously inspect not only medium-sized Mura defects but also small and large Mura defects. The proposed outlier-prejudging-based image background construction (OPBC) algorithm of the AMVI method is employed to improve the Mura true detection rate, while the proposed region-gradient-based level set (RGLS) algorithm is used to reduce the Mura false detection rate. Moreover, this method can be applied to online vision inspection: OPBC can be implemented in parallel processing units, while RGLS is applied only to the candidate regions of the inspected image. In addition, AMVI can be extended to other low-contrast defect vision inspection tasks, such as the inspection of glass, steel strips, and ceramic tiles.",
"title": ""
},
{
"docid": "4703b02dc285a55002f15d06d98251e7",
"text": "Nowadays, most Photovoltaic installations are grid connected system. From distribution system point of view, the main point and concern related to PV grid-connected are overvoltage or overcurrent in the distribution network. This paper describes the simulation study which focuses on ferroresonance phenomenon of PV system on lower side of distribution transformer. PSCAD program is selected to simulate the ferroresonance phenomenon in this study. The example of process that creates ferroresonance by the part of PV system and ferroresonance effect will be fully described in detail.",
"title": ""
},
{
"docid": "37ef43a6ed0dcf0817510b84224d9941",
"text": "Contrast enhancement is one of the most important issues of image processing, pattern recognition and computer vision. The commonly used techniques for contrast enhancement fall into two categories: (1) indirect methods of contrast enhancement and (2) direct methods of contrast enhancement. Indirect approaches mainly modify histogram by assigning new values to the original intensity levels. Histogram speci\"cation and histogram equalization are two popular indirect contrast enhancement methods. However, histogram modi\"cation technique only stretches the global distribution of the intensity. The basic idea of direct contrast enhancement methods is to establish a criterion of contrast measurement and to enhance the image by improving the contrast measure. The contrast can be measured globally and locally. It is more reasonable to de\"ne a local contrast when an image contains textual information. Fuzzy logic has been found many applications in image processing, pattern recognition, etc. Fuzzy set theory is a useful tool for handling the uncertainty in the images associated with vagueness and/or imprecision. In this paper, we propose a novel adaptive direct fuzzy contrast enhancement method based on the fuzzy entropy principle and fuzzy set theory. We have conducted experiments on many images. The experimental results demonstrate that the proposed algorithm is very e!ective in contrast enhancement as well as in preventing over-enhancement. ( 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8385f72bd060eee8c59178bc0b74d1e3",
"text": "Gesture recognition plays an important role in human-computer interaction. However, most existing methods are complex and time-consuming, which limit the use of gesture recognition in real-time environments. In this paper, we propose a static gesture recognition system that combines depth information and skeleton data to classify gestures. Through feature fusion, hand digit gestures of 0-9 can be recognized accurately and efficiently. According to the experimental results, the proposed gesture recognition system is effective and robust, which is invariant to complex background, illumination changes, reversal, structural distortion, rotation etc. We have tested the system both online and offline which proved that our system is satisfactory to real-time requirements, and therefore it can be applied to gesture recognition in real-world human-computer interaction systems.",
"title": ""
},
{
"docid": "cc56706151e027c89eea5639486d4cd3",
"text": "To refine user interest profiling, this paper focuses on extending scientific subject ontology via keyword clustering and on improving the accuracy and effectiveness of recommendation of the electronic academic publications in online services. A clustering approach is proposed for domain keywords for the purpose of the subject ontology extension. Based on the keyword clusters, the construction of user interest profiles is presented on a rather fine granularity level. In the construction of user interest profiles, we apply two types of interest profiles: explicit profiles and implicit profiles. The explicit eighted keyword graph",
"title": ""
},
{
"docid": "7c6708511e8a19c7a984ccc4b5c5926e",
"text": "INTRODUCTION\nOtoplasty or correction of prominent ears, is one of most commonly performed surgeries in plastic surgery both in children and adults. Until nowadays, there have been more than 150 techniques described, but all with certain percentage of recurrence which varies from just a few up to 24.4%.\n\n\nOBJECTIVE\nThe authors present an otoplasty technique, a combination of Mustardé's original procedure with other techniques, which they have been using successfully in their everyday surgical practice for the last 9 years. The technique is based on posterior antihelical and conchal approach.\n\n\nMETHODS\nThe study included 102 patients (60 males and 42 females) operated on between 1999 and 2008. The age varied between 6 and 49 years. Each procedure was tailored to the aberrant anatomy which was analysed after examination. Indications and the operative procedure are described in step-by-step detail accompanied by drawings and photos taken during the surgery.\n\n\nRESULTS\nAll patients had bilateral ear deformity. In all cases was performed a posterior antihelical approach. The conchal reduction was done only when necessary and also through the same incision. The follow-up was from 1 to 5 years. There were no recurrent cases. A few minor complications were presented. Postoperative care, complications and advantages compared to other techniques are discussed extensively.\n\n\nCONCLUSION\nAll patients showed a high satisfaction rate with the final result and there was no necessity for further surgeries. The technique described in this paper is easy to reproduce even for young surgeons.",
"title": ""
},
{
"docid": "af691c2ca5d9fd1ca5109c8b2e7e7b6d",
"text": "As social robots become more widely used as educational tutoring agents, it is important to study how children interact with these systems, and how effective they are as assessed by learning gains, sustained engagement, and perceptions of the robot tutoring system as a whole. In this paper, we summarize our prior work involving a long-term child-robot interaction study and outline important lessons learned regarding individual differences in children. We then discuss how these lessons inform future research in child-robot interaction.",
"title": ""
},
{
"docid": "e6e7ee19b958b40abeed760be50f2583",
"text": "All distributed-generation units need to be equipped with an anti-islanding protection (AIP) scheme in order to avoid unintentional islanding. Unfortunately, most AIP methods fail to detect islanding if the demand in the islanded circuit matches the production in the island. Another concern is that many active AIP schemes cause power-quality problems. This paper proposes an AIP method which is based on the combination of a reactive power versus frequency droop and rate of change of frequency (ROCOF). The method is designed so that the injection of reactive power is of minor scale during normal operating conditions. Yet, the method can rapidly detect islanding which is verified by PSCAD/EMTDC simulations.",
"title": ""
},
{
"docid": "b6de6f391c11178843bc16b51bf26803",
"text": "Crowd analysis becomes very popular research topic in the area of computer vision. A growing requirement for smarter video surveillance of private and public space using intelligent vision systems which can differentiate what is semantically important in the direction of the human observer as normal behaviors and abnormal behaviors. People counting, people tracking and crowd behavior analysis are different stages for computer based crowd analysis algorithm. This paper focus on crowd behavior analysis which can detect normal behavior or abnormal behavior.",
"title": ""
},
{
"docid": "a56a3592d704c917d5e8452eabb74cb0",
"text": "Current text-to-speech synthesis (TTS) systems are often perceived as lacking expressiveness, limiting the ability to fully convey information. This paper describes initial investigations into improving expressiveness for statistical speech synthesis systems. Rather than using hand-crafted definitions of expressive classes, an unsupervised clustering approach is described which is scalable to large quantities of training data. To incorporate this “expression cluster” information into an HMM-TTS system two approaches are described: cluster questions in the decision tree construction; and average expression speech synthesis (AESS) using cluster-based linear transform adaptation. The performance of the approaches was evaluated on audiobook data in which the reader exhibits a wide range of expressiveness. A subjective listening test showed that synthesising with AESS results in speech that better reflects the expressiveness of human speech than a baseline expression-independent system.",
"title": ""
},
{
"docid": "1d7bbd7aaa65f13dd72ffeecc8499cb6",
"text": "Due to the 60Hz or higher LCD refresh operations, display controller (DC) reads the pixels out from frame buffer at fixed rate. Accessing frame buffer consumes not only memory bandwidth, but power as well. Thus frame buffer compression (FBC) can contribute to alleviating both bandwidth and power consumption. A conceptual frame buffer compression model is proposed, and to the best of our knowledge, an arithmetic expression concerning the compression ratio and the read/update ratio of frame buffer is firstly presented, which reveals the correlation between frame buffer compression and target applications. Moreover, considering the linear access feature of frame buffer, we investigate a frame buffer compression without color information loss, named LFBC (Loss less Frame-Buffer Compression). LFBC defines new frame buffer compression data format, and employs run-length encoding (RLE) to implement the compression. For the applications suitable for frame buffer compression, LFBC reduces 50%90% bandwidth consumption and memory accesses caused by LCD refresh operations.",
"title": ""
},
{
"docid": "5f4d10a1a180f6af3d35ca117cd4ee19",
"text": "This work addresses the task of instance-aware semantic segmentation. Our key motivation is to design a simple method with a new modelling-paradigm, which therefore has a different trade-off between advantages and disadvantages compared to known approaches. Our approach, we term InstanceCut, represents the problem by two output modalities: (i) an instance-agnostic semantic segmentation and (ii) all instance-boundaries. The former is computed from a standard convolutional neural network for semantic segmentation, and the latter is derived from a new instance-aware edge detection model. To reason globally about the optimal partitioning of an image into instances, we combine these two modalities into a novel MultiCut formulation. We evaluate our approach on the challenging CityScapes dataset. Despite the conceptual simplicity of our approach, we achieve the best result among all published methods, and perform particularly well for rare object classes.",
"title": ""
}
] | scidocsrr |
dcb9d91a28cd9d6a48e0e66d2a8bfe72 | LEARNING DEEP MODELS: CRITICAL POINTS AND LOCAL OPENNESS | [
{
"docid": "174cc0eae96aeb79841b1acfb4813dd4",
"text": "In this paper, we study the problem of learning a shallow artificial neural network that best fits a training data set. We study this problem in the over-parameterized regime where the numbers of observations are fewer than the number of parameters in the model. We show that with the quadratic activations, the optimization landscape of training, such shallow neural networks, has certain favorable characteristics that allow globally optimal models to be found efficiently using a variety of local search heuristics. This result holds for an arbitrary training data of input/output pairs. For differentiable activation functions, we also show that gradient descent, when suitably initialized, converges at a linear rate to a globally optimal model. This result focuses on a realizable model where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to planted weight coefficients.",
"title": ""
},
{
"docid": "ad7862047259112ac01bfa68950cf95b",
"text": "In deep learning, depth, as well as nonlinearity, create non-convex loss surfaces. Then, does depth alone create bad local minima? In this paper, we prove that without nonlinearity, depth alone does not create bad local minima, although it induces non-convex loss surface. Using this insight, we greatly simplify a recently proposed proof to show that all of the local minima of feedforward deep linear neural networks are global minima. Our theoretical results generalize previous results with fewer assumptions, and this analysis provides a method to show similar results beyond square loss in deep linear models.",
"title": ""
}
] | [
{
"docid": "e839c6a8c5efcd50f96521238c96a5d3",
"text": "To improve the accuracy of lane detection in complex scenarios, an adaptive lane feature learning algorithm which can automatically learn the features of a lane in various scenarios is proposed. First, a two-stage learning network based on the YOLO v3 (You Only Look Once, v3) is constructed. The structural parameters of the YOLO v3 algorithm are modified to make it more suitable for lane detection. To improve the training efficiency, a method for automatic generation of the lane label images in a simple scenario, which provides label data for the training of the first-stage network, is proposed. Then, an adaptive edge detection algorithm based on the Canny operator is used to relocate the lane detected by the first-stage model. Furthermore, the unrecognized lanes are shielded to avoid interference in subsequent model training. Then, the images processed by the above method are used as label data for the training of the second-stage model. The experiment was carried out on the KITTI and Caltech datasets, and the results showed that the accuracy and speed of the second-stage model reached a high level.",
"title": ""
},
{
"docid": "8108c37cc3f3160c78252fcfbeb8d2f2",
"text": "It is well understood that the pancreas has two distinct roles: the endocrine and exocrine functions, that are functionally and anatomically closely related. As specialists in diabetes care, we are adept at managing pancreatic endocrine failure and its associated complications. However, there is frequent overlap and many patients with diabetes also suffer from exocrine insufficiency. Here we outline the different causes of exocrine failure, and in particular that associated with type 1 and type 2 diabetes and how this differs from diabetes that is caused by pancreatic exocrine disease: type 3c diabetes. Copyright © 2017 John Wiley & Sons. Practical Diabetes 2017; 34(6): 200–204",
"title": ""
},
{
"docid": "64e57a5382411ade7c0ad4ef7f094aa9",
"text": "In this paper we present the techniques used for the University of Montréal's team submissions to the 2013 Emotion Recognition in the Wild Challenge. The challenge is to classify the emotions expressed by the primary human subject in short video clips extracted from feature length movies. This involves the analysis of video clips of acted scenes lasting approximately one-two seconds, including the audio track which may contain human voices as well as background music. Our approach combines multiple deep neural networks for different data modalities, including: (1) a deep convolutional neural network for the analysis of facial expressions within video frames; (2) a deep belief net to capture audio information; (3) a deep autoencoder to model the spatio-temporal information produced by the human actions depicted within the entire scene; and (4) a shallow network architecture focused on extracted features of the mouth of the primary human subject in the scene. We discuss each of these techniques, their performance characteristics and different strategies to aggregate their predictions. Our best single model was a convolutional neural network trained to predict emotions from static frames using two large data sets, the Toronto Face Database and our own set of faces images harvested from Google image search, followed by a per frame aggregation strategy that used the challenge training data. This yielded a test set accuracy of 35.58%. Using our best strategy for aggregating our top performing models into a single predictor we were able to produce an accuracy of 41.03% on the challenge test set. These compare favorably to the challenge baseline test set accuracy of 27.56%.",
"title": ""
},
{
"docid": "c0c7752c6b9416e281c3649e70f9daae",
"text": "Although the study of clustering is centered around an intuitively compelling goal, it has been very difficult to develop a unified framework for reasoning about it at a technical level, and profoundly diverse approaches to clustering abound in the research community. Here we suggest a formal perspective on the difficulty in finding such a unification, in the form of an impossibility theorem: for a set of three simple properties, we show that there is no clustering function satisfying all three. Relaxations of these properties expose some of the interesting (and unavoidable) trade-offs at work in well-studied clustering techniques such as single-linkage, sum-of-pairs, k-means, and k-median.",
"title": ""
},
{
"docid": "b732824ec9677b639e34de68818aae50",
"text": "Although there is wide agreement that backfilling produces significant benefits in scheduling of parallel jobs, there is no clear consensus on which backfilling strategy is preferable e.g. should conservative backfilling be used or the more aggressive EASY backfilling scheme; should a First-Come First-Served(FCFS) queue-priority policy be used, or some other such as Shortest job First(SF) or eXpansion Factor(XF); In this paper, we use trace-based simulation to address these questions and glean new insights into the characteristics of backfilling strategies for job scheduling. We show that by viewing performance in terms of slowdowns and turnaround times of jobs within various categories based on their width (processor request size), length (job duration) and accuracy of the user’s estimate of run time, some consistent trends may be observed.",
"title": ""
},
{
"docid": "d08c24228e43089824357342e0fa0843",
"text": "This paper presents a new register assignment heuristic for procedures in SSA Form, whose interference graphs are chordal; the heuristic is called optimistic chordal coloring (OCC). Previous register assignment heuristics eliminate copy instructions via coalescing, in other words, merging nodes in the interference graph. Node merging, however, can not preserve the chordal graph property, making it unappealing for SSA-based register allocation. OCC is based on graph coloring, but does not employ coalescing, and, consequently, preserves graph chordality, and does not increase its chromatic number; in this sense, OCC is conservative as well as optimistic. OCC is observed to eliminate at least as many dynamically executed copy instructions as iterated register coalescing (IRC) for a set of chordal interference graphs generated from several Mediabench and MiBench applications. In many cases, OCC and IRC were able to find optimal or near-optimal solutions for these graphs. OCC ran 1.89x faster than IRC, on average.",
"title": ""
},
{
"docid": "f3ca98a8e0600f0c80ef539cfc58e77e",
"text": "In this paper, we address a real life waste collection vehicle routing problem with time windows (VRPTW) with consideration of multiple disposal trips and drivers’ lunch breaks. Solomon’s well-known insertion algorithm is extended for the problem. While minimizing the number of vehicles and total traveling time is the major objective of vehicle routing problems in the literature, here we also consider the route compactness and workload balancing of a solution since they are very important aspects in practical applications. In order to improve the route compactness and workload balancing, a capacitated clustering-based waste collection VRPTW algorithm is developed. The proposed algorithms have been successfully implemented and deployed for the real life waste collection problems at Waste Management, Inc. A set of waste collection VRPTW benchmark problems is also presented in this paper. Waste collection problems are frequently considered as arc routing problems without time windows. However, that point of view can be applied only to residential waste collection problems. In the waste collection industry, there are three major areas: commercial waste collection, residential waste collection and roll-on-roll-off. In this paper, we mainly focus on the commercial waste collection problem. The problem can be characterized as a variant of VRPTW since commercial waste collection stops may have time windows. The major variation from a standard VRPTW is due to disposal operations and driver’s lunch break. When a vehicle is full, it needs to go to one of the disposal facilities (landfill or transfer station). Each vehicle can, and typically does, make multiple disposal trips per day. The purpose of this paper is to introduce the waste collection VRPTW, benchmark problem sets, and a solution approach for the problem. The proposed algorithms have been successfully implemented and deployed for the real life waste collection problems of Waste Management, the leading provider of comprehensive waste management services in North America with nearly 26,000 collection and transfer vehicles. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "169db6ecec2243e3566079cd473c7afe",
"text": "Aspect-level sentiment classification is a finegrained task in sentiment analysis. Since it provides more complete and in-depth results, aspect-level sentiment analysis has received much attention these years. In this paper, we reveal that the sentiment polarity of a sentence is not only determined by the content but is also highly related to the concerned aspect. For instance, “The appetizers are ok, but the service is slow.”, for aspect taste, the polarity is positive while for service, the polarity is negative. Therefore, it is worthwhile to explore the connection between an aspect and the content of a sentence. To this end, we propose an Attention-based Long Short-Term Memory Network for aspect-level sentiment classification. The attention mechanism can concentrate on different parts of a sentence when different aspects are taken as input. We experiment on the SemEval 2014 dataset and results show that our model achieves state-ofthe-art performance on aspect-level sentiment classification.",
"title": ""
},
{
"docid": "d0778852e57dddf8a454dd609908ff87",
"text": "Abstract: Trivariate barycentric coordinates can be used both to express a point inside a tetrahedron as a convex combination of the four vertices and to linearly interpolate data given at the vertices. In this paper we generalize these coordinates to convex polyhedra and the kernels of star-shaped polyhedra. These coordinates generalize in a natural way a recently constructed set of coordinates for planar polygons, called mean value coordinates.",
"title": ""
},
{
"docid": "d3d57d67d4384f916f9e9e48f3fcdcdb",
"text": "Web-based social networks have become popular as a medium for disseminating information and connecting like-minded people. The public accessibility of such networks with the ability to share opinions, thoughts, information, and experience offers great promise to enterprises and governments. In addition to individuals using such networks to connect to their friends and families, governments and enterprises have started exploiting these platforms for delivering their services to citizens and customers. However, the success of such attempts relies on the level of trust that members have with each other as well as with the service provider. Therefore, trust becomes an essential and important element of a successful social network. In this article, we present the first comprehensive review of social and computer science literature on trust in social networks. We first review the existing definitions of trust and define social trust in the context of social networks. We then discuss recent works addressing three aspects of social trust: trust information collection, trust evaluation, and trust dissemination. Finally, we compare and contrast the literature and identify areas for further research in social trust.",
"title": ""
},
{
"docid": "46d46b2043019ad33e392d2d0a4b4d0d",
"text": "Ambient assisted living (AAL) is focused on providing assistance to people primarily in their natural environment. Over the past decade, the AAL domain has evolved at a fast pace in various directions. The stakeholders of AAL are not only limited to patients, but also include their relatives, social services, health workers, and care agencies. In fact, AAL aims at increasing the life quality of patients, their relatives and the health care providers with a holistic approach. This paper aims at providing a comprehensive overview of the AAL domain, presenting a systematic analysis of over 10 years of relevant literature focusing on the stakeholders’ needs, bridging the gap of existing reviews which focused on technologies. The findings of this review clearly show that until now the AAL domain neglects the view of the entire AAL ecosystem. Furthermore, the proposed solutions seem to be tailored more on the basis of the available existing technologies, rather than supporting the various stakeholders’ needs. Another major lack that this review is pointing out is a missing adequate evaluation of the various solutions. Finally, it seems that, as the domain of AAL is pretty new, it is still in its incubation phase. Thus, this review calls for moving the AAL domain to a more mature phase with respect to the research approaches.",
"title": ""
},
{
"docid": "18aa08888e4b2b412f154e47891b034d",
"text": "Roughly 1.3 billion people in developing countries still live without access to reliable electricity. As expanding access using current technologies will accelerate global climate change, there is a strong need for novel solutions that displace fossil fuels and are financially viable for developing regions. A novel DC microgrid solution that is geared at maximizing efficiency and reducing system installation cost is described in this paper. Relevant simulation and experimental results, as well as a proposal for undertaking field-testing of the technical and economic viability of the microgrid system are presented.",
"title": ""
},
{
"docid": "5bd68ea9ec37f954b2544e65cfff5626",
"text": "To improve ATMs’ cash demand forecasts, this paper advocates the prediction of cash demand for groups of ATMs with similar day-of-the week cash demand patterns. We first clustered ATM centers into ATM clusters having similar day-of-the week withdrawal patterns. To retrieve “day-of-the-week” withdrawal seasonality parameters (effect of a Monday, etc) we built a time series model for each ATMs. For clustering, the succession of 7 continuous daily withdrawal seasonality parameters of ATMs is discretized. Next, the similarity between the different ATMs’ discretized daily withdrawal seasonality sequence is measured by the Sequence Alignment Method (SAM). For each cluster of ATMs, four neural networks viz., general regression neural network (GRNN), multi layer feed forward neural network (MLFF), group method of data handling (GMDH) and wavelet neural network (WNN) are built to predict an ATM center’s cash demand. The proposed methodology is applied on the NN5 competition dataset. We observed that GRNN yielded the best result of 18.44% symmetric mean absolute percentage error (SMAPE), which is better than the result of Andrawis et al. (2011). This is due to clustering followed by a forecasting phase. Further, the proposed approach yielded much smaller SMAPE values than the approach of direct prediction on the entire sample without clustering. From a managerial perspective, the clusterwise cash demand forecast helps the bank’s top management to design similar cash replenishment plans for all the ATMs in the same cluster. This cluster-level replenishment plans could result in saving huge operational costs for ATMs operating in a similar geographical region.",
"title": ""
},
{
"docid": "335fbbf27b34e3937c2f6772b3227d51",
"text": "WordNet has facilitated important research in natural language processing but its usefulness is somewhat limited by its relatively small lexical coverage. The Paraphrase Database (PPDB) covers 650 times more words, but lacks the semantic structure of WordNet that would make it more directly useful for downstream tasks. We present a method for mapping words from PPDB to WordNet synsets with 89% accuracy. The mapping also lays important groundwork for incorporating WordNet’s relations into PPDB so as to increase its utility for semantic reasoning in applications.",
"title": ""
},
{
"docid": "c74b93fff768f024b921fac7f192102d",
"text": "Motivated by information-theoretic considerations, we pr opose a signalling scheme, unitary spacetime modulation, for multiple-antenna communication links. This modulati on s ideally suited for Rayleigh fast-fading environments, since it does not require the rec iv r to know or learn the propagation coefficients. Unitary space-time modulation uses constellations of T M space-time signals f `; ` = 1; : : : ; Lg, whereT represents the coherence interval during which the fading i s approximately constant, and M < T is the number of transmitter antennas. The columns of each ` are orthonormal. When the receiver does not know the propagation coefficients, which between pa irs of transmitter and receiver antennas are modeled as statistically independent, this modulation per forms very well either when the SNR is high or whenT M . We design some multiple-antenna signal constellations and simulate their effectiveness as measured by bit error probability with maximum likelihood decoding. We demonstrate that two antennas have a 6 dB diversity gain over one antenna at 15 dB SNR. Index Terms —Multi-element antenna arrays, wireless communications, channel coding, fading channels, transmitter and receiver diversity, space-time modu lation",
"title": ""
},
{
"docid": "ef8be5104f9bc4a0f4353ed236b6afb8",
"text": "State-of-the-art human pose estimation methods are based on heat map representation. In spite of the good performance, the representation has a few issues in nature, such as non-differentiable postprocessing and quantization error. This work shows that a simple integral operation relates and unifies the heat map representation and joint regression, thus avoiding the above issues. It is differentiable, efficient, and compatible with any heat map based methods. Its effectiveness is convincingly validated via comprehensive ablation experiments under various settings, specifically on 3D pose estimation, for the first time.",
"title": ""
},
{
"docid": "5f54125c0114f4fadc055e721093a49e",
"text": "In this study, a fuzzy logic based autonomous vehicle control system is designed and tested in The Open Racing Car Simulator (TORCS) environment. The aim of this study is that vehicle complete the race without to get any damage and to get out of the way. In this context, an intelligent control system composed of fuzzy logic and conventional control structures has been developed such that the racing car is able to compete the race autonomously. In this proposed structure, once the vehicle's gearshifts have been automated, a fuzzy logic based throttle/brake control system has been designed such that the racing car is capable to accelerate/decelerate in a realistic manner as well as to drive at desired velocity. The steering control problem is also handled to end up with a racing car that is capable to travel on the road even in the presence of sharp curves. In this context, we have designed a fuzzy logic based positioning system that uses the knowledge of the curvature ahead to determine an appropriate position. The game performance of the developed fuzzy logic systems can be observed from https://youtu.be/qOvEz3-PzRo.",
"title": ""
},
{
"docid": "f8a5fb5f323f036d38959f97815337a5",
"text": "OBJECTIVE\nEarly screening of autism increases the chance of receiving timely intervention. Using the Parent Report Questionnaires is effective in screening autism. The Q-CHAT is a new instrument that has shown several advantages than other screening tools. Because there is no adequate tool for the early screening of autistic traits in Iranian children, we aimed to investigate the adequacy of the Persian translation of Q-CHAT.\n\n\nMETHOD\nAt first, we prepared the Persian translation of the Quantitative Checklist for Autism in Toddlers (Q-CHAT). After that, an appropriate sample was selected and the check list was administered. Our sample included 100 children in two groups (typically developing and autistic children) who had been selected conveniently. Pearson's r was used to determine test-retest reliability, and Cronbach's alpha coefficient was used to explore the internal consistency of Q-CHAT. We used the receiver operating characteristics curve (ROC) to investigate whether Q-CHAT can adequately discriminate between typically developing and ASD children or not. Data analysis was carried out by SPSS 19.\n\n\nRESULT\nThe typically developing group consisted of 50 children with the mean age of 27.14 months, and the ASD group included50 children with the mean age of 29.62 months. The mean of the total score for the typically developing group was 22.4 (SD=6.26) on Q-CHAT and it was 50.94 (SD=12.35) for the ASD group, which was significantly different (p=0.00).The Cronbach's alpha coefficient of the checklist was 0.886, and test-retest reliability was calculated as 0.997 (p<0.01). The estimated area under the curve (AUC) was 0.971. It seems that the total score equal to 30 can be a good cut point to identify toddlers who are at risk of autism (sensitivity= 0.96 and specificity= 0.90).\n\n\nCONCLUSION\nThe Persian translation of Q-CHAT has good reliability and predictive validity and can be used as a screening tool to detect 18 to 24 months old children who are at risk of autism.",
"title": ""
},
{
"docid": "db806183810547435075eb6edd28d630",
"text": "Bilinear models provide an appealing framework for mixing and merging information in Visual Question Answering (VQA) tasks. They help to learn high level associations between question meaning and visual concepts in the image, but they suffer from huge dimensionality issues.,,We introduce MUTAN, a multimodal tensor-based Tucker decomposition to efficiently parametrize bilinear interactions between visual and textual representations. Additionally to the Tucker framework, we design a low-rank matrix-based decomposition to explicitly constrain the interaction rank. With MUTAN, we control the complexity of the merging scheme while keeping nice interpretable fusion relations. We show how the Tucker decomposition framework generalizes some of the latest VQA architectures, providing state-of-the-art results.",
"title": ""
},
{
"docid": "4fd8eb1c592960a0334959fcd74f00d8",
"text": "Automatic grammatical error detection for Chinese has been a big challenge for NLP researchers. Due to the formal and strict grammar rules in Chinese, it is hard for foreign students to master Chinese. A computer-assisted learning tool which can automatically detect and correct Chinese grammatical errors is necessary for those foreign students. Some of the previous works have sought to identify Chinese grammatical errors using templateand learning-based methods. In contrast, this study introduced convolutional neural network (CNN) and long-short term memory (LSTM) for the shared task of Chinese Grammatical Error Diagnosis (CGED). Different from traditional word-based embedding, single word embedding was used as input of CNN and LSTM. The proposed single word embedding can capture both semantic and syntactic information to detect those four type grammatical error. In experimental evaluation, the recall and f1-score of our submitted results Run1 of the TOCFL testing data ranked the fourth place in all submissions in detection-level.",
"title": ""
}
] | scidocsrr |
56bff8526270ff83758c75bc68eb1666 | Development of a cloud-based RTAB-map service for robots | [
{
"docid": "82835828a7f8c073d3520cdb4b6c47be",
"text": "Simultaneous Localization and Mapping (SLAM) for mobile robots is a computationally expensive task. A robot capable of SLAM needs a powerful onboard computer, but this can limit the robot's mobility because of weight and power demands. We consider moving this task to a remote compute cloud, by proposing a general cloud-based architecture for real-time robotics computation, and then implementing a Rao-Blackwellized Particle Filtering-based SLAM algorithm in a multi-node cluster in the cloud. In our implementation, expensive computations are executed in parallel, yielding significant improvements in computation time. This allows the algorithm to increase the complexity and frequency of calculations, enhancing the accuracy of the resulting map while freeing the robot's onboard computer for other tasks. Our method for implementing particle filtering in the cloud is not specific to SLAM and can be applied to other computationally-intensive tasks.",
"title": ""
}
] | [
{
"docid": "3eaa3a1a3829345aaa597cf843f720d6",
"text": "Relationship science is a theory-rich discipline, but there have been no attempts to articulate the broader themes or principles that cut across the theories themselves. We have sought to fill that void by reviewing the psychological literature on close relationships, particularly romantic relationships, to extract its core principles. This review reveals 14 principles, which collectively address four central questions: (a) What is a relationship? (b) How do relationships operate? (c) What tendencies do people bring to their relationships? (d) How does the context affect relationships? The 14 principles paint a cohesive and unified picture of romantic relationships that reflects a strong and maturing discipline. However, the principles afford few of the sorts of conflicting predictions that can be especially helpful in fostering novel theory development. We conclude that relationship science is likely to benefit from simultaneous pushes toward both greater integration across theories (to reduce redundancy) and greater emphasis on the circumstances under which existing (or not-yet-developed) principles conflict with one another.",
"title": ""
},
{
"docid": "5de11e0cbfce77414d1c552007d63892",
"text": "© 2012 Cassisi et al., licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Similarity Measures and Dimensionality Reduction Techniques for Time Series Data Mining",
"title": ""
},
{
"docid": "0d5ba680571a9051e70ababf0c685546",
"text": "• Current deep RL techniques require large amounts of data to find a good policy • Once found, the policy remains a black box to practitioners • Practitioners cannot verify that the policy is making decisions based on reasonable information • MOREL (Motion-Oriented REinforcement Learning) automatically detects moving objects and uses the relevant information for action selection • We gather a dataset using a uniform random policy • Train a network without supervision to capture a structured representation of motion between frames • Network predicts object masks, object motion, and camera motion to warp one frame into the next Introduction Learning to Segment Moving Objects Experiments Visualization",
"title": ""
},
{
"docid": "6e675e8a57574daf83ab78cea25688f5",
"text": "Collecting quality data from software projects can be time-consuming and expensive. Hence, some researchers explore âunsupervisedâ approaches to quality prediction that does not require labelled data. An alternate technique is to use âsupervisedâ approaches that learn models from project data labelled with, say, âdefectiveâ or ânot-defectiveâ. Most researchers use these supervised models since, it is argued, they can exploit more knowledge of the projects. \nAt FSEâ16, Yang et al. reported startling results where unsupervised defect predictors outperformed supervised predictors for effort-aware just-in-time defect prediction. If confirmed, these results would lead to a dramatic simplification of a seemingly complex task (data mining) that is widely explored in the software engineering literature. \nThis paper repeats and refutes those results as follows. (1) There is much variability in the efficacy of the Yang et al. predictors so even with their approach, some supervised data is required to prune weaker predictors away. (2) Their findings were grouped across N projects. When we repeat their analysis on a project-by-project basis, supervised predictors are seen to work better. \nEven though this paper rejects the specific conclusions of Yang et al., we still endorse their general goal. In our our experiments, supervised predictors did not perform outstandingly better than unsupervised ones for effort-aware just-in-time defect prediction. Hence, they may indeed be some combination of unsupervised learners to achieve comparable performance to supervised ones. We therefore encourage others to work in this promising area.",
"title": ""
},
{
"docid": "264c63f249f13bf3eb4fd5faac8f4fa0",
"text": "This paper presents the study to investigate the possibility of the stand-alone micro hydro for low-cost electricity production which can satisfy the energy load requirements of a typical remote and isolated rural area. In this framework, the feasibility study in term of the technical and economical performances of the micro hydro system are determined according to the rural electrification concept. The proposed axial flux permanent magnet (AFPM) generator will be designed for micro hydro under sustainable development to optimize between cost and efficiency by using the local materials and basic engineering knowledge. First of all, the simple simulation of micro hydro model for lighting system is developed by considering the optimal size of AFPM generator. The simulation results show that the optimal micro hydro power plant with 70 W can supply the 9 W compact fluorescent up to 20 set for 8 hours by using pressure of water with 6 meters and 0.141 m3/min of flow rate. Lastly, a proposed micro hydro power plant can supply lighting system for rural electrification up to 525.6 kWh/year or 1,839.60 Baht/year and reduce 0.33 ton/year of CO2 emission.",
"title": ""
},
{
"docid": "bf57a5fcf6db7a9b26090bd9a4b65784",
"text": "Plate osteosynthesis is still recognized as the treatment of choice for most articular fractures, many metaphyseal fractures, and certain diaphyseal fractures such as in the forearm. Since the 1960s, both the techniques and implants used for internal fixation with plates have evolved to provide for improved healing. Most recently, plating methods have focused on the principles of 'biological fixation'. These methods attempt to preserve the blood supply to improve the rate of fracture healing, decrease the need for bone grafting, and decrease the incidence of infection and re-fracture. The purpose of this article is to provide a brief overview of the history of plate osteosynthesis as it relates to the development of the latest minimally invasive surgical techniques.",
"title": ""
},
{
"docid": "fe95e139aab1453750224bd856059fcf",
"text": "IMPORTANCE\nChronic sinusitis is a common inflammatory condition defined by persistent symptomatic inflammation of the sinonasal cavities lasting longer than 3 months. It accounts for 1% to 2% of total physician encounters and is associated with large health care expenditures. Appropriate use of medical therapies for chronic sinusitis is necessary to optimize patient quality of life (QOL) and daily functioning and minimize the risk of acute inflammatory exacerbations.\n\n\nOBJECTIVE\nTo summarize the highest-quality evidence on medical therapies for adult chronic sinusitis and provide an evidence-based approach to assist in optimizing patient care.\n\n\nEVIDENCE REVIEW\nA systematic review searched Ovid MEDLINE (1947-January 30, 2015), EMBASE, and Cochrane Databases. The search was limited to randomized clinical trials (RCTs), systematic reviews, and meta-analyses. Evidence was categorized into maintenance and intermittent or rescue therapies and reported based on the presence or absence of nasal polyps.\n\n\nFINDINGS\nTwenty-nine studies met inclusion criteria: 12 meta-analyses (>60 RCTs), 13 systematic reviews, and 4 RCTs that were not included in any of the meta-analyses. Saline irrigation improved symptom scores compared with no treatment (standardized mean difference [SMD], 1.42 [95% CI, 1.01 to 1.84]; a positive SMD indicates improvement). Topical corticosteroid therapy improved overall symptom scores (SMD, -0.46 [95% CI, -0.65 to -0.27]; a negative SMD indicates improvement), improved polyp scores (SMD, -0.73 [95% CI, -1.0 to -0.46]; a negative SMD indicates improvement), and reduced polyp recurrence after surgery (relative risk, 0.59 [95% CI, 0.45 to 0.79]). Systemic corticosteroids and oral doxycycline (both for 3 weeks) reduced polyp size compared with placebo for 3 months after treatment (P < .001). Leukotriene antagonists improved nasal symptoms compared with placebo in patients with nasal polyps (P < .01). Macrolide antibiotic for 3 months was associated with improved QOL at a single time point (24 weeks after therapy) compared with placebo for patients without polyps (SMD, -0.43 [95% CI, -0.82 to -0.05]).\n\n\nCONCLUSIONS AND RELEVANCE\nEvidence supports daily high-volume saline irrigation with topical corticosteroid therapy as a first-line therapy for chronic sinusitis. A short course of systemic corticosteroids (1-3 weeks), short course of doxycycline (3 weeks), or a leukotriene antagonist may be considered in patients with nasal polyps. A prolonged course (3 months) of macrolide antibiotic may be considered for patients without polyps.",
"title": ""
},
{
"docid": "983ec9cdd75d0860c96f89f3c9b2f752",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at . http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "db822d9deda1a707b6e6385c79aa93e2",
"text": "We propose simple tangible language elements for very young children to use when constructing programmes. The equivalent Turtle Talk instructions are given for comparison. Two examples of the tangible language code are shown to illustrate alternative methods of solving a given challenge.",
"title": ""
},
{
"docid": "980dc3d4b01caac3bf56df039d5ca513",
"text": "In this paper, we study object detection using a large pool of unlabeled images and only a few labeled images per category, named \"few-example object detection\". The key challenge consists in generating trustworthy training samples as many as possible from the pool. Using few training examples as seeds, our method iterates between model training and high-confidence sample selection. In training, easy samples are generated first and, then the poorly initialized model undergoes improvement. As the model becomes more discriminative, challenging but reliable samples are selected. After that, another round of model improvement takes place. To further improve the precision and recall of the generated training samples, we embed multiple detection models in our framework, which has proven to outperform the single model baseline and the model ensemble method. Experiments on PASCAL VOC'07, MS COCO'14, and ILSVRC'13 indicate that by using as few as three or four samples selected for each category, our method produces very competitive results when compared to the state-of-the-art weakly-supervised approaches using a large number of image-level labels.",
"title": ""
},
{
"docid": "c62bc7391e55d66c9e27befe81446ebe",
"text": "Opaque predicates have been widely used to insert superfluous branches for control flow obfuscation. Opaque predicates can be seamlessly applied together with other obfuscation methods such as junk code to turn reverse engineering attempts into arduous work. Previous efforts in detecting opaque predicates are far from mature. They are either ad hoc, designed for a specific problem, or have a considerably high error rate. This paper introduces LOOP, a Logic Oriented Opaque Predicate detection tool for obfuscated binary code. Being different from previous work, we do not rely on any heuristics; instead we construct general logical formulas, which represent the intrinsic characteristics of opaque predicates, by symbolic execution along a trace. We then solve these formulas with a constraint solver. The result accurately answers whether the predicate under examination is opaque or not. In addition, LOOP is obfuscation resilient and able to detect previously unknown opaque predicates. We have developed a prototype of LOOP and evaluated it with a range of common utilities and obfuscated malicious programs. Our experimental results demonstrate the efficacy and generality of LOOP. By integrating LOOP with code normalization for matching metamorphic malware variants, we show that LOOP is an appealing complement to existing malware defenses.",
"title": ""
},
{
"docid": "0b08e657d012d26310c88e2129c17396",
"text": "In order to accurately determine the growth of greenhouse crops, the system based on AVR Single Chip microcontroller and wireless sensor networks is developed, it transfers data through the wireless transceiver devices without setting up electric wiring, the system structure is simple. The monitoring and management center can control the temperature and humidity of the greenhouse, measure the carbon dioxide content, and collect the information about intensity of illumination, and so on. In addition, the system adopts multilevel energy memory. It combines energy management with energy transfer, which makes the energy collected by solar energy batteries be used reasonably. Therefore, the self-managing energy supply system is established. The system has advantages of low power consumption, low cost, good robustness, extended flexible. An effective tool is provided for monitoring and analysis decision-making of the greenhouse environment.",
"title": ""
},
{
"docid": "7c8d1b0c77acb4fd6db6e7f887e66133",
"text": "Subdural hematomas (SDH) in infants often result from nonaccidental head injury (NAHI), which is diagnosed based on the absence of history of trauma and the presence of associated lesions. When these are lacking, the possibility of spontaneous SDH in infant (SSDHI) is raised, but this entity is hotly debated; in particular, the lack of positive diagnostic criteria has hampered its recognition. The role of arachnoidomegaly, idiopathic macrocephaly, and dehydration in the pathogenesis of SSDHI is also much discussed. We decided to analyze apparent cases of SSDHI from our prospective databank. We selected cases of SDH in infants without systemic disease, history of trauma, and suspicion of NAHI. All cases had fundoscopy and were evaluated for possible NAHI. Head growth curves were reconstructed in order to differentiate idiopathic from symptomatic macrocrania. Sixteen patients, 14 males and two females, were diagnosed with SSDHI. Twelve patients had idiopathic macrocrania, seven of these being previously diagnosed with arachnoidomegaly on imaging. Five had risk factors for dehydration, including two with severe enteritis. Two patients had mild or moderate retinal hemorrhage, considered not indicative of NAHI. Thirteen patients underwent cerebrospinal fluid drainage. The outcome was favorable in almost all cases; one child has sequels, which were attributable to obstetrical difficulties. SSDHI exists but is rare and cannot be diagnosed unless NAHI has been questioned thoroughly. The absence of traumatic features is not sufficient, and positive elements like macrocrania, arachnoidomegaly, or severe dehydration are necessary for the diagnosis of SSDHI.",
"title": ""
},
{
"docid": "0ad4432a79ea6b3eefbe940adf55ff7b",
"text": "This study reviews the long-term outcome of prostheses and fixtures (implants) in 759 totally edentulous jaws of 700 patients. A total of 4,636 standard fixtures were placed and followed according to the osseointegration method for a maximum of 24 years by the original team at the University of Göteborg. Standardized annual clinical and radiographic examinations were conducted as far as possible. A lifetable approach was applied for statistical analysis. Sufficient numbers of fixtures and prostheses for a detailed statistical analysis were present for observation times up to 15 years. More than 95% of maxillae had continuous prosthesis stability at 5 and 10 years, and at least 92% at 15 years. The figure for mandibles was 99% at all time intervals. Calculated from the time of fixture placement, the estimated survival rates for individual fixtures in the maxilla were 84%, 89%, and 92% at 5 years; 81% and 82% at 10 years; and 78% at 15 years. In the mandible they were 91%, 98%, and 99% at 5 years; 89% and 98% at 10 years; and 86% at 15 years. (The different percentages at 5 and 10 years refer to results for different routine groups of fixtures with 5 to 10, 10 to 15, and 1 to 5 years of observation time, respectively.) The results of this study concur with multicenter and earlier results for the osseointegration method.",
"title": ""
},
{
"docid": "2b688f9ca05c2a79f896e3fee927cc0d",
"text": "This paper presents a new synchronous-reference frame (SRF)-based control method to compensate power-quality (PQ) problems through a three-phase four-wire unified PQ conditioner (UPQC) under unbalanced and distorted load conditions. The proposed UPQC system can improve the power quality at the point of common coupling on power distribution systems under unbalanced and distorted load conditions. The simulation results based on Matlab/Simulink are discussed in detail to support the SRF-based control method presented in this paper. The proposed approach is also validated through experimental study with the UPQC hardware prototype.",
"title": ""
},
{
"docid": "2a7983e91cd674d95524622e82c4ded7",
"text": "• FC (fully-connected) layer takes the pooling results, produces features FROI, Fcontext, Fframe, and feeds them into two streams, inspired by [BV16]. • Classification stream produces a matrix of classification scores S = [FCcls(FROI1); . . . ;FCcls(FROIK)] ∈ RK×C • Localization stream implements the proposed context-aware guidance that uses FROIk, Fcontextk, Fframek to produce a localization score matrix L ∈ RK×C.",
"title": ""
},
{
"docid": "9e208a394475931aafdcdfbad1408489",
"text": "Ocular complications following cosmetic filler injections are serious situations. This study provided scientific evidence that filler in the facial and the superficial temporal arteries could enter into the orbits and the globes on both sides. We demonstrated the existence of an embolic channel connecting the arterial system of the face to the ophthalmic artery. After the removal of the ocular contents from both eyes, liquid dye was injected into the cannulated channel of the superficial temporal artery in six soft embalmed cadavers and different color dye was injected into the facial artery on both sides successively. The interior sclera was monitored for dye oozing from retrograde ophthalmic perfusion. Among all 12 globes, dye injections from the 12 superficial temporal arteries entered ipsilateral globes in three and the contralateral globe in two arteries. Dye from the facial artery was infused into five ipsilateral globes and in three contralateral globes. Dye injections of two facial arteries in the same cadaver resulted in bilateral globe staining but those of the superficial temporal arteries did not. Direct communications between the same and different arteries of the four cannulated arteries were evidenced by dye dripping from the cannulating needle hubs in 14 of 24 injected arteries. Compression of the orbital rim at the superior nasal corner retarded ocular infusion in 11 of 14 arterial injections. Under some specific conditions favoring embolism, persistent interarterial anastomoses between the face and the eye allowed filler emboli to flow into the globe causing ocular complications. This journal requires that authors assign a level of evidence to each submission to which Evidence-Based Medicine rankings are applicable. This excludes Review Articles, Book Reviews, and manuscripts that concern Basic Science, Animal Studies, Cadaver Studies, and Experimental Studies. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors http://www.springer.com/00266 .",
"title": ""
},
{
"docid": "db3c5c93daf97619ad927532266b3347",
"text": "Car9, a dodecapeptide identified by cell surface display for its ability to bind to the edge of carbonaceous materials, also binds to silica with high affinity. The interaction can be disrupted with l-lysine or l-arginine, enabling a broad range of technological applications. Previously, we reported that C-terminal Car9 extensions support efficient protein purification on underivatized silica. Here, we show that the Car9 tag is functional and TEV protease-excisable when fused to the N-termini of target proteins, and that it supports affinity purification under denaturing conditions, albeit with reduced yields. We further demonstrate that capture of Car9-tagged proteins is enhanced on small particle size silica gels with large pores, that the concomitant problem of nonspecific protein adsorption can be solved by lysing cells in the presence of 0.3% Tween 20, and that efficient elution is achieved at reduced l-lysine concentrations under alkaline conditions. An optimized small-scale purification kit incorporating the above features allows Car9-tagged proteins to be inexpensively recovered in minutes with better than 90% purity. The Car9 affinity purification technology should prove valuable for laboratory-scale applications requiring rapid access to milligram-quantities of proteins, and for preparative scale purification schemes where cost and productivity are important factors.",
"title": ""
},
{
"docid": "3f207c3c622d1854a7ad6c5365354db1",
"text": "The field of Music Information Retrieval has always acknowledged the need for rigorous scientific evaluations, and several efforts have set out to develop and provide the infrastructure, technology and methodologies needed to carry out these evaluations. The community has enormously gained from these evaluation forums, but we have reached a point where we are stuck with evaluation frameworks that do not allow us to improve as much and as well as we want. The community recently acknowledged this problem and showed interest in addressing it, though it is not clear what to do to improve the situation. We argue that a good place to start is again the Text IR field. Based on a formalization of the evaluation process, this paper presents a survey of past evaluation work in the context of Text IR, from the point of view of validity, reliability and efficiency of the experiments. We show the problems that our community currently has in terms of evaluation, point to several lines of research to improve it and make various proposals in that line.",
"title": ""
},
{
"docid": "84b018fa45e06755746309014854bb9a",
"text": "For years, ontologies have been known in computer science as consensual models of domains of discourse, usually implemented as formal definitions of the relevant conceptual entities. Researchers have written much about the potential benefits of using them, and most of us regard ontologies as central building blocks of the semantic Web and other semantic systems. Unfortunately, the number and quality of actual, \"non-toy\" ontologies available on the Web today is remarkably low. This implies that the semantic Web community has yet to build practically useful ontologies for a lot of relevant domains in order to make the semantic Web a reality. Theoretically minded advocates often assume that the lack of ontologies is because the \"stupid business people haven't realized ontologies' enormous benefits.\" As a liberal market economist, the author assumes that humans can generally figure out what's best for their well-being, at least in the long run, and that they act accordingly. In other words, the fact that people haven't yet created as many useful ontologies as the ontology research community would like might indicate either unresolved technical limitations or the existence of sound rationales for why individuals refrain from building them - or both. Indeed, several social and technical difficulties exist that put a brake on developing and eventually constrain the space of possible ontologies",
"title": ""
}
] | scidocsrr |
d0c87d798ac1ff9a5a967e9dcefe81f7 | Chinese Preposition Selection for Grammatical Error Diagnosis | [
{
"docid": "aa80366addac8af9cc5285f98663b9b6",
"text": "Automatic detection of sentence errors is an important NLP task and is valuable to assist foreign language learners. In this paper, we investigate the problem of word ordering errors in Chinese sentences and propose classifiers to detect this type of errors. Word n-gram features in Google Chinese Web 5-gram corpus and ClueWeb09 corpus, and POS features in the Chinese POStagged ClueWeb09 corpus are adopted in the classifiers. The experimental results show that integrating syntactic features, web corpus features and perturbation features are useful for word ordering error detection, and the proposed classifier achieves 71.64% accuracy in the experimental datasets. 協助非中文母語學習者偵測中文句子語序錯誤 自動偵測句子錯誤是自然語言處理研究一項重要議題,對於協助外語學習者很有價值。在 這篇論文中,我們研究中文句子語序錯誤的問題,並提出分類器來偵測這種類型的錯誤。 在分類器中我們使用的特徵包括:Google 中文網路 5-gram 語料庫、與 ClueWeb09 語料庫 的中文詞彙 n-grams及中文詞性標注特徵。實驗結果顯示,整合語法特徵、網路語料庫特 徵、及擾動特徵對偵測中文語序錯誤有幫助。在實驗所用的資料集中,合併使用這些特徵 所得的分類器效能可達 71.64%。",
"title": ""
}
] | [
{
"docid": "ab157111a39a4f081bdf0126e869f65d",
"text": "As event-related brain potential (ERP) researchers have increased the number of recording sites, they have gained further insights into the electrical activity in the neural networks underlying explicit memory. A review of the results of such ERP mapping studies suggests that there is good correspondence between ERP results and those from brain imaging studies that map hemodynamic changes. This concordance is important because the combination of the high temporal resolution of ERPs with the high spatial resolution of hemodynamic imaging methods will provide a greatly increased understanding of the spatio-temporal dynamics of the brain networks that encode and retrieve explicit memories.",
"title": ""
},
{
"docid": "0ecb00d99dc497a0e902cda198219dff",
"text": "Security vulnerabilities typically arise from bugs in input validation and in the application logic. Fuzz-testing is a popular security evaluation technique in which hostile inputs are crafted and passed to the target software in order to reveal bugs. However, in the case of SCADA systems, the use of proprietary protocols makes it difficult to apply existing fuzz-testing techniques as they work best when the protocol semantics are known, targets can be instrumented and large network traces are available. This paper describes a fuzz-testing solution involving LZFuzz, an inline tool that provides a domain expert with the ability to effectively fuzz SCADA devices.",
"title": ""
},
{
"docid": "8a3d5500299676e160f661d87c13d617",
"text": "A novel method for visual place recognition is introduced and evaluated, demonstrating robustness to perceptual aliasing and observation noise. This is achieved by increasing discrimination through a more structured representation of visual observations. Estimation of observation likelihoods are based on graph kernel formulations, utilizing both the structural and visual information encoded in covisibility graphs. The proposed probabilistic model is able to circumvent the typically difficult and expensive posterior normalization procedure by exploiting the information available in visual observations. Furthermore, the place recognition complexity is independent of the size of the map. Results show improvements over the state-of-theart on a diverse set of both public datasets and novel experiments, highlighting the benefit of the approach.",
"title": ""
},
{
"docid": "3c444d8918a31831c2dc73985d511985",
"text": "This paper presents methods for collecting and analyzing physiological data during real-world driving tasks to determine a driver's relative stress level. Electrocardiogram, electromyogram, skin conductance, and respiration were recorded continuously while drivers followed a set route through open roads in the greater Boston area. Data from 24 drives of at least 50-min duration were collected for analysis. The data were analyzed in two ways. Analysis I used features from 5-min intervals of data during the rest, highway, and city driving conditions to distinguish three levels of driver stress with an accuracy of over 97% across multiple drivers and driving days. Analysis II compared continuous features, calculated at 1-s intervals throughout the entire drive, with a metric of observable stressors created by independent coders from videotapes. The results show that for most drivers studied, skin conductivity and heart rate metrics are most closely correlated with driver stress level. These findings indicate that physiological signals can provide a metric of driver stress in future cars capable of physiological monitoring. Such a metric could be used to help manage noncritical in-vehicle information systems and could also provide a continuous measure of how different road and traffic conditions affect drivers.",
"title": ""
},
{
"docid": "76151ea99f24bb16f98bf7793f253002",
"text": "The banning in 2006 of the use of antibiotics as animal growth promoters in the European Union has increased demand from producers for alternative feed additives that can be used to improve animal production. This review gives an overview of the most common non-antibiotic feed additives already being used or that could potentially be used in ruminant nutrition. Probiotics, dicarboxylic acids, enzymes and plant-derived products including saponins, tannins and essential oils are presented. The known modes of action and effects of these additives on feed digestion and more especially on rumen fermentations are described. Their utility and limitations in field conditions for modern ruminant production systems and their compliance with the current legislation are also discussed.",
"title": ""
},
{
"docid": "19b96cd469f1b81e45cf11a0530651a8",
"text": "only Painful initially, patient preference No cost Digitation Pilot RCTs 28 Potential risk of premature closure No cost Open wound (fig 4⇓) RCT = randomised controlled trial. For personal use only: See rights and reprints http://www.bmj.com/permissions Subscribe: http://www.bmj.com/subscribe BMJ 2017;356:j475 doi: 10.1136/bmj.j475 (Published 2017 February 21) Page 4 of 6",
"title": ""
},
{
"docid": "2fbe9db6c676dd64c95e72e8990c63f0",
"text": "Community detection is one of the most important problems in the field of complex networks in recent years. Themajority of present algorithms only find disjoint communities, however, community often overlap to some extent in many real-world networks. In this paper, an improvedmulti-objective quantum-behaved particle swarm optimization (IMOQPSO) based on spectral-clustering is proposed to detect the overlapping community structure in complex networks. Firstly, the line graph of the graph modeling the network is formed, and a spectral method is employed to extract the spectral information of the line graph. Secondly, IMOQPSO is employed to solve the multi-objective optimization problem so as to resolve the separated community structure in the line graph which corresponding to the overlapping community structure in the graph presenting the network. Finally, a fine-tuning strategy is adopted to improve the accuracy of community detection. The experiments on both synthetic and real-world networks demonstrate our method achieves cover results which fit the real situation in an even better fashion.",
"title": ""
},
{
"docid": "19977bf55573bb1d51a85b0a2febba2b",
"text": "In the general 3D scene, the correlation of depth image and corresponding color image exists, so many filtering methods have been proposed to improve the quality of depth images according to this correlation. Unlike the conventional methods, in this paper both depth and color information can be jointly employed to improve the quality of compressed depth image by the way of iterative guidance. Firstly, due to noises and blurring in the compressed image, a depth pre-filtering method is essential to remove artifact noises. Considering that the received geometry structure in the distorted depth image is more reliable than its color image, the color information is merged with depth image to get depth-merged color image. Then the depth image and its corresponding depth-merged color image can be used to refine the quality of the distorted depth image using joint iterative guidance filtering method. Therefore, the efficient depth structural information included in the distorted depth images are preserved relying on depth itself, while the corresponding color structural information are employed to improve the quality of depth image. We demonstrate the efficiency of the proposed filtering method by comparing objective and visual quality of the synthesized image with many existing depth filtering methods.",
"title": ""
},
{
"docid": "c174facf9854db5aae149e82f9f2a239",
"text": "A new feeding technique for printed Log-periodic dipole arrays (LPDAs) is presented, and used to design a printed LPDA operating between 4 and 18 GHz. The antenna has been designed using CST MICROWAVE STUDIO 2010, and the simulation results show that the antenna can be used as an Ultra Wideband Antenna in the range 6-9 GHz.",
"title": ""
},
{
"docid": "ee027c9ee2f66bc6cf6fb32a5697ee49",
"text": "Patellofemoral pain (PFP) is a very common problem in athletes who participate in jumping, cutting and pivoting sports. Several risk factors may play a part in the pathogenesis of PFP. Overuse, trauma and intrinsic risk factors are particularly important among athletes. Physical examination has a key role in PFP diagnosis. Furthermore, common risk factors should be investigated, such as hip muscle dysfunction, poor core muscle endurance, muscular tightness, excessive foot pronation and patellar malalignment. Imaging is seldom needed in special cases. Many possible interventions are recommended for PFP management. Due to the multifactorial nature of PFP, the clinical approach should be individualized, and the contribution of different factors should be considered and managed accordingly. In most cases, activity modification and rehabilitation should be tried before any surgical interventions.",
"title": ""
},
{
"docid": "296ce1f0dd7bf02c8236fa858bb1957c",
"text": "As many as one in 20 people in Europe and North America have some form of autoimmune disease. These diseases arise in genetically predisposed individuals but require an environmental trigger. Of the many potential environmental factors, infections are the most likely cause. Microbial antigens can induce cross-reactive immune responses against self-antigens, whereas infections can non-specifically enhance their presentation to the immune system. The immune system uses fail-safe mechanisms to suppress infection-associated tissue damage and thus limits autoimmune responses. The association between infection and autoimmune disease has, however, stimulated a debate as to whether such diseases might also be triggered by vaccines. Indeed there are numerous claims and counter claims relating to such a risk. Here we review the mechanisms involved in the induction of autoimmunity and assess the implications for vaccination in human beings.",
"title": ""
},
{
"docid": "045ce09ddca696e2882413a8d251c5f6",
"text": "Predicting student performance in tertiary institutions has potential to improve curriculum advice given to students, the planning of interventions for academic support and monitoring and curriculum design. The student performance prediction problem, as defined in this study, is the prediction of a student's mark for a module, given the student's performance in previously attempted modules. The prediction problem is amenable to machine learning techniques, provided that sufficient data is available for analysis. This work reports on a study undertaken at the College of Agriculture, Engineering and Science at University of KwaZulu- Natal that investigates the efficacy of Matrix Factorization as a technique for solving the prediction problem. The study uses Singular Value Decomposition (SVD), a Matrix Factorization technique that has been successfully used in recommender systems. The performance of the technique was benchmarked against the use of student and course average marks as predictors of performance. The results obtained suggests that Matrix Factorization performs better than both benchmarks.",
"title": ""
},
{
"docid": "d90acdfc572cf39d295cb78dd313e5f5",
"text": "The TORCS racing simulator has become a standard testbed used in many recent reinforcement learning competitions, where an agent must learn to drive a car around a track using a small set of task-specific features. In this paper, large, recurrent neural networks (with over 1 million weights) are evolved to solve a much more challenging version of the task that instead uses only a stream of images from the driver’s perspective as input. Evolving such large nets is made possible by representing them in the frequency domain as a set of coefficients that are transformed into weight matrices via an inverse Fourier-type transform. To our knowledge this is the first attempt to tackle TORCS using vision, and successfully evolve a neural network controllers of this size.",
"title": ""
},
{
"docid": "2bd5dd2d220d3715be8228050593c4ca",
"text": "We present a sensitivity analysis-based method for explaining prediction models that can be applied to any type of classification or regression model. Its advantage over existing general methods is that all subsets of input features are perturbed, so interactions and redundancies between features are taken into account. Furthermore, when explaining an additive model, the method is equivalent to commonly used additive model-specific methods. We illustrate the method’s usefulness with examples from artificial and real-world data sets and an empirical analysis of running times. Results from a controlled experiment with 122 participants suggest that the method’s explanations improved the participants’ understanding of the model.",
"title": ""
},
{
"docid": "f79eca0cafc35ed92fd8ffd2e7a4ab60",
"text": "We investigate the novel task of online dispute detection and propose a sentiment analysis solution to the problem: we aim to identify the sequence of sentence-level sentiments expressed during a discussion and to use them as features in a classifier that predicts the DISPUTE/NON-DISPUTE label for the discussion as a whole. We evaluate dispute detection approaches on a newly created corpus of Wikipedia Talk page disputes and find that classifiers that rely on our sentiment tagging features outperform those that do not. The best model achieves a very promising F1 score of 0.78 and an accuracy of 0.80.",
"title": ""
},
{
"docid": "aa1c565018371cf12e703e06f430776b",
"text": "We propose a graph-based semantic model for representing document content. Our method relies on the use of a semantic network, namely the DBpedia knowledge base, for acquiring fine-grained information about entities and their semantic relations, thus resulting in a knowledge-rich document model. We demonstrate the benefits of these semantic representations in two tasks: entity ranking and computing document semantic similarity. To this end, we couple DBpedia's structure with an information-theoretic measure of concept association, based on its explicit semantic relations, and compute semantic similarity using a Graph Edit Distance based measure, which finds the optimal matching between the documents' entities using the Hungarian method. Experimental results show that our general model outperforms baselines built on top of traditional methods, and achieves a performance close to that of highly specialized methods that have been tuned to these specific tasks.",
"title": ""
},
{
"docid": "9766e0507346e46e24790a4873979aa4",
"text": "Extreme learning machine (ELM) is proposed for solving a single-layer feed-forward network (SLFN) with fast learning speed and has been confirmed to be effective and efficient for pattern classification and regression in different fields. ELM originally focuses on the supervised, semi-supervised, and unsupervised learning problems, but just in the single domain. To our best knowledge, ELM with cross-domain learning capability in subspace learning has not been exploited very well. Inspired by a cognitive-based extreme learning machine technique (Cognit Comput. 6:376–390, 1; Cognit Comput. 7:263–278, 2.), this paper proposes a unified subspace transfer framework called cross-domain extreme learning machine (CdELM), which aims at learning a common (shared) subspace across domains. Three merits of the proposed CdELM are included: (1) A cross-domain subspace shared by source and target domains is achieved based on domain adaptation; (2) ELM is well exploited in the cross-domain shared subspace learning framework, and a new perspective is brought for ELM theory in heterogeneous data analysis; (3) the proposed method is a subspace learning framework and can be combined with different classifiers in recognition phase, such as ELM, SVM, nearest neighbor, etc. Experiments on our electronic nose olfaction datasets demonstrate that the proposed CdELM method significantly outperforms other compared methods.",
"title": ""
},
{
"docid": "9920660432c2a2cf1f83ed6b8412b433",
"text": "We propose a new approach for metric learning by framing it as learning a sparse combination of locally discriminative metrics that are inexpensive to generate from the training data. This flexible framework allows us to naturally derive formulations for global, multi-task and local metric learning. The resulting algorithms have several advantages over existing methods in the literature: a much smaller number of parameters to be estimated and a principled way to generalize learned metrics to new testing data points. To analyze the approach theoretically, we derive a generalization bound that justifies the sparse combination. Empirically, we evaluate our algorithms on several datasets against state-of-theart metric learning methods. The results are consistent with our theoretical findings and demonstrate the superiority of our approach in terms of classification performance and scalability.",
"title": ""
},
{
"docid": "9cc997e886bea0ac5006c9ee734b7906",
"text": "Additive manufacturing technology using inkjet offers several improvements to electronics manufacturing compared to current nonadditive masking technologies. Manufacturing processes can be made more efficient, straightforward and flexible compared to subtractive masking processes, several time-consuming and expensive steps can be omitted. Due to the additive process, material loss is minimal, because material is never removed as with etching processes. The amounts of used material and waste are smaller, which is advantageous in both productivity and environmental means. Furthermore, the additive inkjet manufacturing process is flexible allowing fast prototyping, easy design changes and personalization of products. Additive inkjet processing offers new possibilities to electronics integration, by enabling direct writing on various surfaces, and component interconnection without a specific substrate. The design and manufacturing of inkjet printed modules differs notably from the traditional way to manufacture electronics. In this study a multilayer inkjet interconnection process to integrate functional systems was demonstrated, and the issues regarding the design and manufacturing were considered. r 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f4be66419d03715ca686bea9665bf734",
"text": "Data augmentation is a key element in training high-dimensional models. In this approach, one synthesizes new observations by applying pre-specified transformations to the original training data; e.g. new images are formed by rotating old ones. Current augmentation schemes, however, rely on manual specification of the applied transformations, making data augmentation an implicit form of feature engineering. With an eye towards true end-to-end learning, we suggest learning the applied transformations on a per-class basis. Particularly, we align image pairs within each class under the assumption that the spatial transformation between images belongs to a large class of diffeomorphisms. We then learn a class-specific probabilistic generative models of the transformations in a Riemannian submanifold of the Lie group of diffeomorphisms. We demonstrate significant performance improvements in training deep neural nets over manually-specified augmentation schemes. Our code and augmented datasets are available online. Appearing in Proceedings of the 19 International Conference on Artificial Intelligence and Statistics (AISTATS) 2016, Cadiz, Spain. JMLR: W&CP volume 41. Copyright 2016 by the authors.",
"title": ""
}
] | scidocsrr |
2c821836875d230d3b478873ca4abcd9 | An Automated Methodology for Worker Path Generation and Safety Assessment in Construction Projects | [
{
"docid": "5bef975924d427c3ae186d92a93d4f74",
"text": "The Voronoi diagram of a set of sites partitions space into regions, one per site; the region for a site s consists of all points closer to s than to any other site. The dual of the Voronoi diagram, the Delaunay triangulation, is the unique triangulation such that the circumsphere of every simplex contains no sites in its interior. Voronoi diagrams and Delaunay triangulations have been rediscovered or applied in many areas of mathematics and the natural sciences; they are central topics in computational geometry, with hundreds of papers discussing algorithms and extensions. Section 27.1 discusses the definition and basic properties in the usual case of point sites in R with the Euclidean metric, while Section 27.2 gives basic algorithms. Some of the many extensions obtained by varying metric, sites, environment, and constraints are discussed in Section 27.3. Section 27.4 finishes with some interesting and nonobvious structural properties of Voronoi diagrams and Delaunay triangulations.",
"title": ""
}
] | [
{
"docid": "72e5b92632824d3633539727125763bc",
"text": "NB-IoT system focues on indoor coverage, low cost, long battery life, and enabling a large number of connected devices. The NB-IoT system in the inband mode should share the antenna with the LTE system and support mult-PRB to cover many terminals. Also, the number of used antennas should be minimized for price competitiveness. In this paper, the structure and implementation of the NB-IoT base station system will be describe.",
"title": ""
},
{
"docid": "a26dd0133a66a8868d84ef418bcaf9f5",
"text": "In performance display advertising a key metric of a campaign effectiveness is its conversion rate -- the proportion of users who take a predefined action on the advertiser website, such as a purchase. Predicting this conversion rate is thus essential for estimating the value of an impression and can be achieved via machine learning. One difficulty however is that the conversions can take place long after the impression -- up to a month -- and this delayed feedback hinders the conversion modeling. We tackle this issue by introducing an additional model that captures the conversion delay. Intuitively, this probabilistic model helps determining whether a user that has not converted should be treated as a negative sample -- when the elapsed time is larger than the predicted delay -- or should be discarded from the training set -- when it is too early to tell. We provide experimental results on real traffic logs that demonstrate the effectiveness of the proposed model.",
"title": ""
},
{
"docid": "19f4100f2e1d5655edca03a269adf79a",
"text": "OBJECTIVES\nTo assess the influence of conventional glass ionomer cement (GIC) vs resin-modified GIC (RMGIC) as a base material for novel, super-closed sandwich restorations (SCSR) and its effect on shrinkage-induced crack propensity and in vitro accelerated fatigue resistance.\n\n\nMETHODS\nA standardized MOD slottype tooth preparation was applied to 30 extracted maxillary molars (5 mm depth/5 mm buccolingual width). A modified sandwich restoration was used, in which the enamel/dentin bonding agent was applied first (Optibond FL, Kerr), followed by a Ketac Molar (3M ESPE)(group KM, n = 15) or Fuji II LC (GC) (group FJ, n = 15) base, leaving 2 mm for composite resin material (Miris 2, Coltène-Whaledent). Shrinkageinduced enamel cracks were tracked with photography and transillumination. Samples were loaded until fracture or to a maximum of 185,000 cycles under isometric chewing (5 H z), starting with a load of 200 N (5,000 X), followed by stages of 400, 600, 800, 1,000, 1,200, and 1,400 N at a maximum of 30,000 X each. Groups were compared using the life table survival analysis (α = .008, Bonferroni method).\n\n\nRESULTS\nGroup FJ showed the highest survival rate (40% intact specimens) but did not differ from group KM (20%) or traditional direct restorations (13%, previous data). SCSR generated less shrinkage-induced cracks. Most failures were re-restorable (above the cementoenamel junction [CEJ]).\n\n\nCONCLUSIONS\nInclusion of GIC/RMGIC bases under large direct SCSRs does not affect their fatigue strength but tends to decrease the shrinkage-induced crack propensity.\n\n\nCLINICAL SIGNIFICANCE\nThe use of GIC/ RMGIC bases and the SCSR is an easy way to minimize polymerization shrinkage stress in large MOD defects without weakening the restoration.",
"title": ""
},
{
"docid": "08a6297a0959e0c12801b603d585e12c",
"text": "The national exchequer, the banking industry and regular citizens all incur a high overhead in using physical cash. Electronic cash and cell phone-based payment in particular is a viable alternative to physical cash since it incurs much lower overheads and offers more convenience. Because security is of paramount importance in financial transactions, it is imperative that attack vectors in this application be identified and analyzed. In this paper, we investigate vulnerabilities in several dimensions – in choice of hardware/software platform, in technology and in cell phone operating system. We examine how existing and future mobile worms can severely compromise the security of transacting payments through a cell phone.",
"title": ""
},
{
"docid": "b426696d7c1764502706696b0d462a34",
"text": "The discriminative model learning for image denoising has been recently attracting considerable attentions due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle with several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing.",
"title": ""
},
{
"docid": "b0991cd60b3e94c0ed3afede89e13f36",
"text": "It has been established that incorporating word cluster features derived from large unlabeled corpora can significantly improve prediction of linguistic structure. While previous work has focused primarily on English, we extend these results to other languages along two dimensions. First, we show that these results hold true for a number of languages across families. Second, and more interestingly, we provide an algorithm for inducing cross-lingual clusters and we show that features derived from these clusters significantly improve the accuracy of cross-lingual structure prediction. Specifically, we show that by augmenting direct-transfer systems with cross-lingual cluster features, the relative error of delexicalized dependency parsers, trained on English treebanks and transferred to foreign languages, can be reduced by up to 13%. When applying the same method to direct transfer of named-entity recognizers, we observe relative improvements of up to 26%.",
"title": ""
},
{
"docid": "eed788297c1b49895f8f19012b6231f2",
"text": "Can the choice of words and tone used by the authors of financial news articles correlate to measurable stock price movements? If so, can the magnitude of price movement be predicted using these same variables? We investigate these questions using the Arizona Financial Text (AZFinText) system, a financial news article prediction system, and pair it with a sentiment analysis tool. Through our analysis, we found that subjective news articles were easier to predict in price direction (59.0% versus 50.0% of chance alone) and using a simple trading engine, subjective articles garnered a 3.30% return. Looking further into the role of author tone in financial news articles, we found that articles with a negative sentiment were easiest to predict in price direction (50.9% versus 50.0% of chance alone) and a 3.04% trading return. Investigating negative sentiment further, we found that our system was able to predict price decreases in articles of a positive sentiment 53.5% of the time, and price increases in articles of a negative",
"title": ""
},
{
"docid": "b1e1d8dcd0fcd2a88b29f31c60b11a11",
"text": "Ergativity refers to patterning in a language whereby the subject of a transitive clause behaves differently to the subject of an intransitive clause, which behaves like the object of a transitive clause. Ergativity can be manifested in morphology, lexicon, syntax, and discourse organisation. This article overviews what is known about ergativity in the world’s languages, with a particular focus on one type of morphological ergativity, namely in case-marking. While languages are rarely entirely consistent in ergative case-marking, and the inconsistencies vary considerably across languages, they are nevertheless not random. Thus splits in casemarking, in which ergative patterning is restricted to certain domains, follow (with few exceptions) universal tendencies. So also are there striking cross-linguistic commonalities among systems in which ergative case-marking is optional, although systematic investigation of this domain is quite recent. Recent work on the diachrony of ergative systems and case-markers is overviewed, and issues for further research are identified.",
"title": ""
},
{
"docid": "445487bf85f9731b94f79a8efc9d2830",
"text": "The realism of avatars in terms of behavior and form is critical to the development of collaborative virtual environments. In the study we utilized state of the art, real-time face tracking technology to track and render facial expressions unobtrusively in a desktop CVE. Participants in dyads interacted with each other via either a video-conference (high behavioral realism and high form realism), voice only (low behavioral realism and low form realism), or an emotibox that rendered the dimensions of facial expressions abstractly in terms of color, shape, and orientation on a rectangular polygon (high behavioral realism and low form realism). Verbal and non-verbal self-disclosure were lowest in the videoconference condition while self-reported copresence and success of transmission and identification of emotions were lowest in the emotibox condition. Previous work demonstrates that avatar realism increases copresence while decreasing self-disclosure. We discuss the possibility of a hybrid realism solution that maintains high copresence without lowering self-disclosure, and the benefits of such an avatar on applications such as distance learning and therapy.",
"title": ""
},
{
"docid": "ceaddf275a66ffa7b39513ce2a2510e8",
"text": "This paper studies design and implementation of the Turbo encoder to be an embedded module in the in-vehicle system (IVS) chip. Field programmable gate array (FPGA) is employed to develop the Turbo encoder module. Both serial and parallel computations for the encoding technique are studied. The two design methods are presented and analyzed. Developing the parallel computation method, it is shown that both chip size and processing time are improved. The logic utilization is enhanced by 73% and the processing time is reduced by 58%. The Turbo encoder module is designed, simulated, and synthesized using Xilinx tools. Xilinx Zynq-7000 is employed as an FPGA device to implement the developed module. The Turbo encoder module is designed to be a part of the IVS chip on a single programmable device.",
"title": ""
},
{
"docid": "0e19123e438f39c4404d4bd486348247",
"text": "Boundary and edge cues are highly beneficial in improving a wide variety of vision tasks such as semantic segmentation, object recognition, stereo, and object proposal generation. Recently, the problem of edge detection has been revisited and significant progress has been made with deep learning. While classical edge detection is a challenging binary problem in itself, the category-aware semantic edge detection by nature is an even more challenging multi-label problem. We model the problem such that each edge pixel can be associated with more than one class as they appear in contours or junctions belonging to two or more semantic classes. To this end, we propose a novel end-to-end deep semantic edge learning architecture based on ResNet and a new skip-layer architecture where category-wise edge activations at the top convolution layer share and are fused with the same set of bottom layer features. We then propose a multi-label loss function to supervise the fused activations. We show that our proposed architecture benefits this problem with better performance, and we outperform the current state-of-the-art semantic edge detection methods by a large margin on standard data sets such as SBD and Cityscapes.",
"title": ""
},
{
"docid": "5088c5e6880f8557fa37b824b7d91b28",
"text": "Localization of sensor nodes is an important aspect in Wireless Sensor Networks (WSNs). This paper presents an overview of the major localization techniques for WSNs. These techniques are classified into centralized and distributed depending on where the computational effort is carried out. The paper concentrates on the factors that need to be considered when selecting a localization technique. The advantages and limitation of various techniques are also discussed. Finally, future research directions and challenges are highlighted.",
"title": ""
},
{
"docid": "6f26f4409d418fe69b1d43ec9b4f8b39",
"text": "Automatic understanding of human affect using visual signals is of great importance in everyday human–machine interactions. Appraising human emotional states, behaviors and reactions displayed in real-world settings, can be accomplished using latent continuous dimensions (e.g., the circumplex model of affect). Valence (i.e., how positive or negative is an emotion) and arousal (i.e., power of the activation of the emotion) constitute popular and effective representations for affect. Nevertheless, the majority of collected datasets this far, although containing naturalistic emotional states, have been captured in highly controlled recording conditions. In this paper, we introduce the Aff-Wild benchmark for training and evaluating affect recognition algorithms. We also report on the results of the First Affect-in-the-wild Challenge (Aff-Wild Challenge) that was recently organized in conjunction with CVPR 2017 on the Aff-Wild database, and was the first ever challenge on the estimation of valence and arousal in-the-wild. Furthermore, we design and extensively train an end-to-end deep neural architecture which performs prediction of continuous emotion dimensions based on visual cues. The proposed deep learning architecture, AffWildNet, includes convolutional and recurrent neural network layers, exploiting the invariant properties of convolutional features, while also modeling temporal dynamics that arise in human behavior via the recurrent layers. The AffWildNet produced state-of-the-art results on the Aff-Wild Challenge. We then exploit the AffWild database for learning features, which can be used as priors for achieving best performances both for dimensional, as well as categorical emotion recognition, using the RECOLA, AFEW-VA and EmotiW 2017 datasets, compared to all other methods designed for the same goal. The database and emotion recognition models are available at http://ibug.doc.ic.ac.uk/resources/first-affect-wild-challenge .",
"title": ""
},
{
"docid": "a9b4d5fed4cc45a7c9ce7b429d77855e",
"text": "In this paper, a cellular-connected unmanned aerial vehicle (UAV) mobile edge computing system is studied where several UAVs are associated to a terrestrial base station (TBS) for computation offloading. To compute the large amount of data bits, a part of computation task is migrated to TBS and the other part is locally handled at UAVs. Our goal is to minimize the total energy consumption of all UAVs by jointly adjusting the bit allocation, power allocation, resource partitioning as well as UAV trajectory under TBS’s energy budget. For deeply comprehending the impact of multi-UAV access strategy on the system performance, four access schemes in the uplink transmission is considered, i.e., time division multiple access, orthogonal frequency division multiple access, one-by-one access and non-orthogonal multiple access. The involved problems under different access schemes are all formulated in non-convex forms, which are difficult to be tackled optimally. To solve this class of problem, the successive convex approximation technique is employed to obtain the suboptimal solutions. The numerical results show that the proposed scheme save significant energy consumption compared with the benchmark schemes.",
"title": ""
},
{
"docid": "c4c95d67756bc85e69e67b4caee25269",
"text": "In this paper, we focus on the synthetic understanding of documents, specifically reading comprehension (RC). A current problem with RC is the need for a method of analyzing the RC system performance to realize further development. We propose a methodology for examining RC systems from multiple viewpoints. Our methodology consists of three steps: define a set of basic skills used for RC, manually annotate questions of an existing RC task, and show the performances for each skill of existing systems that have been proposed for the task. We demonstrated the proposed methodology by annotating MCTest, a freely available dataset for testing RC. The results of the annotation showed that answering RC questions requires combinations of multiple skills. In addition, our defined RC skills were found to be useful and promising for decomposing and analyzing the RC process. Finally, we discuss ways to improve our approach based on the results of two extra annotations.",
"title": ""
},
{
"docid": "f71c8f16ffeaacf8e7d81b357957ad89",
"text": "Multi-antenna technologies such as beamforming and Multiple-Input, Multiple-Output (MIMO) are anticipated to play a key role in “5G” systems, which are expected to be deployed in the year 2020 and beyond. With a class of 5G systems expected to be deployed in both cm-wave (3-30 GHz) and mm-wave (30-300 GHz) bands, the unique characteristics and challenges of those bands have prompted a revisiting of the design and performance tradeoffs associated with existing multi-antenna techniques in order to determine the preferred framework for deploying MIMO technology in 5G systems. In this paper, we discuss key implementation issues surrounding the deployment of transmit MIMO processing for 5G systems. We describe MIMO architectures where the transmit MIMO processing is implemented at baseband, RF, and a combination of RF and baseband (a hybrid approach). We focus on the performance and implementation issues surrounding several candidate techniques for multi-user-MIMO (MU-MIMO) transmission in the mm-wave bands.",
"title": ""
},
{
"docid": "534554ae5913f192d32efd93256488d6",
"text": "Several unclassified web services are available in the internet which is difficult for the user to choose the correct web services. This raises service discovery cost, transforming data time between services and service searching time. Adequate methods, tools, technologies for clustering the web services have been developed. The clustering of web services is done manually. This survey is organized based on clustering of web service discovery methods, tools and technologies constructed on following list of parameters. The parameters are clustering model, graphs and environment, different technologies, advantages and disadvantages, theory and proof of concepts. Based on the user requirements results are different and better than one another. If the web service clustering is done automatically that can create an impact in the service discovery and fulfills the user requirements. This article gives the overview of the significant issues of the different methods and discusses the lack of technologies and automatic tools of the web service discovery.",
"title": ""
},
{
"docid": "528a22ba860fd4ad4da3773ff2b01dcd",
"text": "During the last decade it has become more widely accepted that pet ownership and animal assistance in therapy and education may have a multitude of positive effects on humans. Here, we review the evidence from 69 original studies on human-animal interactions (HAI) which met our inclusion criteria with regard to sample size, peer-review, and standard scientific research design. Among the well-documented effects of HAI in humans of different ages, with and without special medical, or mental health conditions are benefits for: social attention, social behavior, interpersonal interactions, and mood; stress-related parameters such as cortisol, heart rate, and blood pressure; self-reported fear and anxiety; and mental and physical health, especially cardiovascular diseases. Limited evidence exists for positive effects of HAI on: reduction of stress-related parameters such as epinephrine and norepinephrine; improvement of immune system functioning and pain management; increased trustworthiness of and trust toward other persons; reduced aggression; enhanced empathy and improved learning. We propose that the activation of the oxytocin system plays a key role in the majority of these reported psychological and psychophysiological effects of HAI. Oxytocin and HAI effects largely overlap, as documented by research in both, humans and animals, and first studies found that HAI affects the oxytocin system. As a common underlying mechanism, the activation of the oxytocin system does not only provide an explanation, but also allows an integrative view of the different effects of HAI.",
"title": ""
},
{
"docid": "af5a2ad28ab61015c0344bf2e29fe6a7",
"text": "Recent years have shown that more than ever governments and intelligence agencies try to control and bypass the cryptographic means used for the protection of data. Backdooring encryption algorithms is considered as the best way to enforce cryptographic control. Until now, only implementation backdoors (at the protocol/implementation/management level) are generally considered. In this paper we propose to address the most critical issue of backdoors: mathematical backdoors or by-design backdoors, which are put directly at the mathematical design of the encryption algorithm. While the algorithm may be totally public, proving that there is a backdoor, identifying it and exploiting it, may be an intractable problem. We intend to explain that it is probably possible to design and put such backdoors. Considering a particular family (among all the possible ones), we present BEA-1, a block cipher algorithm which is similar to the AES and which contains a mathematical backdoor enabling an operational and effective cryptanalysis. The BEA-1 algorithm (80-bit block size, 120-bit key, 11 rounds) is designed to resist to linear and differential cryptanalyses. A challenge will be proposed to the cryptography community soon. Its aim is to assess whether our backdoor is easily detectable and exploitable or not.",
"title": ""
}
] | scidocsrr |
67542637f76d2fb3639d2f8431acad8f | Recent Developments of Magnetoresistive Sensors for Industrial Applications | [
{
"docid": "315af705427ee4363fe4614dc72eb7a7",
"text": "The 2007 Nobel Prize in Physics can be understood as a global recognition to the rapid development of the Giant Magnetoresistance (GMR), from both the physics and engineering points of view. Behind the utilization of GMR structures as read heads for massive storage magnetic hard disks, important applications as solid state magnetic sensors have emerged. Low cost, compatibility with standard CMOS technologies and high sensitivity are common advantages of these sensors. This way, they have been successfully applied in a lot different environments. In this work, we are trying to collect the Spanish contributions to the progress of the research related to the GMR based sensors covering, among other subjects, the applications, the sensor design, the modelling and the electronic interfaces, focusing on electrical current sensing applications.",
"title": ""
}
] | [
{
"docid": "9a515a1266a868ca5680fc5676ca4b37",
"text": "To assure that an autonomous car is driving safely on public roads, its object detection module should not only work correctly, but show its prediction confidence as well. Previous object detectors driven by deep learning do not explicitly model uncertainties in the neural network. We tackle with this problem by presenting practical methods to capture uncertainties in a 3D vehicle detector for Lidar point clouds. The proposed probabilistic detector represents reliable epistemic uncertainty and aleatoric uncertainty in classification and localization tasks. Experimental results show that the epistemic uncertainty is related to the detection accuracy, whereas the aleatoric uncertainty is influenced by vehicle distance and occlusion. The results also show that we can improve the detection performance by 1%–5% by modeling the aleatoric uncertainty.",
"title": ""
},
{
"docid": "78ffcec1e3d5164d7360aa8a93848fc4",
"text": "During a long period of time we are combating overfitting in the CNN training process with model regularization, including weight decay, model averaging, data augmentation, etc. In this paper, we present DisturbLabel, an extremely simple algorithm which randomly replaces a part of labels as incorrect values in each iteration. Although it seems weird to intentionally generate incorrect training labels, we show that DisturbLabel prevents the network training from over-fitting by implicitly averaging over exponentially many networks which are trained with different label sets. To the best of our knowledge, DisturbLabel serves as the first work which adds noises on the loss layer. Meanwhile, DisturbLabel cooperates well with Dropout to provide complementary regularization functions. Experiments demonstrate competitive recognition results on several popular image recognition datasets.",
"title": ""
},
{
"docid": "65c4d3f99a066c235bb5d946934bee05",
"text": "This paper describes a new Augmented Reality (AR) system called HoloLens developed by Microsoft, and the interaction model for supporting collaboration in this space with other users. Whereas traditional AR collaboration is between two or more head-mounted displays (HMD) users, we describe collaboration between a single HMD user and others who join the space by hitching on the view of the HMD user. The remote companions participate remotely through Skype-enabled devices such as tablets or PC's. The interaction is novel in the use of a 3D space with digital objects where the interaction by remote parties can be achieved asynchronously and reflected back to the primary user. We describe additional collaboration scenarios possible with this arrangement.",
"title": ""
},
{
"docid": "083b21b5d9feccf0f03350fab3af7fc1",
"text": "Abstraction without regret refers to the vision of using high-level programming languages for systems development without experiencing a negative impact on performance. A database system designed according to this vision offers both increased productivity and high performance instead of sacrificing the former for the latter as is the case with existing, monolithic implementations that are hard to maintain and extend.\n In this article, we realize this vision in the domain of analytical query processing. We present LegoBase, a query engine written in the high-level programming language Scala. The key technique to regain efficiency is to apply generative programming: LegoBase performs source-to-source compilation and optimizes database systems code by converting the high-level Scala code to specialized, low-level C code. We show how generative programming allows to easily implement a wide spectrum of optimizations, such as introducing data partitioning or switching from a row to a column data layout, which are difficult to achieve with existing low-level query compilers that handle only queries. We demonstrate that sufficiently powerful abstractions are essential for dealing with the complexity of the optimization effort, shielding developers from compiler internals and decoupling individual optimizations from each other.\n We evaluate our approach with the TPC-H benchmark and show that (a) with all optimizations enabled, our architecture significantly outperforms a commercial in-memory database as well as an existing query compiler. (b) Programmers need to provide just a few hundred lines of high-level code for implementing the optimizations, instead of complicated low-level code that is required by existing query compilation approaches. (c) These optimizations may potentially come at the cost of using more system memory for improved performance. (d) The compilation overhead is low compared to the overall execution time, thus making our approach usable in practice for compiling query engines.",
"title": ""
},
{
"docid": "e8197d339037ada47ed6db5f8f427211",
"text": "Concentric tube robots are catheter-sized continuum robots that are well suited for minimally invasive surgery inside confined body cavities. These robots are constructed from sets of precurved superelastic tubes and are capable of assuming complex 3-D curves. The family of 3-D curves that the robot can assume depends on the number, curvatures, lengths, and stiffnesses of the tubes in its tube set. The robot design problem involves solving for a tube set that will produce the family of curves necessary to perform a surgical procedure. At a minimum, these curves must enable the robot to smoothly extend into the body and to manipulate tools over the desired surgical workspace while respecting anatomical constraints. This paper introduces an optimization framework that utilizes procedure- or patient-specific image-based anatomical models along with surgical workspace requirements to generate robot tube set designs. The algorithm searches for designs that minimize robot length and curvature and for which all paths required for the procedure consist of stable robot configurations. Two mechanics-based kinematic models are used. Initial designs are sought using a model assuming torsional rigidity. These designs are then refined using a torsionally compliant model. The approach is illustrated with clinically relevant examples from neurosurgery and intracardiac surgery.",
"title": ""
},
{
"docid": "463d0bca287f0bd00585b4c96d12d014",
"text": "In this paper, we present a novel approach to extract songlevel descriptors built from frame-level timbral features such as Mel-frequency cepstral coefficient (MFCC). These descriptors are called identity vectors or i-vectors and are the results of a factor analysis procedure applied on framelevel features. The i-vectors provide a low-dimensional and fixed-length representation for each song and can be used in a supervised and unsupervised manner. First, we use the i-vectors for an unsupervised music similarity estimation, where we calculate the distance between i-vectors in order to predict the genre of songs. Second, for a supervised artist classification task we report the performance measures using multiple classifiers trained on the i-vectors. Standard datasets for each task are used to evaluate our method and the results are compared with the state of the art. By only using timbral information, we already achieved the state of the art performance in music similarity (which uses extra information such as rhythm). In artist classification using timbre descriptors, our method outperformed the state of the art.",
"title": ""
},
{
"docid": "8c9b360309da686a832cbf6eaee42db8",
"text": "System-level design issues become critical as implementation technology evolves toward increasingly complex integrated circuits and the time-to-market pressure continues relentlessly. To cope with these issues, new methodologies that emphasize re-use at all levels of abstraction are a “must”, and this is a major focus of our work in the Gigascale Silicon Research Center. We present some important concepts for system design that are likely to provide at least some of the gains in productivity postulated above. In particular, we focus on a method that separates parts of the design process and makes them nearly independent so that complexity could be mastered. In this domain, architecture-function co-design and communication-based design are introduced and motivated. Platforms are essential elements of this design paradigm. We define system platforms and we argue about their use and relevance. Then we present an application of the design methodology to the design of wireless systems. Finally, we present a new approach to platform-based design called modern embedded systems, compilers, architectures and languages, based on highly concurrent and software-programmable architectures and associated design tools.",
"title": ""
},
{
"docid": "b7c7978257a04ffef7a6dfc57f88126b",
"text": "Leaf spots can be indicative of crop diseases, where leaf batches (spots) are usually examined and subjected to expert opinion. In our proposed system, we are going to develop an integrated image processing system to help automated inspection of these leaf batches and helps identify the disease type. Conventional Expert systems mainly those which used to diagnose the disease in agriculture domain depends only on textual input. Usually abnormalities for a given crop are manifested as symptoms on various plant parts. To enable an expert system to produce correct results, end user must be capable of mapping what they see in a form of abnormal symptoms to answer to questions asked by that expert system. This mapping may be inconsistent if a full understanding of the abnormalities does not exist. The proposed system consists of four stages; the first is the enhancement, which includes HIS transformation, histogram analysis, and intensity adjustment. The second stage is segmentation, which includes adaptation of fuzzy c-means algorithm. Feature extraction is the third stage, which deals with three features, namely color size and shape of spot. The fourth stage is classification, which comprises back propagation based neural networks.",
"title": ""
},
{
"docid": "4357e361fd35bcbc5d6a7c195a87bad1",
"text": "In an age of increasing technology, the possibility that typing on a keyboard will replace handwriting raises questions about the future usefulness of handwriting skills. Here we present evidence that brain activation during letter perception is influenced in different, important ways by previous handwriting of letters versus previous typing or tracing of those same letters. Preliterate, five-year old children printed, typed, or traced letters and shapes, then were shown images of these stimuli while undergoing functional MRI scanning. A previously documented \"reading circuit\" was recruited during letter perception only after handwriting-not after typing or tracing experience. These findings demonstrate that handwriting is important for the early recruitment in letter processing of brain regions known to underlie successful reading. Handwriting therefore may facilitate reading acquisition in young children.",
"title": ""
},
{
"docid": "d302bfb7c2b95def93525050016ac07c",
"text": "Face recognition remains a challenge today as recognition performance is strongly affected by variability such as illumination, expressions and poses. In this work we apply Convolutional Neural Networks (CNNs) on the challenging task of both 2D and 3D face recognition. We constructed two CNN models, namely CNN-1 (two convolutional layers) and CNN-2 (one convolutional layer) for testing on 2D and 3D dataset. A comprehensive parametric study of two CNN models on face recognition is represented in which different combinations of activation function, learning rate and filter size are investigated. We find that CNN-2 has a better accuracy performance on both 2D and 3D face recognition. Our experimental results show that an accuracy of 85.15% was accomplished using CNN-2 on depth images with FRGCv2.0 dataset (4950 images with 557 objectives). An accuracy of 95% was achieved using CNN-2 on 2D raw image with the AT&T dataset (400 images with 40 objectives). The results indicate that the proposed CNN model is capable to handle complex information from facial images in different dimensions. These results provide valuable insights into further application of CNN on 3D face recognition.",
"title": ""
},
{
"docid": "f028a403190899f96fcd6d6f9efbd2f1",
"text": "It is aimed to design a X-band monopulse microstrip antenna array that can be used almost in all modern tracking radars and having superior properties in angle detection and angular accuracy than the classical ones. In order to create a monopulse antenna array, a rectangular microstrip antenna is designed and 16 of it gathered together using the nonlinear central feeding to suppress the side lobe level (SLL) of the antenna. The monopulse antenna is created by the combining 4 of these 4×4 array antennas with a microstrip comparator designed using four branch line coupler. Good agreement is noted between the simulation and measurement results.",
"title": ""
},
{
"docid": "f1f9aee8431f17e6e75492b38daad88d",
"text": "We examine the impact of strategy and dexterity on video games in which a player must use strategy to decide between multiple moves and must use dexterity to correctly execute those moves. We run simulation experiments on variants of two popular, interactive puzzle games: Tetris, which exhibits dexterity in the form of speed-accuracy time pressure, and Puzzle Bobble, which requires precise aiming. By modeling dexterity and strategy as separate components, we quantify the effect of each type of difficulty using normalized mean score and artificial intelligence agents that make human-like errors. We show how these techniques can model and visualize dexterity and strategy requirements as well as the effect of scoring systems on expressive range.",
"title": ""
},
{
"docid": "aae3e8f023b90bc2050d7c38a3857cc5",
"text": "Severe, chronic growth retardation of cattle early in life reduces growth potential, resulting in smaller animals at any given age. Capacity for long-term compensatory growth diminishes as the age of onset of nutritional restriction resulting in prolonged growth retardation declines. Hence, more extreme intrauterine growth retardation can result in slower growth throughout postnatal life. However, within the limits of beef production systems, neither severely restricted growth in utero nor from birth to weaning influences efficiency of nutrient utilisation later in life. Retail yield from cattle severely restricted in growth during pregnancy or from birth to weaning is reduced compared with cattle well grown early in life, when compared at the same age later in life. However, retail yield and carcass composition of low- and high-birth-weight calves are similar at the same carcass weight. At equivalent carcass weights, cattle grown slowly from birth to weaning have carcasses of similar or leaner composition than those grown rapidly. However, if high energy, concentrate feed is provided following severe growth restriction from birth to weaning, then at equivalent weights post-weaning the slowly-grown, small weaners may be fatter than their well-grown counterparts. Restricted prenatal and pre-weaning nutrition and growth do not adversely affect measures of beef quality. Similarly, bovine myofibre characteristics are little affected in the long term by growth in utero or from birth to weaning. Interactions were not evident between prenatal and pre-weaning growth for subsequent growth, efficiency, carcass, yield and beef-quality characteristics, within our pasture-based production systems. Furthermore, interactions between genotype and nutrition early in life, studied using offspring of Piedmontese and Wagyu sired cattle, were not evident for any growth, efficiency, carcass, yield and beef-quality parameters. We propose that within pasture-based production systems for beef cattle, the plasticity of the carcass tissues, particularly of muscle, allows animals that are growth-retarded early in life to attain normal composition at equivalent weights in the long term, albeit at older ages. However, the quality of nutrition during recovery from early life growth retardation may be important in determining the subsequent composition of young, light-weight cattle relative to their heavier counterparts. Finally, it should be emphasised that long-term consequences of more specific and/or acute environmental influences during specific stages of embryonic, foetal and neonatal calf development remain to be determined. This need for further research extends to consequences of nutrition and growth early in life for reproductive capacity.",
"title": ""
},
{
"docid": "8277f94cff0f5cd28ffbf5e0d6898c2a",
"text": "There is evidence that men experience more sexual arousal than women but also that women in mid-luteal phase experience more sexual arousal than women outside this phase. Recently, a few functional brain imaging studies have tackled the issue of gender differences as pertaining to reactions to erotica. The question of whether or not gender differences in reactions to erotica are maintained with women in different phases has not yet been answered from a functional brain imaging perspective. In order to examine this issue, functional MRI was performed in 22 male and 22 female volunteers. Subjects viewed erotic film excerpts alternating with emotionally neutral excerpts in a standard block-design paradigm. Arousal to erotic stimuli was evaluated using standard rating scales after scanning. Two-sample t-test with uncorrected P<0.001 values for a priori determined region of interests involved in processing of erotic stimuli and with corrected P<0.05 revealed gender differences: Comparing women in mid-luteal phase and during their menses, superior activation was revealed for women in mid-luteal phase in the anterior cingulate, left insula, and orbitofrontal cortex. A superior activation for men was found in the left thalamus, the bilateral amygdala, the anterior cingulate, the bilateral orbitofrontal, bilateral parahippocampal, and insular regions, which were maintained at a corrected P in the amygdala, the insula, and thalamus. There were no areas of significant superior activation for women neither in mid-luteal phase nor during their menses. Our results indicate that there are differences between women in the two cycle times in cerebral activity during viewing of erotic stimuli. Furthermore, gender differences with women in mid-luteal phases are similar to those in females outside the mid-luteal phase.",
"title": ""
},
{
"docid": "816bd541fd0f5cc509ad69cfed5d3e6e",
"text": "It has been shown that people and pets can harbour identical strains of meticillin-resistant (MR) staphylococci when they share an environment. Veterinary dermatology practitioners are a professional group with a high incidence of exposure to animals infected by Staphylococcus spp. The objective of this study was to assess the prevalence of carriage of MR Staphylococcus aureus (MRSA), MR S. pseudintermedius (MRSP) and MR S. schleiferi (MRSS) by veterinary dermatology practice staff and their personal pets. A swab technique and selective media were used to screen 171 veterinary dermatology practice staff and their respective pets (258 dogs and 160 cats). Samples were shipped by over-night carrier. Human subjects completed a 22-question survey of demographic and epidemiologic data relevant to staphylococcal transmission. The 171 human-source samples yielded six MRSA (3.5%), nine MRSP (5.3%) and four MRSS (2.3%) isolates, while 418 animal-source samples yielded eight MRSA (1.9%) 21 MRSP (5%), and two MRSS (0.5%) isolates. Concordant strains (genetically identical by pulsed-field gel electrophoresis) were isolated from human subjects and their respective pets in four of 171 (2.9%) households: MRSA from one person/two pets and MRSP from three people/three pets. In seven additional households (4.1%), concordant strains were isolated from only the pets: MRSA in two households and MRSP in five households. There were no demographic or epidemiologic factors statistically associated with either human or animal carriage of MR staphylococci, or with concordant carriage by person-pet or pet-pet pairs. Lack of statistical associations may reflect an underpowered study.",
"title": ""
},
{
"docid": "e05a7919e3e0333adef243694e7d50cb",
"text": "WHEN the magician pulls the rabbit from the hat, the spectator can respond either with mystification or with curiosity. He can enjoy the surprise and the wonder of the unexplained (and perhaps inexplicable), or he can search for an explanation. Suppose curiosity is his main response—that he adopts a scientist's attitude toward the mystery. What questions should a scientific theory of magic answer? First, it should predict the performance of a magician handling specified tasks—producing a rabbit from a hat, say. It should explain how the production takes place, what processes are used, and what mechanisms perform those processes. It should predict the incidental phenomena that accompany the magic—the magician's patter and his pretty assistant—and the relation of these to the mystification process. It should show how changes in the attendant conditions—both changes \"inside\" the members of the audience and changes in the feat of magic—alter the magician's behavior. It should explain how specific and general magician's skills are learned, and what the magician \"has\" when he has learned them.",
"title": ""
},
{
"docid": "e5481c18acb0ccbf8cefb55da1b2a60a",
"text": "Temporal database is a database which captures and maintains past, present and future data. Conventional databases are not suitable for handling such time varying data. In this context temporal database has gained a significant importance in the field of databases and data mining. The major objective of this research is to perform a detailed survey on temporal databases and the various temporal data mining techniques and explore the various research issues in temporal data mining. We also throw light on the temporal association rules and temporal clustering works carried in literature.",
"title": ""
},
{
"docid": "3f8247e958dcd262ee28d772ee050c30",
"text": "UNLABELLED\nAccurate comprehension and analysis of complex sociotechnical systems is a daunting task. Empirically examining, or simply envisioning the structure and behaviour of such systems challenges traditional analytic and experimental approaches as well as our everyday cognitive capabilities. Computer-based models and simulations afford potentially useful means of accomplishing sociotechnical system design and analysis objectives. From a design perspective, they can provide a basis for a common mental model among stakeholders, thereby facilitating accurate comprehension of factors impacting system performance and potential effects of system modifications. From a research perspective, models and simulations afford the means to study aspects of sociotechnical system design and operation, including the potential impact of modifications to structural and dynamic system properties, in ways not feasible with traditional experimental approaches. This paper describes issues involved in the design and use of such models and simulations and describes a proposed path forward to their development and implementation.\n\n\nPRACTITIONER SUMMARY\nThe size and complexity of real-world sociotechnical systems can present significant barriers to their design, comprehension and empirical analysis. This article describes the potential advantages of computer-based models and simulations for understanding factors that impact sociotechnical system design and operation, particularly with respect to process and occupational safety.",
"title": ""
},
{
"docid": "3beb3f808af2a2c04b74416fe1acf630",
"text": "A national survey, based on a probability sample of patients admitted to short-term hospitals in the United States during 1973 to 1974 with a discharge diagnosis of an intracranial neoplasm, was conducted in 157 hospitals. The annual incidence was estimated at 17,000 for primary intracranial neoplasms and 17,400 for secondary intracranial neoplasms--8.2 and 8.3 per 100,000 US population, respectively. Rates of primary intracranial neoplasms increased steadily with advancing age. The age-adjusted rates were higher among men than among women (8.5 versus 7.9 per 100,000). However, although men were more susceptible to gliomas and neuronomas, incidence rates for meningiomas and pituitary adenomas were higher among women.",
"title": ""
}
] | scidocsrr |
42660f76838d89f15f6179a53afee2e1 | State of the art and challenges facing consensus protocols on blockchain | [
{
"docid": "9f6e103a331ab52b303a12779d0d5ef6",
"text": "Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin’s blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem. In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network.",
"title": ""
},
{
"docid": "81d50714ba7a53d908f6b3e3030499c2",
"text": "Bit coin is widely regarded as the first broadly successful e-cash system. An oft-cited concern, though, is that mining Bit coins wastes computational resources. Indeed, Bit coin's underlying mining mechanism, which we call a scratch-off puzzle (SOP), involves continuously attempting to solve computational puzzles that have no intrinsic utility. We propose a modification to Bit coin that repurposes its mining resources to achieve a more broadly useful goal: distributed storage of archival data. We call our new scheme Perm coin. Unlike Bit coin and its proposed alternatives, Perm coin requires clients to invest not just computational resources, but also storage. Our scheme involves an alternative scratch-off puzzle for Bit coin based on Proofs-of-Retrievability (PORs). Successfully minting money with this SOP requires local, random access to a copy of a file. Given the competition among mining clients in Bit coin, this modified SOP gives rise to highly decentralized file storage, thus reducing the overall waste of Bit coin. Using a model of rational economic agents we show that our modified SOP preserves the essential properties of the original Bit coin puzzle. We also provide parameterizations and calculations based on realistic hardware constraints to demonstrate the practicality of Perm coin as a whole.",
"title": ""
},
{
"docid": "597bfef473a39b5bf2890a2a697e5c26",
"text": "Ripple is a payment system and a digital currency which evolved completely independently of Bitcoin. Although Ripple holds the second highest market cap after Bitcoin, there are surprisingly no studies which analyze the provisions of Ripple. In this paper, we study the current deployment of the Ripple payment system. For that purpose, we overview the Ripple protocol and outline its security and privacy provisions in relation to the Bitcoin system. We also discuss the consensus protocol of Ripple. Contrary to the statement of the Ripple designers, we show that the current choice of parameters does not prevent the occurrence of forks in the system. To remedy this problem, we give a necessary and sufficient condition to prevent any fork in the system. Finally, we analyze the current usage patterns and trade dynamics in Ripple by extracting information from the Ripple global ledger. As far as we are aware, this is the first contribution which sheds light on the current deployment of the Ripple system.",
"title": ""
},
{
"docid": "942fefe25be8a3409f70f290b202dd25",
"text": "This paper introduces a new model for consensus called federated Byzantine agreement (FBA). FBA achieves robustness through quorum slices—individual trust decisions made by each node that together determine system-level quorums. Slices bind the system together much the way individual networks’ peering and transit decisions now unify the Internet. We also present the Stellar Consensus Protocol (SCP), a construction for FBA. Like all Byzantine agreement protocols, SCP makes no assumptions about the rational behavior of attackers. Unlike prior Byzantine agreement models, which presuppose a unanimously accepted membership list, SCP enjoys open membership that promotes organic network growth. Compared to decentralized proof of-work and proof-of-stake schemes, SCP has modest computing and financial requirements, lowering the barrier to entry and potentially opening up financial systems to new participants.",
"title": ""
}
] | [
{
"docid": "a8cb644c1a7670670299d33c1e1e53d3",
"text": "In Java, C or C++, attempts to dereference the null value result in an exception or a segmentation fault. Hence, it is important to identify those program points where this undesired behaviour might occur or prove the other program points (and possibly the entire program) safe. To that purpose, null-pointer analysis of computer programs checks or infers non-null annotations for variables and object fields. With few notable exceptions, null-pointer analyses currently use run-time checks or are incorrect or only verify manually provided annotations. In this paper, we use abstract interpretation to build and prove correct a first, flow and context-sensitive static null-pointer analysis for Java bytecode (and hence Java) which infers non-null annotations. It is based on Boolean formulas, implemented with binary decision diagrams. For better precision, it identifies instance or static fields that remain always non-null after being initialised. Our experiments show this analysis faster and more precise than the correct null-pointer analysis by Hubert, Jensen and Pichardie. Moreover, our analysis deals with exceptions, which is not the case of most others; its formulation is theoretically clean and its implementation strong and scalable. We subsequently improve that analysis by using local reasoning about fields that are not always non-null, but happen to hold a non-null value when they are accessed. This is a frequent situation, since programmers typically check a field for non-nullness before its access. We conclude with an example of use of our analyses to infer null-pointer annotations which are more precise than those that other inference tools can achieve.",
"title": ""
},
{
"docid": "b6e6784d18c596565ca1e4d881398a0d",
"text": "Uncovering lies (or deception) is of critical importance to many including law enforcement and security personnel. Though these people may try to use many different tactics to discover deception, previous research tells us that this cannot be accomplished successfully without aid. This manuscript reports on the promising results of a research study where data and text mining methods along with a sample of real-world data from a high-stakes situation is used to detect deception. At the end, the information fusion based classification models produced better than 74% classification accuracy on the holdout sample using a 10-fold cross validation methodology. Nonetheless, artificial neural networks and decision trees produced accuracy rates of 73.46% and 71.60% respectively. However, due to the high stakes associated with these types of decisions, the extra effort of combining the models to achieve higher accuracy",
"title": ""
},
{
"docid": "893f3d5ab013a9c156139ef2626b7053",
"text": "Intelligent systems capable of automatically understanding natural language text are important for many artificial intelligence applications including mobile phone voice assistants, computer vision, and robotics. Understanding language often constitutes fitting new information into a previously acquired view of the world. However, many machine reading systems rely on the text alone to infer its meaning. In this paper, we pursue a different approach; machine reading methods that make use of background knowledge to facilitate language understanding. To this end, we have developed two methods: The first method addresses prepositional phrase attachment ambiguity. It uses background knowledge within a semi-supervised machine learning algorithm that learns from both labeled and unlabeled data. This approach yields state-of-the-art results on two datasets against strong baselines; The second method extracts relationships from compound nouns. Our knowledge-aware method for compound noun analysis accurately extracts relationships and significantly outperforms a baseline that does not make use of background knowledge.",
"title": ""
},
{
"docid": "dc26775493cad4149e639bcae6fa6a8c",
"text": "Fast expansion of natural language functionality of intelligent virtual agents is critical for achieving engaging and informative interactions. However, developing accurate models for new natural language domains is a time and data intensive process. We propose efficient deep neural network architectures that maximally re-use available resources through transfer learning. Our methods are applied for expanding the understanding capabilities of a popular commercial agent and are evaluated on hundreds of new domains, designed by internal or external developers. We demonstrate that our proposed methods significantly increase accuracy in low resource settings and enable rapid development of accurate models with less data.",
"title": ""
},
{
"docid": "6e60d6b878c35051ab939a03bdd09574",
"text": "We propose a new CNN-CRF end-to-end learning framework, which is based on joint stochastic optimization with respect to both Convolutional Neural Network (CNN) and Conditional Random Field (CRF) parameters. While stochastic gradient descent is a standard technique for CNN training, it was not used for joint models so far. We show that our learning method is (i) general, i.e. it applies to arbitrary CNN and CRF architectures and potential functions; (ii) scalable, i.e. it has a low memory footprint and straightforwardly parallelizes on GPUs; (iii) easy in implementation. Additionally, the unified CNN-CRF optimization approach simplifies a potential hardware implementation. We empirically evaluate our method on the task of semantic labeling of body parts in depth images and show that it compares favorably to competing techniques.",
"title": ""
},
{
"docid": "4fcfc5a8273ddbeff85a99189110482e",
"text": "Global information such as event-event association, and latent local information such as fine-grained entity types, are crucial to event classification. However, existing methods typically focus on sophisticated local features such as part-ofspeech tags, either fully or partially ignoring the aforementioned information. By contrast, this paper focuses on fully employing them for event classification. We notice that it is difficult to encode some global information such as eventevent association for previous methods. To resolve this problem, we propose a feasible approach which encodes global information in the form of logic using Probabilistic Soft Logic model. Experimental results show that, our proposed approach advances state-of-the-art methods, and achieves the best F1 score to date on the ACE data set.",
"title": ""
},
{
"docid": "0aa85d4ac0f2034351d5ba690929db19",
"text": "The quantity of small scale solar photovoltaic (PV) arrays in the United States has grown rapidly in recent years. As a result, there is substantial interest in high quality information about the quantity, power capacity, and energy generated by such arrays, including at a high spatial resolution (e.g., cities, counties, or other small regions). Unfortunately, existing methods for obtaining this information, such as surveys and utility interconnection filings, are limited in their completeness and spatial resolution. This work presents a computer algorithm that automatically detects PV panels using very high resolution color satellite imagery. The approach potentially offers a fast, scalable method for obtaining accurate information on PV array location and size, and at much higher spatial resolutions than are currently available. The method is validated using a very large (135 km) collection of publicly available (Bradbury et al., 2016) aerial imagery, with over 2700 human annotated PV array locations. The results demonstrate the algorithm is highly effective on a per-pixel basis. It is likewise effective at object-level PV array detection, but with significant potential for improvement in estimating the precise shape/size of the PV arrays. These results are the first of their kind for the detection of solar PV in aerial imagery, demonstrating the feasibility of the approach and establishing a baseline performance for future investigations. 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "187595fb12a5ca3bd665ffbbc9f47465",
"text": "In order to acquire a lexicon, young children must segment speech into words, even though most words are unfamiliar to them. This is a non-trivial task because speech lacks any acoustic analog of the blank spaces between printed words. Two sources of information that might be useful for this task are distributional regularity and phonotactic constraints. Informally, distributional regularity refers to the intuition that sound sequences that occur frequently and in a variety of contexts are better candidates for the lexicon than those that occur rarely or in few contexts. We express that intuition formally by a class of functions called DR functions. We then put forth three hypotheses: First, that children segment using DR functions. Second, that they exploit phonotactic constraints on the possible pronunciations of words in their language. Specifically, they exploit both the requirement that every word must have a vowel and the constraints that languages impose on word-initial and word-final consonant clusters. Third, that children learn which word-boundary clusters are permitted in their language by assuming that all permissible word-boundary clusters will eventually occur at utterance boundaries. Using computational simulation, we investigate the effectiveness of these strategies for segmenting broad phonetic transcripts of child-directed English. The results show that DR functions and phonotactic constraints can be used to significantly improve segmentation. Further, the contributions of DR functions and phonotactic constraints are largely independent, so using both yields better segmentation than using either one alone. Finally, learning the permissible word-boundary clusters from utterance boundaries does not degrade segmentation performance.",
"title": ""
},
{
"docid": "1121443f9b8ebf763bbb528cff43ace0",
"text": "This paper is to introduce the system design and implementation of a low-cost 360-degree true-color light emitting diode (LED) display system. This energy-saving LED display system employs both rotating scan scheme and a precision control in combination of time and space to provide three same screen displays of resolution 320×240 with only three columns of tricolor (Red, Green, Blue) LEDs. The frequency of scan and the speed of rotation are high enough to be indistinguishable by human eyes, hence the display appears to be constantly illuminated and the brilliant color image can be seen from any viewing angle. In order to realize colorful images in real time, both LED gradation driving method and display data processing are implemented in CPLD (Complex Programmable Logic Array). The experimental results show that the LED display system has an efficient and satisfying display quality, compared to the traditional LED dot matrix display and commercial surround color LED display.",
"title": ""
},
{
"docid": "e694235c1880560bd7fb820f2428363d",
"text": "This paper is focusing on the Facial Expression Recognition (FER) problem from a single face image. Inspired by the advances Convolutional Neural Networks (CNNs) have achieved in image recognition and classification, we propose a CNN-based approach to address this problem. Our model consists of several different structured subnets. Each subnet is a compact CNN model trained separately. The whole network is structured by assembling these subnets together. We trained and evaluated our model on the FER2013 dataset[7]. The best single subnet achieved 62.44% accuracy and the whole model scored 65.03% accuracy, which is ranked 9th and 5th respectively among all other participants.",
"title": ""
},
{
"docid": "29a13944cf4f43ef484512d978396c1e",
"text": "The literature examining the relationship between cardiorespiratory fitness and the brain in older adults has increased rapidly, with 30 of 34 studies published since 2008. Here we review cross-sectional and exercise intervention studies in older adults examining the relationship between cardiorespiratory fitness and brain structure and function, typically assessed using Magnetic Resonance Imaging (MRI). Studies of patients with Alzheimer's disease are discussed when available. The structural MRI studies revealed a consistent positive relationship between cardiorespiratory fitness and brain volume in cortical regions including anterior cingulate, lateral prefrontal, and lateral parietal cortex. Support for a positive relationship between cardiorespiratory fitness and medial temporal lobe volume was less consistent, although evident when a region-of-interest approach was implemented. In fMRI studies, cardiorespiratory fitness in older adults was associated with activation in similar regions as those identified in the structural studies, including anterior cingulate, lateral prefrontal, and lateral parietal cortex, despite heterogeneity among the functional tasks implemented. This comprehensive review highlights the overlap in brain regions showing a positive relationship with cardiorespiratory fitness in both structural and functional imaging modalities. The findings suggest that aerobic exercise and cardiorespiratory fitness contribute to healthy brain aging, although additional studies in Alzheimer's disease are needed.",
"title": ""
},
{
"docid": "1c14bf078018788f0ce3f38b5703eda0",
"text": "This paper presents a dual frequency band circularly polarized antenna system for Satellite Digital Audio Radio Service (SDARS) and Global Positioning System (GPS) applications. The proposed dual band antenna system consists of two circular patches each having four perpendicular slots for operating in two different frequency bands with left-handed circular polarization (LHCP) for SDARS and a right-handed circular polarization (RHCP) for GPS. The circular polarization characteristics for both antennas are obtained by choosing appropriate lengths of four perpendicular slots on the circular patches. The LHCP or RHCP radiation is readily obtained by selecting the proper feeding location. Results are presented in terms of the |S11|, radiation pattern and axial ratio.",
"title": ""
},
{
"docid": "a9f70ea201e17bca3b97f6ef7b2c1c15",
"text": "Network embedding task aims at learning low-dimension latent representations of vertices while preserving the structure of a network simultaneously. Most existing network embedding methods mainly focus on static networks, which extract and condense the network information without temporal information. However, in the real world, networks keep evolving, where the linkage states between the same vertex pairs at consequential timestamps have very close correlations. In this paper, we propose to study the network embedding problem and focus on modeling the linkage evolution in the dynamic network setting. To address this problem, we propose a deep dynamic network embedding method. More specifically, the method utilizes the historical information obtained from the network snapshots at past timestamps to learn latent representations of the future network. In the proposed embedding method, the objective function is carefully designed to incorporate both the network internal and network dynamic transition structures. Extensive empirical experiments prove the effectiveness of the proposed model on various categories of real-world networks, including a human contact network, a bibliographic network, and e-mail networks. Furthermore, the experimental results also demonstrate the significant advantages of the method compared with both the state-of-the-art embedding techniques and several existing baseline methods.",
"title": ""
},
{
"docid": "da7beedfca8e099bb560120fc5047399",
"text": "OBJECTIVE\nThis study aims to assess the relationship of late-night cell phone use with sleep duration and quality in a sample of Iranian adolescents.\n\n\nMETHODS\nThe study population consisted of 2400 adolescents, aged 12-18 years, living in Isfahan, Iran. Age, body mass index, sleep duration, cell phone use after 9p.m., and physical activity were documented. For sleep assessment, the Pittsburgh Sleep Quality Index questionnaire was used.\n\n\nRESULTS\nThe participation rate was 90.4% (n=2257 adolescents). The mean (SD) age of participants was 15.44 (1.55) years; 1270 participants reported to use cell phone after 9p.m. Overall, 56.1% of girls and 38.9% of boys reported poor quality sleep, respectively. Wake-up time was 8:17 a.m. (2.33), among late-night cell phone users and 8:03a.m. (2.11) among non-users. Most (52%) late-night cell phone users had poor sleep quality. Sedentary participants had higher sleep latency than their peers. Adjusted binary and multinomial logistic regression models showed that late-night cell users were 1.39 times more likely to have a poor sleep quality than non-users (p-value<0.001).\n\n\nCONCLUSION\nLate-night cell phone use by adolescents was associated with poorer sleep quality. Participants who were physically active had better sleep quality and quantity. As part of healthy lifestyle recommendations, avoidance of late-night cell phone use should be encouraged in adolescents.",
"title": ""
},
{
"docid": "bd09fb3d1c0ebdab1dba37bb5d8277a4",
"text": "Multimedia streaming applications typically experience high start-up delay, due to large protocol overheads and the poor delay, throughput, and loss properties of the Internet. Internet service providers can improve performance by caching the initial segment (the preex) of popular streams at proxies near the requesting clients. This paper analyzes the protocol and architectural challenges of realizing a preex-caching service within the RTP, RTSP, and RTCP standards. We emphasize how to exploit existing protocol features (such as Range requests), and suggest extensions to RTSP that would ease the development of new proxy services. In addition, we describe how to provide reliable transport, cache coherency, and RTCP feedback control under preex caching. Then, we present a preliminary implementation of preex caching on a Linux-based PC. We describe how the proxy operates with a commercial RTSP server and client. The paper ends with a summary of proposed extensions to RTSP and directions for future research.",
"title": ""
},
{
"docid": "840c74cc9f558b3b246ae36502b6f315",
"text": "Generative Adversarial Networks (GAN) have gained a lot of popularity from their introduction in 2014 till present. Research on GAN is rapidly growing and there are many variants of the original GAN focusing on various aspects of deep learning. GAN are perceived as the most impactful direction of machine learning in the last decade. This paper focuses on the application of GAN in autonomous driving including topics such as advanced data augmentation, loss function learning, semi-supervised learning, etc. We formalize and review key applications of adversarial techniques and discuss challenges and open problems to be addressed.",
"title": ""
},
{
"docid": "8bbc2ce1849d65425bece5ada5890b71",
"text": "The performance in higher secondary school education in India is a turning point in the academic lives of all students. As this academic performance is influenced by many factors, it is essential to develop predictive data mining model for students’ performance so as to identify the slow learners and study the influence of the dominant factors on their academic performance. In the present investigation, a survey cum experimental methodology was adopted to generate a database and it was constructed from a primary and a secondary source. While the primary data was collected from the regular students, the secondary data was gathered from the school and office of the Chief Educational Officer (CEO). A total of 1000 datasets of the year 2006 from five different schools in three different districts of Tamilnadu were collected. The raw data was preprocessed in terms of filling up missing values, transforming values in one form into another and relevant attribute/ variable selection. As a result, we had 772 student records, which were used for CHAID prediction model construction. A set of prediction rules were extracted from CHIAD prediction model and the efficiency of the generated CHIAD prediction model was found. The accuracy of the present model was compared with other model and it has been found to be satisfactory.",
"title": ""
},
{
"docid": "9571116e0d70a229970913e8b918b9be",
"text": "The reservoir capacity of dogs for Trypanosoma cruzi infection was analyzed in the city of Campeche, an urban town located in the Yucatan peninsula in Mexico. The city is inhabited by ~96,000 dogs and ~168,000 humans; Triatoma dimidiata is the only recognized vector. In the present study, we sampled 262 dogs (148 stray dogs and 114 pet dogs) and 2800 young people (ranging in age between 15 and 20 years old) and tested for T. cruzi antibodies by enzyme-linked immunosorbent assay, Indirect Immunofluorescence, and Western blotting serological assays. Seroprevalence in stray dogs was twice higher than in pet dogs (9.5% vs. 5.3%) with general seroprevalence of 7.6%. In humans, the observed seroprevalence was 76 times lower than in dogs (0.1% vs. 7.6%, respectively). Western blotting analysis showed that dogs' antibodies recognized different T. cruzi antigenic patterns than those for humans. In conclusion, T. cruzi infection in Campeche, Mexico, represents a low potential risk to inhabitants but deserves vigilance.",
"title": ""
},
{
"docid": "bf998f5d578e4b6412e67c24625d6716",
"text": "Bearings play a critical role in maintaining safety and reliability of rotating machinery. Bearings health condition prediction aims to prevent unexpected failures and minimize overall maintenance costs since it provides decision making information for condition-based maintenance. This paper proposes a Deep Belief Network (DBN)-based data-driven health condition prediction method for bearings. In this prediction method, a DBN is used as the predictor, which includes stacked RBMs and regression output. Our main contributions include development of a deep leaning-based data-driven prognosis solution that does not rely on explicit model equations and prognostic expertise, and providing comprehensive prediction results on five representative runto-failure bearings. The IEEE PHM 2012 challenge dataset is used to demonstrate the effectiveness of the proposed method, and the results are compared with two existing methods. The results show that the proposed method has promising performance in terms of short-term health condition prediction and remaining useful life prediction for bearings.",
"title": ""
}
] | scidocsrr |
471c86ef641f4ace978f2bacc14f2e98 | CUNY-UIUC-SRI TAC-KBP2011 Entity Linking System Description | [
{
"docid": "5c1d024cee59e16e7486c0a4b40ccd5e",
"text": "In this paper, we present a new ranking scheme, collaborative ranking (CR). In contrast to traditional non-collaborative ranking scheme which solely relies on the strengths of isolated queries and one stand-alone ranking algorithm, the new scheme integrates the strengths from multiple collaborators of a query and the strengths from multiple ranking algorithms. We elaborate three specific forms of collaborative ranking, namely, micro collaborative ranking (MiCR), macro collaborative ranking (MaCR) and micro-macro collaborative ranking (MiMaCR). Experiments on entity linking task show that our proposed scheme is indeed effective and promising.",
"title": ""
},
{
"docid": "84ced44b9f9a96714929ad78ed3f8732",
"text": "The CUNY-BLENDER team participated in the following tasks in TAC-KBP2010: Regular Entity Linking, Regular Slot Filling and Surprise Slot Filling task (per:disease slot). In the TAC-KBP program, the entity linking task is considered as independent from or a pre-processing step of the slot filling task. Previous efforts on this task mainly focus on utilizing the entity surface information and the sentence/document-level contextual information of the entity. Very little work has attempted using the slot filling results as feedback features to enhance entity linking. In the KBP2010 evaluation, the CUNY-BLENDER entity linking system explored the slot filling attributes that may potentially help disambiguate entity mentions. Evaluation results show that this feedback approach can achieve 9.1% absolute improvement on micro-average accuracy over the baseline using vector space model. For Regular Slot Filling we describe two bottom-up Information Extraction style pipelines and a top-down Question Answering style pipeline. Experiment results have shown that these pipelines are complementary and can be combined in a statistical re-ranking model. In addition, we present several novel approaches to enhance these pipelines, including query expansion, Markov Logic Networks based cross-slot/cross-system reasoning. Finally, as a diagnostic test, we also measured the impact of using external knowledge base and Wikipedia text mining on Slot Filling.",
"title": ""
}
] | [
{
"docid": "8c308305b4a04934126c4746c8333b52",
"text": "The authors report on the development of the Cancer Tissue Information Extraction System (caTIES)--an application that supports collaborative tissue banking and text mining by leveraging existing natural language processing methods and algorithms, grid communication and security frameworks, and query visualization methods. The system fills an important need for text-derived clinical data in translational research such as tissue-banking and clinical trials. The design of caTIES addresses three critical issues for informatics support of translational research: (1) federation of research data sources derived from clinical systems; (2) expressive graphical interfaces for concept-based text mining; and (3) regulatory and security model for supporting multi-center collaborative research. Implementation of the system at several Cancer Centers across the country is creating a potential network of caTIES repositories that could provide millions of de-identified clinical reports to users. The system provides an end-to-end application of medical natural language processing to support multi-institutional translational research programs.",
"title": ""
},
{
"docid": "1b7d2588cfa229aa3b2501a576be8cf2",
"text": "Hedonia (seeking pleasure and comfort) and eudaimonia (seeking to use and develop the best in oneself) are often seen as opposing pursuits, yet each may contribute to well-being in different ways. We conducted four studies (two correlational, one experience-sampling, and one intervention study) to determine outcomes associated with activities motivated by hedonic and eudaimonic aims. Overall, results indicated that: between persons (at the trait level) and within persons (at the momentary state level), hedonic pursuits related more to positive affect and carefreeness, while eudaimonic pursuits related more to meaning; between persons, eudaimonia related more to elevating experience (awe, inspiration, and sense of connection with a greater whole); within persons, hedonia related more negatively to negative affect; between and within persons, both pursuits related equally to vitality; and both pursuits showed some links with life satisfaction, though hedonia’s links were more frequent. People whose lives were high in both eudaimonia and hedonia had: higher degrees of most well-being variables than people whose lives were low in both pursuits (but did not differ in negative affect or carefreeness); higher positive affect and carefreeness than predominantly eudaimonic individuals; and higher meaning, elevating experience, and vitality than predominantly hedonic individuals. In the intervention study, hedonia produced more well-being benefits at short-term follow-up, while eudaimonia produced more at 3-month follow-up. The findings show that hedonia and eudaimonia occupy both overlapping and distinct niches within a complete picture of wellbeing, and their combination may be associated with the greatest well-being.",
"title": ""
},
{
"docid": "cbb8134d38905f9072d5eeec2fa82524",
"text": "Semiconductor manufacturing fabs generate huge amount of data. The big data approaches of data management have increased speed, quality and accessibility of the data. This paper discusses harnessing value from this data using predictive analytics methods. Various aspects predictive analytics in the context of semiconductor manufacturing are discussed. The limitations of standard methods of analysis and the need to adopt robust methods of modeling and analysis are highlighted. The robust prediction modeling method is implemented on wafer sensor data resulting in improved prediction ability of wafer quality characteristics.",
"title": ""
},
{
"docid": "05e3d07db8f5ecf3e446a28217878b56",
"text": "In this paper, we investigate the topic of gender identification for short length, multi-genre, content-free e-mails. We introduce for the first time (to our knowledge), psycholinguistic and gender-linked cues for this problem, along with traditional stylometric features. Decision tree and Support Vector Machines learning algorithms are used to identify the gender of the author of a given e-mail. The experiment results show that our approach is promising with an average accuracy of 82.2%.",
"title": ""
},
{
"docid": "73160df16943b2f788750b8f7141d290",
"text": "This letter proposes a double-sided printed bow-tie antenna for ultra wide band (UWB) applications. The frequency band considered is 3.1-10.6 GHz, which has been approved by the Federal Communications Commission as a commercial UWB band. The proposed antenna has a return loss less than 10 dB, phase linearity, and gain flatness over the above frequency band.",
"title": ""
},
{
"docid": "8cc12987072c983bc45406a033a467aa",
"text": "Vehicular drivers and shift workers in industry are at most risk of handling life critical tasks. The drivers traveling long distances or when they are tired, are at risk of a meeting an accident. The early hours of the morning and the middle of the afternoon are the peak times for fatigue driven accidents. The difficulty in determining the incidence of fatigue-related accidents is due, at least in part, to the difficulty in identifying fatigue as a causal or causative factor in accidents. In this paper we propose an alternative approach for fatigue detection in vehicular drivers using Respiration (RSP) signal to reduce the losses of the lives and vehicular accidents those occur due to cognitive fatigue of the driver. We are using basic K-means algorithm with proposed two modifications as classifier for detection of Respiration signal two state fatigue data recorded from the driver. The K-means classifiers [11] were trained and tested for wavelet feature of Respiration signal. The extracted features were treated as individual decision making parameters. From test results it could be found that some of the wavelet features could fetch 100 % classification accuracy.",
"title": ""
},
{
"docid": "3529e60736ef94de53f5f8e604509fc7",
"text": "Surgical workflow recognition has numerous potential medical applications, such as the automatic indexing of surgical video databases and the optimization of real-time operating room scheduling, among others. As a result, surgical phase recognition has been studied in the context of several kinds of surgeries, such as cataract, neurological, and laparoscopic surgeries. In the literature, two types of features are typically used to perform this task: visual features and tool usage signals. However, the used visual features are mostly handcrafted. Furthermore, the tool usage signals are usually collected via a manual annotation process or by using additional equipment. In this paper, we propose a novel method for phase recognition that uses a convolutional neural network (CNN) to automatically learn features from cholecystectomy videos and that relies uniquely on visual information. In previous studies, it has been shown that the tool usage signals can provide valuable information in performing the phase recognition task. Thus, we present a novel CNN architecture, called EndoNet, that is designed to carry out the phase recognition and tool presence detection tasks in a multi-task manner. To the best of our knowledge, this is the first work proposing to use a CNN for multiple recognition tasks on laparoscopic videos. Experimental comparisons to other methods show that EndoNet yields state-of-the-art results for both tasks.",
"title": ""
},
{
"docid": "f91daa578d75c6add8c7e4ce54fbd106",
"text": "Aviation spare parts provisioning is a highly complex problem. Traditionally, provisioning has been carried out using a conventional Poisson-based approach where inventory quantities are calculated separately for each part number and demands from different operations bases are consolidated into one single location. In an environment with multiple operations bases, however, such simplifications can lead to situations in which spares -- although available at another airport -- first have to be shipped to the location where the demand actually arose, leading to flight delays and cancellations. In this paper we demonstrate how simulation-based optimisation can help with the multi-location inventory problem by quantifying synergy potential between locations and how total service lifecycle cost can be further reduced without increasing risk right away from the Initial Provisioning (IP) stage onwards by taking into account advanced logistics policies such as pro-active re-balancing of spares between stocking locations.",
"title": ""
},
{
"docid": "6ef8db824a7b39100300b36e04c27578",
"text": "CRISPR/Cas9 is a rapidly developing genome editing technology that has been successfully applied in many organisms, including model and crop plants. Cas9, an RNA-guided DNA endonuclease, can be targeted to specific genomic sequences by engineering a separately encoded guide RNA with which it forms a complex. As only a short RNA sequence must be synthesized to confer recognition of a new target, CRISPR/Cas9 is a relatively cheap and easy to implement technology that has proven to be extremely versatile. Remarkably, in some plant species, homozygous knockout mutants can be produced in a single generation. Together with other sequence-specific nucleases, CRISPR/Cas9 is a game-changing technology that is poised to revolutionise basic research and plant breeding.",
"title": ""
},
{
"docid": "48072e0b5a49302982c643ae675f60c0",
"text": "News recommendation has become a big attraction with which major Web search portals retain their users. Contentbased Filtering and Collaborative Filtering are two effective methods, each serving a specific recommendation scenario. The Content-based Filtering approaches inspect rich contexts of the recommended items, while the Collaborative Filtering approaches predict the interests of long-tail users by collaboratively learning from interests of related users. We have observed empirically that, for the problem of news topic displaying, both the rich context of news topics and the long-tail users exist. Therefore, in this paper, we propose a Content-based Collaborative Filtering approach (CCF) to bring both Content-based Filtering and Collaborative Filtering approaches together. We found that combining the two is not an easy task, but the benefits of CCF are impressive. On one hand, CCF makes recommendations based on the rich contexts of the news. On the other hand, CCF collaboratively analyzes the scarce feedbacks from the long-tail users. We tailored this CCF approach for the news topic displaying on the Bing front page and demonstrated great gains in attracting users. In the experiments and analyses part of this paper, we discuss the performance gains and insights in news topic recommendation in Bing.",
"title": ""
},
{
"docid": "0cf3a201140e02039295a2ef4697a635",
"text": "In recent years, deep convolutional neural networks (ConvNet) have shown their popularity in various real world applications. To provide more accurate results, the state-of-the-art ConvNet requires millions of parameters and billions of operations to process a single image, which represents a computational challenge for general purpose processors. As a result, hardware accelerators such as Graphic Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs), have been adopted to improve the performance of ConvNet. However, GPU-based solution consumes a considerable amount of power and a traditional RTL design on FPGA requires tedious development that is very time-consuming. In this work, we propose a scalable and parameterized end-to-end ConvNet design using Intel FPGA SDK for OpenCL. To validate the design, we implement VGG 16 model on two different FPGA boards. Consequently, our designs achieve 306.41 GOPS on Intel Stratix A7 and 318.94 GOPS on Intel Arria 10 GX 10AX115. To the best of our knowledge, this outperforms previous FPGA-based accelerators. Compared to the CPU (Intel Xeon E5-2620) and a mid-range GPU (Nvidia K40), our design is 24.3X and 1.7X more energy efficient respectively.",
"title": ""
},
{
"docid": "eb4d350f389c6f046b81e4459fcb236c",
"text": "Customer relationship management (CRM) in business‐to‐business (B2B) e‐commerce Yun E. Zeng H. Joseph Wen David C. Yen Article information: To cite this document: Yun E. Zeng H. Joseph Wen David C. Yen, (2003),\"Customer relationship management (CRM) in business#to#business (B2B) e#commerce\", Information Management & Computer Security, Vol. 11 Iss 1 pp. 39 44 Permanent link to this document: http://dx.doi.org/10.1108/09685220310463722",
"title": ""
},
{
"docid": "982406008800456eaa147e6155963683",
"text": "[1] This study investigates how drought‐induced change in semiarid grassland community affected runoff and sediment yield in a small watershed in southeast Arizona, USA. Three distinct periods in ecosystem composition and associated runoff and sediment yield were identified according to dominant species: native bunchgrass (1974–2005), forbs (2006), and the invasive grass, Eragrostis lehmanniana (2007–2009). Precipitation, runoff, and sediment yield for each period were analyzed and compared at watershed and plot scales. Average watershed annual sediment yield was 0.16 t ha yr. Despite similarities in precipitation characteristics, decline in plant canopy cover during the transition period of 2006 caused watershed sediment yield to increase 23‐fold to 1.64 t ha yr comparing with preceding period under native bunchgrasses (0.06 t ha yr) or succeeding period under E. lehmanniana (0.06 t ha yr). In contrast, measurements on small runoff plots on the hillslopes of the same watershed showed a significant increase in sediment discharge that continued after E. lehmanniana replaced native grasses. Together, these findings suggest alteration in plant community increased sediment yield but that hydrological responses to this event differ at watershed and plot scales, highlighting the geomorphological controls at the watershed scale that determine sediment transport efficiency and storage. Resolving these scalar issues will help identify critical landform features needed to preserve watershed integrity under changing climate conditions.",
"title": ""
},
{
"docid": "3bb3c723e8342c8f5e466a591855591e",
"text": "Reputations that are transmitted from person to person can deter moral hazard and discourage entry by bad types in markets where players repeat transactions but rarely with the same player. On the Internet, information about past transactions may be both limited and potentially unreliable, but it can be distributed far more systematically than the informal gossip among friends that characterizes conventional marketplaces. One of the earliest and best known Internet reputation systems is run by eBay, which gathers comments from buyers and sellers about each other after each transaction. Examination of a large data set from 1999 reveals several interesting features of this system, which facilitates many millions of sales each month. First, despite incentives to free ride, feedback was provided more than half the time. Second, well beyond reasonable expectation, it was almost always positive. Third, reputation profiles were predictive of future performance. However, the net feedback scores that eBay displays encourages Pollyanna assessments of reputations, and is far from the best predictor available. Fourth, although sellers with better reputations were more likely to sell their items, they enjoyed no boost in price, at least for the two sets of items that we examined. Fifth, there was a high correlation between buyer and seller feedback, suggesting that the players reciprocate and retaliate.",
"title": ""
},
{
"docid": "21a356afff7c7b31895a3c11c2231d28",
"text": "There has been concern over the apparent conflict between privacy and data mining. There is no inherent conflict, as most types of data mining produce summary results that do not reveal information about individuals. The process of data mining may use private data, leading to the potential for privacy breaches. Secure Multiparty Computation shows that results can be produced without revealing the data used to generate them. The problem is that general techniques for secure multiparty computation do not scale to data-mining size computations. This paper presents an efficient protocol for securely determining the size of set intersection, and shows how this can be used to generate association rules where multiple parties have different (and private) information about the same set of individuals.",
"title": ""
},
{
"docid": "4eed1d650f0c3ce0f025364cf29724ee",
"text": "Cloud computing services including Infrastructure as a Service promise potential cost savings for businesses by offering remote, scalable computing resources. However attractive these services are, they pose significant security risks to customer applications and data beyond what is expected using traditional on-premises architecture. This paper identifies three basic types of threats related to the IaaS layer and eight kinds of attacks. These are aligned within the CIA model as a way to determine security risk.",
"title": ""
},
{
"docid": "515cbc485480e094320f23d142bd3b84",
"text": "Development of Emotional Intelligence Training for Certified Registered Nurse Anesthetists by Rickey King MSNA, Gooding Institute of Nurse Anesthesia, 2006 BSN, Jacksonville University, 2003 ASN, Oklahoma State University, 1988 Project Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Nursing Practice Walden University February 2016 Abstract The operating room is a high stress, high stakes, emotionally charged area with an interdisciplinary team that must work cohesively for the benefit of all. If an operating room staff does not understand those emotions, such a deficit can lead to decreased effective communication and an ineffectual response to problems. Emotional intelligence is a conceptual framework encompassing the ability to identify, assess, perceive, and manage emotions. The research question for this project is aimed at understanding how an educational intervention could help to improve the emotional intelligence of anesthetists and their ability to communicate with other operation room staff to produce effective problem solving. The purpose of this scholarly project was to design a 5-week evidence-based, educational intervention that will be implemented for 16 nurse anesthetists practicing in 3 rural hospitals in Southern Kentucky. The Emotional and Social Competency Inventory – University Edition will be offered to the nurse anesthetists prior to the educational intervention and 6 weeks post implementation to determine impact on the 12 core concepts of emotional intelligence which are categorized under self-awareness, social awareness, self-management, and relationship management. It is hoped that this project will improve emotional intelligence, which directly impacts interdisciplinary communication and produces effective problem solving and improved patient outcomes. The positive social change lies in the ability of the interdisciplinary participants to address stressful events benefitting patients, operating room personnel, and the anesthetist by decreasing negative outcomes and horizontal violence in the operating room.The operating room is a high stress, high stakes, emotionally charged area with an interdisciplinary team that must work cohesively for the benefit of all. If an operating room staff does not understand those emotions, such a deficit can lead to decreased effective communication and an ineffectual response to problems. Emotional intelligence is a conceptual framework encompassing the ability to identify, assess, perceive, and manage emotions. The research question for this project is aimed at understanding how an educational intervention could help to improve the emotional intelligence of anesthetists and their ability to communicate with other operation room staff to produce effective problem solving. The purpose of this scholarly project was to design a 5-week evidence-based, educational intervention that will be implemented for 16 nurse anesthetists practicing in 3 rural hospitals in Southern Kentucky. The Emotional and Social Competency Inventory – University Edition will be offered to the nurse anesthetists prior to the educational intervention and 6 weeks post implementation to determine impact on the 12 core concepts of emotional intelligence which are categorized under self-awareness, social awareness, self-management, and relationship management. 
It is hoped that this project will improve emotional intelligence, which directly impacts interdisciplinary communication and produces effective problem solving and improved patient outcomes. The positive social change lies in the ability of the interdisciplinary participants to address stressful events benefitting patients, operating room personnel, and the anesthetist by decreasing negative outcomes and horizontal violence in the operating room. Development of Emotional Intelligence Training for Certified Registered Nurse Anesthetists by Rickey King MSNA, Gooding Institute of Nurse Anesthesia, 2006 BSN, Jacksonville University, 2003 ASN, Oklahoma State University, 1988 Project Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Nursing Practice",
"title": ""
},
{
"docid": "ed9995e44ec14e26c0e8e8ee09a10d7c",
"text": "Information systems play a significant role in assisting and improving a university's operational work performance. The quality of IT service is also needed to provide an information system that matches with a university's needs. As a result, evaluations need to be conducted towards IT services in fulfilling the needs and providing satisfaction for information systems users. The purpose of this paper was to conduct a synthesis of the service work performance provided by the IT Division towards information systems users by using systematic literature review. The methodology used in this research was a literature study related with a COBIT and ITIL framework, as well as finding the interrelatedness between service managements toward an increase of IT work performance.",
"title": ""
},
{
"docid": "72f0041963a173ccf05facbd7d4f8075",
"text": "We propose a curiosity reward based on information theory principles and consistent with the animal instinct to maintain certain critical parameters within a bounded range. Our experimental validation shows the added value of the additional homeostatic drive to enhance the overall information gain of a reinforcement learning agent interacting with a complex environment using continuous actions. Our method builds upon two ideas: i) To take advantage of a new Bellman-like equation of information gain and ii) to simplify the computation of the local rewards by avoiding the approximation of complex distributions over continuous states and actions.",
"title": ""
},
{
"docid": "ad854ceb89e437ca59099453db33fa41",
"text": "Semi-supervised learning has recently emerged as a new paradigm in the machine learning community. It aims at exploiting simultaneously labeled and unlabeled data for classification. We introduce here a new semi-supervised algorithm. Its originality is that it relies on a discriminative approach to semisupervised learning rather than a generative approach, as it is usually the case. We present in details this algorithm for a logistic classifier and show that it can be interpreted as an instance of the Classification Expectation Maximization algorithm. We also provide empirical results on two data sets for sentence classification tasks and analyze the behavior of our methods.",
"title": ""
}
] | scidocsrr |
58d1f7d18ad2d0fb46ad8c16ac33e859 | Surveying Stylometry Techniques and Applications | [
{
"docid": "95e212c0b9b40b4dcb7dc4a94b0c0fd2",
"text": "In this paper we introduce and discuss a concept of syntactic n-grams (sn-grams). Sn-grams differ from traditional n-grams in the manner how we construct them, i.e., what elements are considered neighbors. In case of sn-grams, the neighbors are taken by following syntactic relations in syntactic trees, and not by taking words as they appear in a text, i.e., sn-grams are constructed by following paths in syntactic trees. In this manner, sn-grams allow bringing syntactic knowledge into machine learning methods; still, previous parsing is necessary for their construction. Sn-grams can be applied in any natural language processing (NLP) task where traditional n-grams are used. We describe how sn-grams were applied to authorship attribution. We used as baseline traditional n-grams of words, part of speech (POS) tags and characters; three classifiers were applied: support vector machines (SVM), naive Bayes (NB), and tree classifier J48. Sn-grams give better results with SVM classifier. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9d940e3fb357cfe03f0b206f816ea34f",
"text": "Plagiarism can be of many different natures, ranging from copying texts to adopting ideas, without giving credit to its originator. This paper presents a new taxonomy of plagiarism that highlights differences between literal plagiarism and intelligent plagiarism, from the plagiarist's behavioral point of view. The taxonomy supports deep understanding of different linguistic patterns in committing plagiarism, for example, changing texts into semantically equivalent but with different words and organization, shortening texts with concept generalization and specification, and adopting ideas and important contributions of others. Different textual features that characterize different plagiarism types are discussed. Systematic frameworks and methods of monolingual, extrinsic, intrinsic, and cross-lingual plagiarism detection are surveyed and correlated with plagiarism types, which are listed in the taxonomy. We conduct extensive study of state-of-the-art techniques for plagiarism detection, including character n-gram-based (CNG), vector-based (VEC), syntax-based (SYN), semantic-based (SEM), fuzzy-based (FUZZY), structural-based (STRUC), stylometric-based (STYLE), and cross-lingual techniques (CROSS). Our study corroborates that existing systems for plagiarism detection focus on copying text but fail to detect intelligent plagiarism when ideas are presented in different words.",
"title": ""
}
] | [
{
"docid": "36b4c028bcd92115107cf245c1e005c8",
"text": "CAPTCHA is now almost a standard security technology, and has found widespread application in commercial websites. Usability and robustness are two fundamental issues with CAPTCHA, and they often interconnect with each other. This paper discusses usability issues that should be considered and addressed in the design of CAPTCHAs. Some of these issues are intuitive, but some others have subtle implications for robustness (or security). A simple but novel framework for examining CAPTCHA usability is also proposed.",
"title": ""
},
{
"docid": "cf643602fc07aacbbbd21f249c85b857",
"text": "We propose an architecture that uses NAND flash memory to reduce main memory power in web server platforms. Our architecture uses a two level file buffer cache composed of a relatively small DRAM, which includes a primary file buffer cache, and a flash memory secondary file buffer cache. Compared to a conventional DRAM-only architecture, our architecture consumes orders of magnitude less idle power while remaining cost effective. This is a result of using flash memory, which consumes orders of magnitude less idle power than DRAM and is twice as dense. The client request behavior in web servers, allows us to show that the primary drawbacks of flash memory?endurance and long write latencies?can easily be overcome. In fact the wear-level aware management techniques that we propose are not heavily used.",
"title": ""
},
{
"docid": "772352a86880d517bbb6c1846e220a1e",
"text": "We discuss several state-of-the-art computationally cheap, as opposed to the polynomial time Interior Point algorithms, first order methods for minimizing convex objectives over “simple” large-scale feasible sets. Our emphasis is on the general situation of a nonsmooth convex objective represented by deterministic/stochastic First Order oracle and on the methods which, under favorable circumstances, exhibit (nearly) dimension-independent convergence rate.",
"title": ""
},
{
"docid": "32c44619bfd4013edaec5fc923cfd7a6",
"text": "Neural Machine Translation (NMT) is a new approach for autom atic translation of text from one human language into another. The basic concept in NMT is t o train a large Neural Network that maximizes the translation performance on a given p arallel corpus. NMT is gaining popularity in the research community because it outperform ed traditional SMT approaches in several translation tasks at WMT and other evaluation tas ks/benchmarks at least for some language pairs. However, many of the enhancements in SMT ove r the years have not been incorporated into the NMT framework. In this paper, we focus on one such enhancement namely domain adaptation. We propose an approach for adapting a NMT system to a new domain. The main idea behind domain adaptation is that the availability of large out-of-domain training data and a small in-domain training data. We report significant ga ins with our proposed method in both automatic metrics and a human subjective evaluation me tric on two language pairs. With our adaptation method, we show large improvement on the new d omain while the performance of our general domain only degrades slightly. In addition, o ur approach is fast enough to adapt an already trained system to a new domain within few hours wit hout the need to retrain the NMT model on the combined data which usually takes several da ys/weeks depending on the volume of the data.",
"title": ""
},
{
"docid": "7d3ef8bdc50bd2931d8cb31683b35e90",
"text": "This paper characterizes the performance of a generic $K$-tier cache-aided heterogeneous network (CHN), in which the base stations (BSs) across tiers differ in terms of their spatial densities, transmission powers, pathloss exponents, activity probabilities conditioned on the serving link and placement caching strategies. We consider that each user connects to the BS which maximizes its average received power and at the same time caches its file of interest. Modeling the locations of the BSs across different tiers as independent homogeneous Poisson Point processes (HPPPs), we derive closed-form expressions for the coverage probability and local delay experienced by a typical user in receiving each requested file. We show that our results for coverage probability and delay are consistent with those previously obtained in the literature for a single tier system.",
"title": ""
},
{
"docid": "afc6f1531a5b9ff3f7d0d93bc1ff3183",
"text": "Wounds are of a variety of types and each category has its own distinctive healing requirements. This realization has spurred the development of a myriad of wound dressings, each with specific characteristics. It is unrealistic to expect a singular dressing to embrace all characteristics that would fulfill generic needs for wound healing. However, each dressing may approach the ideal requirements by deviating from the 'one size fits all approach', if it conforms strictly to the specifications of the wound and the patient. Indeed, a functional wound dressing should achieve healing of the wound with minimal time and cost expenditures. This article offers an insight into several different types of polymeric materials clinically used in wound dressings and the events taking place at cellular level, which aid the process of healing, while the biomaterial dressing interacts with the body tissue. Hence, the significance of using synthetic polymer films, foam dressings, hydrocolloids, alginate dressings, and hydrogels has been reviewed, and the properties of these materials that conform to wound-healing requirements have been explored. A special section on bioactive dressings and bioengineered skin substitutes that play an active part in healing process has been re-examined in this work.",
"title": ""
},
{
"docid": "ee28b18ff5309a9e23f0bd33652acbde",
"text": "DC microgrids may have time-varying system structures and operation patterns due to the flexibility and uncertainty of distributed resources. This feature poses a challenge to conventional stability analysis methods, which are based on fixed and complete system models. To solve this problem, the concept of self-disciplined stabilization is introduced in this paper. A common stability discipline is established using the passivity-based control theory, which ensures that a microgrid is always stable as long as this discipline is complied by each individual converter. In this way, the stabilization task is localized to avoid investigating the entire microgrid, thereby providing immunity against system variations. Moreover, a passivity margin criterion is proposed to further enhance the stability margin of the self-disciplined control. The modified criterion imposes a tighter phase restriction to provide explicit phase margins and prevent under-damped transient oscillations. In line with this criterion, a practical control algorithm is also derived, which increases the converter's passivity through voltage feed forward. The major theoretical conclusions are verified by a laboratory DC microgrid test bench.",
"title": ""
},
{
"docid": "19e7b6c34c763952112c8492450de2b5",
"text": "Handling intellectual property involves the cognitive process of understanding the innovation described in the body of patent claims. In this paper we present an on-going project on a multi-level text simplification to assist experts in this complex task. Two levels of simplification procedure are described. The macro-level simplification results in the visualization of the hierarchy of multiple claims. The micro-level simplification includes visualization of the claim terminology, decomposition of the claim complex structure into a set of simple sentences and building a graph explicitly showing the interrelations of the invention elements. The methodology is implemented in an experimental text simplifying computer system. The motivation underlying this research is to develop tools that could increase the overall productivity of human users and machines in processing patent applications.",
"title": ""
},
{
"docid": "e28b0ab1bedd60ba83b8a575431ad549",
"text": "The Decision Model and Notation (DMN) is a standard notation to specify decision logic in business applications. A central construct in DMN is a decision table. The rising use of DMN decision tables to capture and to automate everyday business decisions fuels the need to support analysis tasks on decision tables. This paper presents an opensource DMN editor to tackle three analysis tasks: detection of overlapping rules, detection of missing rules and simplification of decision tables via rule merging. The tool has been tested on large decision tables derived from a credit lending data-set.",
"title": ""
},
{
"docid": "a95094552dad7270bdaa73e2c7351ab4",
"text": "Unlike most domestic livestock species, sheep are widely known as an animal with marked seasonality of breeding activity. The annual cycle of daily photoperiod has been identified as the determinant factor of this phenomenon, while environmental temperature, nutritional status, social interactions, lambing date and lactation period are considered to modulate it. The aim of this paper is to review the current state of knowledge of the reproductive seasonality in sheep. Following general considerations concerning the importance of seasonal breeding as a reproductive strategy for the survival of species, the paper describes the manifestations of seasonality in both the ram and the ewe. Both determinant and modulating factors are developed and special emphasis is given to the neuroendocrine base of photoperiodic regulation of seasonal breeding. Other aspects such as the role of melatonin, the involvement of thyroid hormones and the concept of photorefractoriness are also reviewed. © 2003 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "cac3a510f876ed255ff87f2c0db2ed8e",
"text": "The resurgence of cancer immunotherapy stems from an improved understanding of the tumor microenvironment. The PD-1/PD-L1 axis is of particular interest, in light of promising data demonstrating a restoration of host immunity against tumors, with the prospect of durable remissions. Indeed, remarkable clinical responses have been seen in several different malignancies including, but not limited to, melanoma, lung, kidney, and bladder cancers. Even so, determining which patients derive benefit from PD-1/PD-L1-directed immunotherapy remains an important clinical question, particularly in light of the autoimmune toxicity of these agents. The use of PD-L1 (B7-H1) immunohistochemistry (IHC) as a predictive biomarker is confounded by multiple unresolved issues: variable detection antibodies, differing IHC cutoffs, tissue preparation, processing variability, primary versus metastatic biopsies, oncogenic versus induced PD-L1 expression, and staining of tumor versus immune cells. Emerging data suggest that patients whose tumors overexpress PD-L1 by IHC have improved clinical outcomes with anti-PD-1-directed therapy, but the presence of robust responses in some patients with low levels of expression of these markers complicates the issue of PD-L1 as an exclusionary predictive biomarker. An improved understanding of the host immune system and tumor microenvironment will better elucidate which patients derive benefit from these promising agents.",
"title": ""
},
{
"docid": "8d02b303ad5fc96a082880d703682de4",
"text": "Feature engineering remains a major bottleneck when creating predictive systems from electronic medical records. At present, an important missing element is detecting predictive <italic>regular clinical motifs</italic> from <italic> irregular episodic records</italic>. We present <inline-formula><tex-math notation=\"LaTeX\">$\\mathtt {Deepr}$</tex-math> </inline-formula> (short for <italic>Deep</italic> <italic>r</italic>ecord), a new <italic>end-to-end</italic> deep learning system that learns to extract features from medical records and predicts future risk automatically. <inline-formula><tex-math notation=\"LaTeX\">$\\mathtt {Deepr}$</tex-math></inline-formula> transforms a record into a sequence of discrete elements separated by coded time gaps and hospital transfers. On top of the sequence is a convolutional neural net that detects and combines predictive local clinical motifs to stratify the risk. <inline-formula><tex-math notation=\"LaTeX\">$\\mathtt {Deepr}$</tex-math></inline-formula> permits transparent inspection and visualization of its inner working. We validate <inline-formula><tex-math notation=\"LaTeX\">$\\mathtt {Deepr}$ </tex-math></inline-formula> on hospital data to predict unplanned readmission after discharge. <inline-formula> <tex-math notation=\"LaTeX\">$\\mathtt {Deepr}$</tex-math></inline-formula> achieves superior accuracy compared to traditional techniques, detects meaningful clinical motifs, and uncovers the underlying structure of the disease and intervention space.",
"title": ""
},
{
"docid": "1489207c35a613d38a4f9c06816604f0",
"text": "Switching common-mode voltage (CMV) generated by the pulse width modulation (PWM) of the inverter causes common-mode currents, which lead to motor bearing failures and electromagnetic interference problems in multiphase drives. Such switching CMV can be reduced by taking advantage of the switching states of multilevel multiphase inverters that produce zero CMV. Specific space-vector PWM (SVPWM) techniques with CMV elimination, which only use zero CMV states, have been proposed for three-level five-phase drives, and for open-end winding five-, six-, and seven-phase drives, but such methods cannot be extended to a higher number of levels or phases. This paper presents a general (for any number of levels and phases) SVPMW with CMV elimination. The proposed technique can be applied to most multilevel topologies, has low computational complexity and is suitable for low-cost hardware implementations. The new algorithm is implemented in a low-cost field-programmable gate array and it is successfully tested in the laboratory using a five-level five-phase motor drive.",
"title": ""
},
{
"docid": "3a46f6ff14e4921fa9bcdfdc9064b754",
"text": "Deep learning on graph structures has shown exciting results in various applications. However, few attentions have been paid to the robustness of such models, in contrast to numerous research work for image or text adversarial attack and defense. In this paper, we focus on the adversarial attacks that fool deep learning models by modifying the combinatorial structure of data. We first propose a reinforcement learning based attack method that learns the generalizable attack policy, while only requiring prediction labels from the target classifier. We further propose attack methods based on genetic algorithms and gradient descent in the scenario where additional prediction confidence or gradients are available. We use both synthetic and real-world data to show that, a family of Graph Neural Network models are vulnerable to these attacks, in both graph-level and node-level classification tasks. We also show such attacks can be used to diagnose the learned classifiers.",
"title": ""
},
{
"docid": "bff8ad5f962f501b299a0f69a0a820fd",
"text": "Many methods for object recognition, segmentation, etc., rely on tessellation of an image into “superpixels”. A superpixel is an image patch which is better aligned with intensity edges than a rectangular patch. Superpixels can be extracted with any segmentation algorithm, however, most of them produce highly irregular superpixels, with widely varying sizes and shapes. A more regular space tessellation may be desired. We formulate the superpixel partitioning problem in an energy minimization framework, and optimize with graph cuts. Our energy function explicitly encourages regular superpixels. We explore variations of the basic energy, which allow a trade-off between a less regular tessellation but more accurate boundaries or better efficiency. Our advantage over previous work is computational efficiency, principled optimization, and applicability to 3D “supervoxel” segmentation. We achieve high boundary recall on 2D images and spatial coherence on video. We also show that compact superpixels improve accuracy on a simple application of salient object segmentation.",
"title": ""
},
{
"docid": "7282b16c6a433c318a93e270125777ff",
"text": "Background: Tooth extraction is associated with dimensional changes in the alveolar ridge. The aim was to examine the effect of single versus contiguous teeth extractions on the alveolar ridge remodeling. Material and Methods: Five female beagle dogs were randomly divided into three groups on the basis of location (anterior or posterior) and number of teeth extracted – exctraction socket classification: group 1 (one dog): single-tooth extraction; group 2 (two dogs): extraction of two teeth; and group 3 (two dogs): extraction of three teeth in four anterior sites and four posterior sites in both jaws. The dogs were sacrificed after 4 months. Sagittal sectioning of each extraction site was performed and evaluated using microcomputed tomography. Results: Buccolingual or palatal bone loss was observed 4 months after extraction in all three groups. The mean of the alveolar ridge width loss in group 1 (single-tooth extraction) was significantly less than those in groups 2 and 3 (p < .001) (multiple teeth extraction). Three-teeth extraction (group 3) had significantly more alveolar bone loss than two-teeth extraction (group 2) (p < .001). The three-teeth extraction group in the upper and lower showed more obvious resorption on the palatal/lingual side especially in the lower group posterior locations. Conclusion: Contiguous teeth extraction caused significantly more alveolar ridge bone loss as compared with when a single tooth is extracted.",
"title": ""
},
{
"docid": "05dc76d17fea57d22de982f9590e386b",
"text": "Hierarchical multi-label classification assigns a document to multiple hierarchical classes. In this paper we focus on hierarchical multi-label classification of social text streams. Concept drift, complicated relations among classes, and the limited length of documents in social text streams make this a challenging problem. Our approach includes three core ingredients: short document expansion, time-aware topic tracking, and chunk-based structural learning. We extend each short document in social text streams to a more comprehensive representation via state-of-the-art entity linking and sentence ranking strategies. From documents extended in this manner, we infer dynamic probabilistic distributions over topics by dividing topics into dynamic \"global\" topics and \"local\" topics. For the third and final phase we propose a chunk-based structural optimization strategy to classify each document into multiple classes. Extensive experiments conducted on a large real-world dataset show the effectiveness of our proposed method for hierarchical multi-label classification of social text streams.",
"title": ""
},
{
"docid": "7323cf16224197b312d1a4c7ff4168ea",
"text": "It is well known that animals can use neural and sensory feedback via vision, tactile sensing, and echolocation to negotiate obstacles. Similarly, most robots use deliberate or reactive planning to avoid obstacles, which relies on prior knowledge or high-fidelity sensing of the environment. However, during dynamic locomotion in complex, novel, 3D terrains, such as a forest floor and building rubble, sensing and planning suffer bandwidth limitation and large noise and are sometimes even impossible. Here, we study rapid locomotion over a large gap-a simple, ubiquitous obstacle-to begin to discover the general principles of the dynamic traversal of large 3D obstacles. We challenged the discoid cockroach and an open-loop six-legged robot to traverse a large gap of varying length. Both the animal and the robot could dynamically traverse a gap as large as one body length by bridging the gap with its head, but traversal probability decreased with gap length. Based on these observations, we developed a template that accurately captured body dynamics and quantitatively predicted traversal performance. Our template revealed that a high approach speed, initial body pitch, and initial body pitch angular velocity facilitated dynamic traversal, and successfully predicted a new strategy for using body pitch control that increased the robot's maximal traversal gap length by 50%. Our study established the first template of dynamic locomotion beyond planar surfaces, and is an important step in expanding terradynamics into complex 3D terrains.",
"title": ""
},
{
"docid": "4ecc49bb99ade138783899b6f9b47f16",
"text": "This paper compares direct reinforcement learning (no explicit model) and model-based reinforcement learning on a simple task: pendulum swing up. We nd that in this task model-based approaches support reinforcement learning from smaller amounts of training data and eecient handling of changing goals.",
"title": ""
}
] | scidocsrr |
c17a371ffefd9c5c0cbe17c77542b520 | FaceDate: a mobile cloud computing app for people matching | [
{
"docid": "9d29198002d601cc3d84f3c159c0b975",
"text": "Avatar is a system that leverages cloud resources to support fast, scalable, reliable, and energy efficient distributed computing over mobile devices. An avatar is a per-user software entity in the cloud that runs apps on behalf of the user's mobile devices. The avatars are instantiated as virtual machines in the cloud that run the same operating system with the mobile devices. In this way, avatars provide resource isolation and execute unmodified app components, which simplifies technology adoption. Avatar apps execute over distributed and synchronized (mobile device, avatar) pairs to achieve a global goal. The three main challenges that must be overcome by the Avatar system are: creating a high-level programming model and a middleware that enable effective execution of distributed applications on a combination of mobile devices and avatars, re-designing the cloud architecture and protocols to support billions of mobile users and mobile apps with very different characteristics from the current cloud workloads, and explore new approaches that balance privacy guarantees with app efficiency/usability. We have built a basic Avatar prototype on Android devices and Android x86 virtual machines. An application that searches for a lost child by analyzing the photos taken by people at a crowded public event runs on top of this prototype.",
"title": ""
},
{
"docid": "b76af76207fa3ef07e8f2fbe6436dca0",
"text": "Face recognition applications for airport security and surveillance can benefit from the collaborative coupling of mobile and cloud computing as they become widely available today. This paper discusses our work with the design and implementation of face recognition applications using our mobile-cloudlet-cloud architecture named MOCHA and its initial performance results. The challenge lies with how to perform task partitioning from mobile devices to cloud and distribute compute load among cloud servers (cloudlet) to minimize the response time given diverse communication latencies and server compute powers. Our preliminary simulation results show that optimal task partitioning algorithms significantly affect response time with heterogeneous latencies and compute powers. Motivated by these results, we design, implement, and validate the basic functionalities of MOCHA as a proof-of-concept, and develop algorithms that minimize the overall response time for face recognition. Our experimental results demonstrate that high-powered cloudlets are technically feasible and indeed help reduce overall processing time when face recognition applications run on mobile devices using the cloud as the backend servers.",
"title": ""
},
{
"docid": "fd455e27b023d849c59526655c5060da",
"text": "Face Detection is an important step in any face recognition systems, for the purpose of localizing and extracting face region from the rest of the images. There are many techniques, which have been proposed from simple edge detection techniques to advance techniques such as utilizing pattern recognition approaches. This paper evaluates two methods of face detection, her features and Local Binary Pattern features based on detection hit rate and detection speed. The algorithms were tested on Microsoft Visual C++ 2010 Express with OpenCV library. The experimental results show that Local Binary Pattern features are most efficient and reliable for the implementation of a real-time face detection system.",
"title": ""
},
{
"docid": "ac0dba7ea5465cf3827d04a15f54a01c",
"text": "As humans we live and interact across a wildly diverse set of physical spaces. We each formulate our own personal meaning of place using a myriad of observable cues such as public-private, large-small, daytime-nighttime, loud-quiet, and crowded-empty. Not surprisingly, it is the people with which we share such spaces that dominate our perception of place. Sometimes these people are friends, family and colleagues. More often, and particularly in public urban spaces we inhabit, the individuals who affect us are ones that we repeatedly observe and yet do not directly interact with - our Familiar Strangers. This paper explores our often ignored yet real relationships with Familiar Strangers. We describe several experiments and studies that led to designs for both a personal, body-worn, wireless device and a mobile phone based application that extend the Familiar Stranger relationship while respecting the delicate, yet important, constraints of our feelings and affinities with strangers in pubic places.",
"title": ""
}
] | [
{
"docid": "4ac3affdf995c4bb527229da0feb411d",
"text": "Algorand is a new cryptocurrency that confirms transactions with latency on the order of a minute while scaling to many users. Algorand ensures that users never have divergent views of confirmed transactions, even if some of the users are malicious and the network is temporarily partitioned. In contrast, existing cryptocurrencies allow for temporary forks and therefore require a long time, on the order of an hour, to confirm transactions with high confidence.\n Algorand uses a new Byzantine Agreement (BA) protocol to reach consensus among users on the next set of transactions. To scale the consensus to many users, Algorand uses a novel mechanism based on Verifiable Random Functions that allows users to privately check whether they are selected to participate in the BA to agree on the next set of transactions, and to include a proof of their selection in their network messages. In Algorand's BA protocol, users do not keep any private state except for their private keys, which allows Algorand to replace participants immediately after they send a message. This mitigates targeted attacks on chosen participants after their identity is revealed.\n We implement Algorand and evaluate its performance on 1,000 EC2 virtual machines, simulating up to 500,000 users. Experimental results show that Algorand confirms transactions in under a minute, achieves 125x Bitcoin's throughput, and incurs almost no penalty for scaling to more users.",
"title": ""
},
{
"docid": "0ef533dc071fb8ffedae4cb0e675a818",
"text": "BACKGROUND\nMajor social networking platforms, such as Facebook, WhatsApp, and Twitter, have become popular means through which people share health-related information, irrespective of whether messages disseminated through these channels are authentic.\n\n\nOBJECTIVE\nThis study aims to describe the demographic characteristics of patients that may demonstrate their attitudes toward medical information shared on social media networks. Second, we address how information found through social media affects the way people deal with their health. Third, we examine whether patients initiate or alter/discontinue their medications based on information derived from social media.\n\n\nMETHODS\nWe conducted a cross-sectional survey between April and June 2015 on patients attending outpatient clinics at King Abdulaziz University, Jeddah, Saudi Arabia. Patients who used social media (Facebook, WhatsApp, and Twitter) were included. We designed a questionnaire with closed-ended and multiple-choice questions to assess the type of social media platforms patients used and whether information received on these platforms influenced their health care decisions. We used chi-square test to establish the relationship between categorical variables.\n\n\nRESULTS\nOf the 442 patients who filled in the questionnaires, 401 used Facebook, WhatsApp, or Twitter. The majority of respondents (89.8%, 397/442) used WhatsApp, followed by Facebook (58.6%, 259/442) and Twitter (42.3%, 187/442). In most cases, respondents received health-related messages from WhatsApp and approximately 42.6% (171/401) reported ever stopping treatment as advised on a social media platform. A significantly higher proportion of patients without heart disease (P=.001) and obese persons (P=.01) checked the authenticity of information received on social media. Social media messages influenced decision making among patients without heart disease (P=.04). Respondents without heart disease (P=.001) and obese persons (P=.01) were more likely to discuss health-related information received on social media channels with a health care professional. A significant proportion of WhatsApp users reported that health-related information received on this platform influenced decisions regarding their family's health care (P=.001). Respondents' decisions regarding family health care were more likely to be influenced when they used two or all three types of platforms (P=.003).\n\n\nCONCLUSIONS\nHealth education in the digital era needs to be accurate, evidence-based, and regulated. As technologies continue to evolve, we must be equipped to face the challenges it brings with it.",
"title": ""
},
{
"docid": "4d089acf0f7e1bae074fc4d9ad8ee7e3",
"text": "The consequences of exodontia include alveolar bone resorption and ultimately atrophy to basal bone of the edentulous site/ridges. Ridge resorption proceeds quickly after tooth extraction and significantly reduces the possibility of placing implants without grafting procedures. The aims of this article are to describe the rationale behind alveolar ridge augmentation procedures aimed at preserving or minimizing the edentulous ridge volume loss. Because the goal of these approaches is to preserve bone, exodontia should be performed to preserve as much of the alveolar process as possible. After severance of the supra- and subcrestal fibrous attachment using scalpels and periotomes, elevation of the tooth frequently allows extraction with minimal socket wall damage. Extraction sockets should not be acutely infected and be completely free of any soft tissue fragments before any grafting or augmentation is attempted. Socket bleeding that mixes with the grafting material seems essential for success of this procedure. Various types of bone grafting materials have been suggested for this purpose, and some have shown promising results. Coverage of the grafted extraction site with wound dressing materials, coronal flap advancement, or even barrier membranes may enhance wound stability and an undisturbed healing process. Future controlled clinical trials are necessary to determine the ideal regimen for socket augmentation.",
"title": ""
},
{
"docid": "aea4eb371579b66c75c4cc4d51201253",
"text": "Fog computing based radio access network is a promising paradigm for the fifth generation wireless communication system to provide high spectral and energy efficiency. With the help of the new designed fog computing based access points (F-APs), the user-centric objectives can be achieved through the adaptive technique and will relieve the load of fronthaul and alleviate the burden of base band unit pool. In this paper, we derive the coverage probability and ergodic rate for both F-AP users and device-to-device users by taking into account the different nodes locations, cache sizes as well as user access modes. Particularly, the stochastic geometry tool is used to derive expressions for above performance metrics. Simulation results validate the accuracy of our analysis and we obtain interesting tradeoffs that depend on the effect of the cache size, user node density, and the quality of service constrains on the different performance metrics.",
"title": ""
},
{
"docid": "7c86f51a35fbfe0b30c310ca90ca5109",
"text": "In this paper, a phase control scheme for Class-DE-E dc-dc converter is proposed and its performance is clarified. The proposed circuit is composed of phase-controlled Class-DE inverter and Class-E rectifier. The proposed circuit achieves the fixed frequency control without frequency harmonics lower than the switching frequency. Moreover, it is possible to achieve the continuous control in a wide range of the line and load variations. The output voltage decreases in proportion to the increase of the phase shift. The proposed converter keeps the advantages of Class-DE-E dc-dc converter, namely, a high power conversion efficiency under a high-frequency operation and low switch-voltage stress. Especially, high power conversion efficiency can be kept for narrow range control. We present numerical calculations for the design and the numerical analyses to clarify the characteristics of the proposed control. By carrying out circuit experiments, we show a quantitative similarity between the numerical predictions and the experimental results. In our experiments, the measured efficiency is over 84% with 2.5 W output power for 1.0-MHz operating frequency at the nominal operation. Moreover, the output voltage is regulated from 100% to 39%, keeping over 57% power conversion efficiency by using the proposed control scheme.",
"title": ""
},
{
"docid": "8da6cc5c6a8a5d45fadbab8b7ca8b71f",
"text": "Feature detection and description is a pivotal step in many computer vision pipelines. Traditionally, human engineered features have been the main workhorse in this domain. In this paper, we present a novel approach for learning to detect and describe keypoints from images leveraging deep architectures. To allow for a learning based approach, we collect a large-scale dataset of patches with matching multiscale keypoints. The proposed model learns from this vast dataset to identify and describe meaningful keypoints. We evaluate our model for the effectiveness of its learned representations for detecting multiscale keypoints and describing their respective support regions.",
"title": ""
},
{
"docid": "03d0fad1fa59e181a176bdf09b57ba58",
"text": "Steganography refers to techniques that hide information inside innocuous looking objects known as “Cover Objects” such that its very existence remains concealed to any unintended recipient. Images are pervasive in day to day applications and have high redundancy in representation. Thus, they are appealing contenders to be used as cover objects. There are a large number of image steganography techniques proposed till date but negligible research has been done on the development of a standard quality evaluation model for judging their performance. Existence of such a model is important for fueling the development of superior techniques and also paves the way for the improvement of the existing ones. However, the common quality parameters often considered for performance evaluation of an image steganography technique are insufficient for overall quantitative evaluation. This paper proposes a rating scale based quality evaluation model for image steganography algorithms that utilizes both quantitative parameters and observation heuristics. Different image steganography techniques have been evaluated using proposed model and quantitative performance scores for each of the techniques have been derived. The scores have been observed to be in accordance with actual literature and the system is simple, efficient and flexible.",
"title": ""
},
{
"docid": "906e7a5c855597356858e326bd6023db",
"text": "This paper proposes an online transfer framework to capture the interaction among agents and shows that current transfer learning in reinforcement learning is a special case of online transfer. Furthermore, this paper re-characterizes existing agents-teaching-agents methods as online transfer and analyze one such teaching method in three ways. First, the convergence of Q-learning and Sarsa with tabular representation with a finite budget is proven. Second, the convergence of Qlearning and Sarsa with linear function approximation is established. Third, the we show the asymptotic performance cannot be hurt through teaching. Additionally, all theoretical results are empirically validated.",
"title": ""
},
{
"docid": "d880535f198a1f0a26b18572f674b829",
"text": "Human Activity Recognition (HAR) aims to identify the actions performed by humans using signals collected from various sensors embedded in mobile devices. In recent years, deep learning techniques have further improved HAR performance on several benchmark datasets. In this paper, we propose one-dimensional Convolutional Neural Network (1D CNN) for HAR that employs a divide and conquer-based classifier learning coupled with test data sharpening. Our approach leverages a two-stage learning of multiple 1D CNN models; we first build a binary classifier for recognizing abstract activities, and then build two multi-class 1D CNN models for recognizing individual activities. We then introduce test data sharpening during prediction phase to further improve the activity recognition accuracy. While there have been numerous researches exploring the benefits of activity signal denoising for HAR, few researches have examined the effect of test data sharpening for HAR. We evaluate the effectiveness of our approach on two popular HAR benchmark datasets, and show that our approach outperforms both the two-stage 1D CNN-only method and other state of the art approaches.",
"title": ""
},
{
"docid": "c31c77a450d0ae67e19d72f4d352ff45",
"text": "Data stream processing is currently gaining importance due to the developments in novel application areas like escience, e-health, and e-business (considering RFID, for example). Focusing on e-science, it can be observed that scientific experiments and observations in many fields, e. g., in physics and astronomy, create huge volumes of data which have to be interchanged and processed. With experimental and observational data coming in particular from sensors, online simulations, etc., the data has an inherently streaming nature. Furthermore, continuing advances will result in even higher data volumes, rendering storing all of the delivered data prior to processing increasingly impractical. Hence, in such e-science scenarios, processing and sharing of data streams will play a decisive role. It will enable new possibilities for researchers, since they will be able to subscribe to interesting data streams of other scientists without having to set up their own devices or experiments. This results in much better utilization of expensive equipment such as telescopes, satellites, etc. Further, processing and sharing data streams on-the-fly in the network helps to reduce network traffic and to avoid network congestion. Thus, even huge streams of data can be handled efficiently by removing unnecessary parts early on, e. g., by early filtering and aggregation, and by sharing previously generated data streams and processing results. To enable these optimizations, we use Peer-to-Peer (P2P) networking techniques. P2P has gained a lot of attention in the context of exchanging persistent data—in particular for file sharing. In contrast to that, we apply P2P networks for the dissemination of individually subscribed and transformed data streams, allowing for data stream sharing. By using the computational capabilities of peers in the P2P network, we can push data stream transforming operators into the network, thus enabling efficient in-network",
"title": ""
},
{
"docid": "3d9fe9c30d09a9e66f7339b0ad24edb7",
"text": "Due to progress in wired and wireless home networking, sensor networks, networked appliances, mechanical and control engineering, and computers, we can build smart homes, and many smart home projects are currently proceeding throughout the world. However, we have to be careful not to repeat the same mistake that was made with home automation technologies that were booming in the 1970s. That is, [total?] automation should not be a goal of smart home technologies. I believe the following points are important in construction of smart homes from users¿ viewpoints: development of interface technologies between humans and systems for detection of human intensions, feelings, and situations; improvement of system knowledge; and extension of human activity support outside homes to the scopes of communities, towns, and cities.",
"title": ""
},
{
"docid": "00bfce08da755a4e139ae4507ed28141",
"text": "Multiple-view stereo reconstruction is a key step in image-based 3D acquisition and patchmatch based method is suited for large scale scene reconstruction. In this paper we extend the two-view patchmatch stereo to multiple-view in the multiple-view stereo pipeline. The key of the proposed method is to select multiple suitable neighboring images for a reference image, compute the depth-maps and merge the depth-maps. Experimental results on benchmark data sets demonstrate the accuracy and efficiency of the proposed method.",
"title": ""
},
{
"docid": "d2086d9c52ca9d4779a2e5070f9f3009",
"text": "Though action recognition based on complete videos has achieved great success recently, action prediction remains a challenging task as the information provided by partial videos is not discriminative enough for classifying actions. In this paper, we propose a Deep Residual Feature Learning (DeepRFL) framework to explore more discriminative information from partial videos, achieving similar representations as those of complete videos. The proposed method is based on residual learning, which captures the salient differences between partial videos and their corresponding full videos. The partial videos can attain the missing information by learning from features of complete videos and thus improve the discriminative power. Moreover, our model can be trained efficiently in an end-to-end fashion. Extensive evaluations on the challenging UCF101 and HMDB51 datasets demonstrate that the proposed method outperforms state-of-the-art results.",
"title": ""
},
{
"docid": "340719ea1342dd9161377260d0483acf",
"text": "This paper presents a methodology for transforming business designs written in OMG's standard Semantics of Business Vocabulary and Rules (SBVR) framework, into a set of UML models. It involves the transformation of business vocabulary and rules written in SBVR's \"Structured English\" into a set of UML diagrams, which includes Activity Diagram(AD), Sequence Diagram(SD), and Class Diagram(CD). This transformation works by detecting the distinction between rules which will participate in the construction of Activity Diagram and rules which do not. These rules are imperative in nature. The work in the paper also includes the detection of activities embedded implicitly in those rules and establishment of sequence between those activities. These activities incur some action. We also detect their owner and refer to them as the doer of the action. This plays a very important role in the development of Class Diagrams",
"title": ""
},
{
"docid": "5ca70b1db134da98da7bd2cd2a6746b5",
"text": "Pattern matching is essential in applications such as deep-packet inspection (DPI), searching on genomic data, or analyzing medical data. A simple task to do on plaintext data, pattern matching is much harder to do when the privacy of the data must be preserved. Existent solutions involve searchable encryption mechanisms with at least one of these three drawbacks: requiring an exhaustive (and static) list of keywords to be prepared before the data is encrypted (like in symmetric searchable encryption); requiring tokenization, i.e., breaking up the data to search into substrings and encrypting them separately (e.g., like BlindBox); relying on symmetrickey cryptography, thus implying a token-regeneration step for each encrypted-data source (e.g., user). Such approaches are ill-suited for pattern-matching with evolving patterns (e.g., updating virus signatures), variable searchword lengths, or when a single entity must filter ciphertexts from multiple parties. In this work, we introduce Searchable Encryption with Shiftable Trapdoors (SEST): a new primitive that allows for pattern matching with universal tokens (usable by all entities), in which keywords of arbitrary lengths can be matched to arbitrary ciphertexts. Our solution uses public-key encryption and bilinear pairings. It consists of projecting keywords on polynomials of degree equal to the length of the keyword and using a sliding-window-like technique to match the trapdoor to successive fragments of the encrypted data. In addition, very minor modifications to our solution enable it to take into account regular expressions, such as fullyor partly-unknown characters in a keyword (wildcards and interval/subset searches). Our trapdoor size is at most linear in the keyword length (and independent of the plaintext size), and we prove that the leakage to the searcher is only the trivial one: since the searcher learns whether the pattern occurs and where, it can distinguish based on different search results of a single trapdoor on two different plaintexts. To better show the usability of our scheme, we implemented it to run DPI on all the SNORT rules. We show that even for very large plaintexts, our encryption algorithm scales well. The patternmatching algorithm is slightly slower, but extremely parallelizable, and it can thus be run even on very large data. Although our proofs use a (marginally) interactive assumption, we argue that this is a relatively small price to pay for the flexibility and privacy that we are able to attain.",
"title": ""
},
{
"docid": "065740786a7fcb2e63df4103ea0ede59",
"text": "Accumulating glycine betaine through the ButA transport system from an exogenous supply is a survival strategy employed by Tetragenococcus halophilus, a moderate halophilic lactic acid bacterium with crucial role in flavor formation of high-salt food fermentation, to achieve cellular protection. In this study, we firstly confirmed that butA expression was up-regulated under salt stress conditions by quantitative reverse transcription polymerase chain reaction (qRT-PCR). Subsequently, we discovered that recombinant Escherichia coli MKH13 strains with single- and double-copy butA complete expression box(es) showed typical growth curves while they differed in their salt adaption and tolerance. Meanwhile, high-performance liquid chromatography (HPLC) experiments confirmed results obtained from growth curves. In summary, our results indicated that regulation of butA expression was salt-induced and double-copy butA cassettes entrusted a higher ability of salt adaption and tolerance to E. coli MKH13, which implied the potential of muti-copies of butA gene in the genetic modification of T. halophilus for improvement of salt tolerance and better industrial application.",
"title": ""
},
{
"docid": "510cbd4c2a27140f6a8da04fdbc3cb1e",
"text": "Although relevance judgments are fundamental to the design and evaluation of all information retrieval systems, information scientists have not reached a consensus in defining the central concept of relevance. In this paper we ask two questions: What is the meaning of relevance? and What role does relevance play in information behavior? We attempt to address these questions by reviewing literature over the last 30 years that presents various views of relevance as topical, user-oriented, multidimensional, cognitive, and dynamic. We then discuss traditional assumptions on which most research in the field has been based and begin building a case for an approach to the problem of definition based on alternative assumptions. The dynamic, situational approach we suggest views the user-regardless of system-as the central and active determinant of the dimensions of relevance. We believe that relevance is a multidimensional concept; that it is dependent on both internal (cognitive) and external (situational) factors; that it is based on a dynamic human judgment process; and that it is a complex but systematic and mea-",
"title": ""
},
{
"docid": "c3318c1f2750c26fcc518638a6cb52ee",
"text": "The tremendous success of machine learning algorithms at image recognition tasks in recent years intersects with a time of dramatically increased use of electronic medical records and diagnostic imaging. This review introduces the machine learning algorithms as applied to medical image analysis, focusing on convolutional neural networks, and emphasizing clinical aspects of the field. The advantage of machine learning in an era of medical big data is that significant hierarchal relationships within the data can be discovered algorithmically without laborious hand-crafting of features. We cover key research areas and applications of medical image classification, localization, detection, segmentation, and registration. We conclude by discussing research obstacles, emerging trends, and possible future directions.",
"title": ""
},
{
"docid": "fdbdac5f319cd46aeb73be06ed64cbb9",
"text": "Recently deep neural networks (DNNs) have been used to learn speaker features. However, the quality of the learned features is not sufficiently good, so a complex back-end model, either neural or probabilistic, has to be used to address the residual uncertainty when applied to speaker verification. This paper presents a convolutional time-delay deep neural network structure (CT-DNN) for speaker feature learning. Our experimental results on the Fisher database demonstrated that this CT-DNN can produce high-quality speaker features: even with a single feature (0.3 seconds including the context), the EER can be as low as 7.68%. This effectively confirmed that the speaker trait is largely a deterministic short-time property rather than a longtime distributional pattern, and therefore can be extracted from just dozens of frames.",
"title": ""
},
{
"docid": "c0ee7bd21a1a261a73f7b831c655ca00",
"text": "NMDA receptors are preeminent neurotransmitter-gated channels in the CNS, which respond to glutamate in a manner that integrates multiple external and internal cues. They belong to the ionotropic glutamate receptor family and fulfil unique and crucial roles in neuronal development and function. These roles depend on characteristic response kinetics, which reflect the operation of the receptors. Here, we review biologically salient features of the NMDA receptor signal and its mechanistic origins. Knowledge of distinctive NMDA receptor biophysical properties, their structural determinants and physiological roles is necessary to understand the physiological and neurotoxic actions of glutamate and to design effective therapeutics.",
"title": ""
}
] | scidocsrr |