field                type            range
query_id             stringlengths   32 - 32
query                stringlengths   5 - 5.38k
positive_passages    listlengths     1 - 23
negative_passages    listlengths     4 - 100
subset               stringclasses   7 values
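The schema above matches a standard Hugging Face `datasets` layout, so rows like the examples below can be loaded and inspected directly. A minimal sketch follows; the repository identifier is a placeholder (this preview does not name the repository), and the field access simply mirrors the columns listed in the table.

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual dataset path.
ds = load_dataset("your-org/your-retrieval-dataset", split="train")

row = ds[0]
print(row["query_id"])                    # 32-character identifier
print(row["query"])                       # query text (5 to ~5.38k characters)
print(len(row["positive_passages"]))      # between 1 and 23 passages
print(len(row["negative_passages"]))      # between 4 and 100 passages
print(row["subset"])                      # one of 7 subset names, e.g. "scidocsrr"

# Each passage entry is a dict with "docid", "text", and "title" keys.
first_positive = row["positive_passages"][0]
print(first_positive["docid"], first_positive["title"], first_positive["text"][:80])
```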
e0551968ae38bf34b3fdc11cc6ee79e9
TCGA Expedition: A Data Acquisition and Management System for TCGA Data
[ { "docid": "cc3788c4690446efe9a0a3eea38ee832", "text": "Papillary thyroid carcinoma (PTC) is the most common type of thyroid cancer. Here, we describe the genomic landscape of 496 PTCs. We observed a low frequency of somatic alterations (relative to other carcinomas) and extended the set of known PTC driver alterations to include EIF1AX, PPM1D, and CHEK2 and diverse gene fusions. These discoveries reduced the fraction of PTC cases with unknown oncogenic driver from 25% to 3.5%. Combined analyses of genomic variants, gene expression, and methylation demonstrated that different driver groups lead to different pathologies with distinct signaling and differentiation characteristics. Similarly, we identified distinct molecular subgroups of BRAF-mutant tumors, and multidimensional analyses highlighted a potential involvement of oncomiRs in less-differentiated subgroups. Our results propose a reclassification of thyroid cancers into molecular subtypes that better reflect their underlying signaling and differentiation properties, which has the potential to improve their pathological classification and better inform the management of the disease.", "title": "" } ]
[ { "docid": "f9eed4f99d70c51dc626a61724540d3c", "text": "A soft-start circuit with soft-recovery function for DC-DC converters is presented in this paper. The soft-start strategy is based on a linearly ramped-up reference and an error amplifier with minimum selector implemented with a three-limb differential pair skillfully. The soft-recovery strategy is based on a compact clamp circuit. The ramp voltage would be clamped once the feedback voltage is detected lower than a threshold, which could control the output to be recovered slowly and linearly. A monolithic DC-DC buck converter with proposed circuit has been fabricated with a 0.5μm CMOS process for validation. The measurement result shows that the ramp-based soft-start and soft-recovery circuit have good performance and agree well with the theoretical analysis.", "title": "" }, { "docid": "0da9197d2f6839d01560b46cbb1fbc8d", "text": "Estimating the traversability of rough terrain is a critical task for an outdoor mobile robot. While classifying structured environment can be learned from large number of training data, it is an extremely difficult task to learn and estimate traversability of unstructured rough terrain. Moreover, in many cases information from a single sensor may not be sufficient for estimating traversability reliably in the absence of artificial landmarks such as lane markings or curbs. Our approach estimates traversability of the terrain and build a 2D probabilistic grid map online using 3D-LIDAR and camera. The combination of LIDAR and camera is favoured in many robotic application because they provide complementary information. Our approach assumes the data captured by these two sensors are independent and build separate traversability maps, each with information captured from one sensor. Traversability estimation with vision sensor autonomously collects training data and update classifier without human intervention as the vehicle traverse the terrain. Traversability estimation with 3D-LIDAR measures the slopes of the ground to predict the traversability. Two independently built probabilistic maps are fused using Bayes' rule to improve the detection performance. This is in contrast with other methods in which each sensor performs different tasks. We have implemented the algorithm on a UGV(Unmanned Ground Vehicle) and tested our approach on a rough terrain to evaluate the detection performance.", "title": "" }, { "docid": "9ebdf3493d6a80d12c97348a2d203d3e", "text": "Agile software development methodologies have been greeted with enthusiasm by many software developers, yet their widespread adoption has also resulted in closer examination of their strengths and weaknesses. While analyses and evaluations abound, the need still remains for an objective and systematic appraisal of Agile processes specifically aimed at defining strategies for their improvement. We provide a review of the strengths and weaknesses identified in Agile processes, based on which a strengths- weaknesses-opportunities-threats (SWOT) analysis of the processes is performed. We suggest this type of analysis as a useful tool for highlighting and addressing the problem issues in Agile processes, since the results can be used as improvement strategies.", "title": "" }, { "docid": "4476e4616e727c9c0f003acebb1a4933", "text": "We argue that the optimization plays a crucial role in generalization of deep learning models through implicit regularization. 
We do this by demonstrating that generalization ability is not controlled by network size but rather by some other implicit control. We then demonstrate how changing the empirical optimization procedure can improve generalization, even if actual optimization quality is not affected. We do so by studying the geometry of the parameter space of deep networks, and devising an optimization algorithm attuned to this geometry.", "title": "" }, { "docid": "f7c92b4342944a1f937f19b144a61d8a", "text": "Randomization in randomized controlled trials involves more than generation of a random sequence by which to assign subjects. For randomization to be successfully implemented, the randomization sequence must be adequately protected (concealed) so that investigators, involved health care providers, and subjects are not aware of the upcoming assignment. The absence of adequate allocation concealment can lead to selection bias, one of the very problems that randomization was supposed to eliminate. Authors of reports of randomized trials should provide enough details on how allocation concealment was achieved so the reader can determine the likelihood of success. Fortunately, a plan of allocation concealment can always be incorporated into the design of a randomized trial. Certain methods minimize the risk of concealment failing more than others. Keeping knowledge of subjects' assignment after allocation from subjects, investigators/health care providers, or those assessing outcomes is referred to as masking (also known as blinding). The goal of masking is to prevent ascertainment bias. In contrast to allocation concealment, masking cannot always be incorporated into a randomized controlled trial. Both allocation concealment and masking add to the elimination of bias in randomized controlled trials.", "title": "" }, { "docid": "58917e3cbb1542185ac1af9edcf950eb", "text": "The Energy Committee of the Royal Swedish Academy of Sciences has in a series of projects gathered information and knowledge on renewable energy from various sources, both within and outside the academic world. In this article, we synthesize and summarize some of the main points on renewable energy from the various Energy Committee projects and the Committee’s Energy 2050 symposium, regarding energy from water and wind, bioenergy, and solar energy. We further summarize the Energy Committee’s scenario estimates of future renewable energy contributions to the global energy system, and other presentations given at the Energy 2050 symposium. In general, international coordination and investment in energy research and development is crucial to enable future reliance on renewable energy sources with minimal fossil fuel use.", "title": "" }, { "docid": "4d5820e9e137c96d4d63e25772c577c6", "text": "facial topography clinical anatomy of the face upsky facial topography: clinical anatomy of the face by joel e facial topography clinical anatomy of the face [c796.ebook] free ebook facial topography: clinical the anatomy of the aging face: volume loss and changes in facial topographyclinical anatomy of the face ebook facial anatomy mccc dmca / copyrighted works removal title anatomy for plastic surgery thieme medical publishers the face sample quintessence publishing! 
facial anatomy 3aface academy facial topography clinical anatomy of the face ebook download the face der medizinverlag facial topography clinical anatomy of the face liive facial topography clinical anatomy of the face user clinical anatomy of the head univerzita karlova pdf download the face: pictorial atlas of clinical anatomy clinical anatomy anatomic landmarks for localisation of j m perry co v commissioner internal bouga international journal of anatomy and research, case report anatomy and physiology of the aging neck the clinics topographical anatomy of the head eng nikolaizarovo crc title list: change of ownership a guide to childrens books about asian americans fractography: observing, measuring and interpreting nystce students with disabilities study guide tibca army ranger survival guide compax sharp grill 2 convection manual iwsun nursing diagnosis handbook 9th edition apa citation the surgical management of facial nerve injury lipteh the outermost house alongz cosmetic voted best plastic surgeon in dallas texas c tait a dachau 1933 1945 teleip select your ebook amazon s3 quotation of books all india institute of medical latest ten anatomy acquisitions british dental association lindens complete auto repair reviews mires department of topographic anatomy and operative surgery", "title": "" }, { "docid": "b2895d35c6ffddfb9adc7c1d88cef793", "text": "We develop algorithms for a stochastic appointment sequencing and scheduling problem with waiting time, idle time, and overtime costs. Scheduling surgeries in an operating room motivates the work. The problem is formulated as an integer stochastic program using sample average approximation. A heuristic solution approach based on Benders’ decomposition is developed and compared to exact methods and to previously proposed approaches. Extensive computational testing based on real data shows that the proposed methods produce good results compared to previous approaches. In addition we prove that the finite scenario sample average approximation problem is NP-complete.", "title": "" }, { "docid": "2545a267cedac5924ecfceeddc01a4dc", "text": "The Transport Layer Security (TLS) protocol is a de facto standard of secure client-server communication on the Internet. Its security can be diminished by a variety of attacks that leverage on weaknesses in its design and implementations. An example of a major weakness is the public-key infrastructure (PKI) that TLS deploys, which is a weakest-link system and introduces hundreds of links (i.e., trusted entities). Consequently, an adversary compromising a single trusted entity can impersonate any website. Notary systems, based on multi-path probing, were early and promising proposals to detect and prevent such attacks. Unfortunately, despite their benefits, they are not widely deployed, mainly due to their long-standing unresolved problems. In this paper, we present Persistent and Accountable Domain Validation (PADVA), which is a next-generation TLS notary service. PADVA combines the advantages of previous proposals, enhancing them, introducing novel mechanisms, and leveraging a blockchain platform which provides new features. PADVA keeps notaries auditable and accountable, introduces service-level agreements and mechanisms to enforce them, relaxes availability requirements for notaries, and works with the legacy TLS ecosystem. 
We implemented and evaluated PADVA, and our experiments indicate its efficiency and deployability.", "title": "" }, { "docid": "355fca41993ea19b08d2a9fc19e25722", "text": "People and companies selling goods or providing services have always desired to know what people think about their products. The number of opinions on the Web has significantly increased with the emergence of microblogs. In this paper we present a novel method for sentiment analysis of a text that allows the recognition of opinions in microblogs which are connected to a particular target or an entity. This method differs from other approaches in utilizing appraisal theory, which we employ for the analysis of microblog posts. The results of the experiments we performed on Twitter showed that our method improves sentiment classification and is feasible even for such specific content as presented on microblogs.", "title": "" }, { "docid": "6e4f0a770fe2a34f99957f252110b6bd", "text": "Universal Dependencies (UD) provides a cross-linguistically uniform syntactic representation, with the aim of advancing multilingual applications of parsing and natural language understanding. Reddy et al. (2016) recently developed a semantic interface for (English) Stanford Dependencies, based on the lambda calculus. In this work, we introduce UDEPLAMBDA, a similar semantic interface for UD, which allows mapping natural language to logical forms in an almost language-independent framework. We evaluate our approach on semantic parsing for the task of question answering against Freebase. To facilitate multilingual evaluation, we provide German and Spanish translations of the WebQuestions and GraphQuestions datasets. Results show that UDEPLAMBDA outperforms strong baselines across languages and datasets. For English, it achieves the strongest result to date on GraphQuestions, with competitive results on WebQuestions.", "title": "" }, { "docid": "95d767d1b9a2ba2aecdf26443b3dd4af", "text": "Advanced sensing and measurement techniques are key technologies to realize a smart grid. The giant magnetoresistance (GMR) effect has revolutionized the fields of data storage and magnetic measurement. In this work, a design of a GMR current sensor based on a commercial analog GMR chip for applications in a smart grid is presented and discussed. Static, dynamic and thermal properties of the sensor were characterized. The characterizations showed that in the operation range from 0 to ±5 A, the sensor had a sensitivity of 28 mV·A(-1), linearity of 99.97%, maximum deviation of 2.717%, frequency response of −1.5 dB at 10 kHz current measurement, and maximum change of the amplitude response of 0.0335%·°C(-1) with thermal compensation. In the distributed real-time measurement and monitoring of a smart grid system, the GMR current sensor shows excellent performance and is cost effective, making it suitable for applications such as steady-state and transient-state monitoring. With the advantages of having a high sensitivity, high linearity, small volume, low cost, and simple structure, the GMR current sensor is promising for the measurement and monitoring of smart grids.", "title": "" }, { "docid": "830240e9425b93c354cb9a2be0378961", "text": "Systems for structured knowledge extraction and inference have made giant strides in the last decade. 
Starting from shallow linguistic tagging and coarse-grained recognition of named entities at the resolution of people, places, organizations, and times, modern systems link billions of pages of unstructured text with knowledge graphs having hundreds of millions of entities belonging to tens of thousands of types, and related by tens of thousands of relations. Via deep learning, systems build continuous representations of words, entities, types, and relations, and use these to continually discover new facts to add to the knowledge graph, and support search systems that go far beyond page-level \"ten blue links''. We will present a comprehensive catalog of the best practices in traditional and deep knowledge extraction, inference and search. We will trace the development of diverse families of techniques, explore their interrelationships, and point out various loose ends.", "title": "" }, { "docid": "4720a84220e37eca1d0c75697f247b23", "text": "We describe a form of nonlinear decomposition that is well-suited for efficient encoding of natural signals. Signals are initially decomposed using a bank of linear filters. Each filter response is then rectified and divided by a weighted sum of rectified responses of neighboring filters. We show that this decomposition, with parameters optimized for the statistics of a generic ensemble of natural images or sounds, provides a good characterization of the nonlinear response properties of typical neurons in primary visual cortex or auditory nerve, respectively. These results suggest that nonlinear response properties of sensory neurons are not an accident of biological implementation, but have an important functional role.", "title": "" }, { "docid": "26cc16cfb31222c7f800ac75a9cbbd13", "text": "In the WZ factorization the outermost parallel loop decreases the number of iterations executed at each step and this changes the amount of parallelism in each step. The aim of the paper is to present four strategies of parallelizing nested loops on multicore architectures on the example of the WZ factorization.", "title": "" }, { "docid": "227a6e820b101073d5621b2f399883a5", "text": "Studying the quality requirements (aka Non-Functional Requirements (NFR)) of a system is crucial in Requirements Engineering. Many software projects fail because of neglecting or failing to incorporate the NFR during the software life development cycle. This paper focuses on analyzing the importance of the quality requirements attributes in software effort estimation models based on the Desharnais dataset. The Desharnais dataset is a collection of eighty one software projects of twelve attributes developed by a Canadian software house. The analysis includes studying the influence of each of the quality requirements attributes, as well as the influence of all quality requirements attributes combined when calculating software effort using regression and Artificial Neural Network (ANN) models. The evaluation criteria used in this investigation include the Mean of the Magnitude of Relative Error (MMRE), the Prediction Level (PRED), Root Mean Squared Error (RMSE), Mean Error and the Coefficient of determination (R). Results show that the quality attribute “Language” is the most statistically significant when calculating software effort. Moreover, if all quality requirements attributes are eliminated in the training stage and software effort is predicted based on software size only, the value of the error (MMRE) is doubled. 
KeywordsNon-Functional Requirements, Quality Attributes, Software Effort Estimation, Desharnais Dataset", "title": "" }, { "docid": "3d04155f68912f84b02788f93e9da74c", "text": "Data partitioning significantly improves the query performance in distributed database systems. A large number of techniques have been proposed to efficiently partition a dataset for a given query workload. However, many modern analytic applications involve ad-hoc or exploratory analysis where users do not have a representative query workload upfront. Furthermore, workloads change over time as businesses evolve or as analysts gain better understanding of their data. Static workload-based data partitioning techniques are therefore not suitable for such settings. In this paper, we describe the demonstration of Amoeba, a distributed storage system which uses adaptive multi-attribute data partitioning to efficiently support ad-hoc as well as recurring queries. Amoeba applies a robust partitioning algorithm such that ad-hoc queries on all attributes have similar performance gains. Thereafter, Amoeba adaptively repartitions the data based on the observed query sequence, i.e., the system improves over time. All along Amoeba offers both adaptivity (i.e., adjustments according to workload changes) as well as robustness (i.e., avoiding performance spikes due to workload changes). We propose to demonstrate Amoeba on scenarios from an internet-ofthings startup that tracks user driving patterns. We invite the audience to interactively fire fast ad-hoc queries, observe multi-dimensional adaptivity, and play with a robust/reactive knob in Amoeba. The web front end displays the layout changes, runtime costs, and compares it to Spark with both default and workload-aware partitioning.", "title": "" }, { "docid": "66fce3b6c516a4fa4281d19d6055b338", "text": "This paper presents the mechatronic design and experimental validation of a novel powered knee-ankle orthosis for testing torque-driven rehabilitation control strategies. The modular actuator of the orthosis is designed with a torque dense motor and a custom low-ratio transmission (24:1) to provide mechanical transparency to the user, allowing them to actively contribute to their joint kinematics during gait training. The 4.88 kg orthosis utilizes frameless components and light materials, such as aluminum alloy and carbon fiber, to reduce its mass. A human subject experiment demonstrates accurate torque control with high output torque during stance and low backdrive torque during swing at fast walking speeds. This work shows that backdrivability, precise torque control, high torque output, and light weight can be achieved in a powered orthosis without the high cost and complexity of variable transmissions, clutches, and/or series elastic components.", "title": "" }, { "docid": "be5b0dd659434e77ce47034a51fd2767", "text": "Current obstacles in the study of social media marketing include dealing with massive data and real-time updates have motivated to contribute solutions that can be adopted for viral marketing. Since information diffusion and social networks are the core of viral marketing, this article aims to investigate the constellation of diffusion methods for viral marketing. Studies on diffusion methods for viral marketing have applied different computational methods, but a systematic investigation of these methods has limited. Most of the literature have focused on achieving objectives such as influence maximization or community detection. 
Therefore, this article aims to conduct an in-depth review of works related to diffusion for viral marketing. Viral marketing has applied to business-to-consumer transactions but has seen limited adoption in business-to-business transactions. The literature review reveals a lack of new diffusion methods, especially in dynamic and large-scale networks. It also offers insights into applying various mining methods for viral marketing. It discusses some of the challenges, limitations, and future research directions of information diffusion for viral marketing. The article also introduces a viral marketing information diffusion model. The proposed model attempts to solve the dynamicity and large-scale data of social networks by adopting incremental clustering and a stochastic differential equation for business-to-business transactions. Keywords—information diffusion; viral marketing; social media marketing; social networks", "title": "" }, { "docid": "5d704992b738a084215f520ed8074d6b", "text": "Recognizing and generating paraphrases is an important component in many natural language processing applications. A wellestablished technique for automatically extracting paraphrases leverages bilingual corpora to find meaning-equivalent phrases in a single language by “pivoting” over a shared translation in another language. In this paper we revisit bilingual pivoting in the context of neural machine translation and present a paraphrasing model based purely on neural networks. Our model represents paraphrases in a continuous space, estimates the degree of semantic relatedness between text segments of arbitrary length, or generates candidate paraphrases for any source input. Experimental results across tasks and datasets show that neural paraphrases outperform those obtained with conventional phrase-based pivoting approaches.", "title": "" } ]
scidocsrr
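Rows with this layout are commonly flattened into (query, passage, label) pairs, for example when training a cross-encoder reranker. The sketch below illustrates that convention against the row shown above; the labeling scheme (1 for positive passages, 0 for negative passages) is an assumption for illustration, not something stated by this preview.

```python
from typing import Dict, Iterator, Tuple

def row_to_pairs(row: Dict) -> Iterator[Tuple[str, str, int]]:
    """Yield (query, passage_text, label) pairs from one dataset row.

    Passages from positive_passages get label 1, those from
    negative_passages get label 0 (an illustrative convention).
    """
    query = row["query"]
    for passage in row["positive_passages"]:
        yield query, passage["text"], 1
    for passage in row["negative_passages"]:
        yield query, passage["text"], 0

# Minimal hand-built row mirroring the first example above (texts truncated).
example_row = {
    "query": "TCGA Expedition: A Data Acquisition and Management System for TCGA Data",
    "positive_passages": [
        {"docid": "cc3788c4690446efe9a0a3eea38ee832",
         "text": "Papillary thyroid carcinoma (PTC) is the most common type of thyroid cancer. ...",
         "title": ""}
    ],
    "negative_passages": [
        {"docid": "f9eed4f99d70c51dc626a61724540d3c",
         "text": "A soft-start circuit with soft-recovery function for DC-DC converters ...",
         "title": ""}
    ],
}

pairs = list(row_to_pairs(example_row))
print(len(pairs))                 # 2
print(pairs[0][2], pairs[1][2])   # 1 0
```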
c2a66311a52540d3d774a90ba0a62b49
Simpler but More Accurate Semantic Dependency Parsing
[ { "docid": "a5052a27ebbfb07b02fa18b3d6bff6fc", "text": "Popular techniques for domain adaptation such as the feature augmentation method of Daumé III (2009) have mostly been considered for sparse binary-valued features, but not for dense realvalued features such as those used in neural networks. In this paper, we describe simple neural extensions of these techniques. First, we propose a natural generalization of the feature augmentation method that uses K + 1 LSTMs where one model captures global patterns across all K domains and the remaining K models capture domain-specific information. Second, we propose a novel application of the framework for learning shared structures by Ando and Zhang (2005) to domain adaptation, and also provide a neural extension of their approach. In experiments on slot tagging over 17 domains, our methods give clear performance improvement over Daumé III (2009) applied on feature-rich CRFs.", "title": "" }, { "docid": "4207c7f69d65c5b46abce85a369dada1", "text": "We present a novel approach, called selectional branching, which uses confidence estimates to decide when to employ a beam, providing the accuracy of beam search at speeds close to a greedy transition-based dependency parsing approach. Selectional branching is guaranteed to perform a fewer number of transitions than beam search yet performs as accurately. We also present a new transition-based dependency parsing algorithm that gives a complexity of O(n) for projective parsing and an expected linear time speed for non-projective parsing. With the standard setup, our parser shows an unlabeled attachment score of 92.96% and a parsing speed of 9 milliseconds per sentence, which is faster and more accurate than the current state-of-the-art transitionbased parser that uses beam search.", "title": "" } ]
[ { "docid": "8ef80a3ae74ab4d53bad33aa79d469fd", "text": "One of the most prolific topics of research in the field of computer vision is pattern detection in images. A large number of practical applications for face detection exist. Contemporary work even suggests that any of the results from specialized detectors can be approximated by using fast detection classifiers. In this project, we developed an algorithm which detected faces from the input image with a lower false detection rate and lower computation cost using the ensemble effects of computer vision concepts. This algorithm utilized the concepts of recognizing skin color, filtering the binary image, detecting blobs and extracting different features from the face. The result is supported by the statistics obtained from calculating the parameters defining the parts of the face. The project also implements the highly powerful concept of Support Vector Machine that is used for the classification of images into face and non-face class. This classification is based on the training data set and indicators of luminance value, chrominance value, saturation value, elliptical value and eye and mouth map values.", "title": "" }, { "docid": "0eb659fd66ad677f90019f7214aae7e8", "text": "In this article a relational database schema for a bibliometric database is developed. After the introduction explaining the motivation to use relational databases in bibliometrics, an overview of the related literature is given. A review of typical bibliometric questions serves as an informal requirement analysis. The database schema is developed as an entity-relationship diagram using the structural information typically found in scientific articles. Several SQL queries for the tasks presented in the requirement analysis show the usefulness of the developed database schema.", "title": "" }, { "docid": "96412f11cdde09eddaf4397d2573278f", "text": "The repeated lifting of heavy weights has been identified as a risk factor for low back pain (LBP). Whether squat lifting leads to lower spinal loads than stoop lifting and whether lifting a weight laterally results in smaller forces than lifting the same weight in front of the body remain matters of debate. Instrumented vertebral body replacements (VBRs) were used to measure the in vivo load in the lumbar spine in three patients at level L1 and in one patient at level L3. Stoop lifting and squat lifting were compared in 17 measuring sessions, in which both techniques were performed a total of 104 times. The trunk inclination and amount of knee bending were simultaneously estimated from recorded images. Compared with the aforementioned lifting tasks, the patients additionally lifted a weight laterally with one hand 26 times. Only a small difference (4%) in the measured resultant force was observed between stoop lifting and squat lifting, although the knee-bending angle (stoop 10°, squat 45°) and trunk inclination (stoop 52°, squat 39°) differed considerably at the time points of maximal resultant forces. Lifting a weight laterally caused 14% less implant force on average than lifting the same weight in front of the body. The current in vivo biomechanical study does not provide evidence that spinal loads differ substantially between stoop and squat lifting. 
The anterior-posterior position of the lifted weight relative to the spine appears to be crucial for spinal loading.", "title": "" }, { "docid": "3171587b5b4554d151694f41206bcb4e", "text": "Embedded systems are ubiquitous in society and can contain information that could be used in criminal cases for example in a serious road traffic accident where the car management systems could provide vital forensic information concerning the engine speed etc. A critical review of a number of methods and procedures for the analysis of embedded systems were compared against a ‘standard’ methodology for use in a Forensic Computing Investigation. A Unified Forensic Methodology (UFM) has been developed that is forensically sound and capable of dealing with the analysis of a wide variety of Embedded Systems.", "title": "" }, { "docid": "70d8886cdd027663856565cbe8707a97", "text": "Recent studies have highlighted the vulnerability of deep neural networks (DNNs) to adversarial examples a visually indistinguishable adversarial image can easily be crafted to cause a well-trained model to misclassify. Existing methods for crafting adversarial examples are based on L2 and L∞ distortion metrics. However, despite the fact that L1 distortion accounts for the total variation and encourages sparsity in the perturbation, little has been developed for crafting L1-based adversarial examples. In this paper, we formulate the process of attacking DNNs via adversarial examples as an elastic-net regularized optimization problem. Our elastic-net attacks to DNNs (EAD) feature L1oriented adversarial examples and include the state-of-the-art L2 attack as a special case. Experimental results on MNIST, CIFAR10 and ImageNet show that EAD can yield a distinct set of adversarial examples with small L1 distortion and attains similar attack performance to the state-of-the-art methods in different attack scenarios. More importantly, EAD leads to improved attack transferability and complements adversarial training for DNNs, suggesting novel insights on leveraging L1 distortion in adversarial machine learning and security implications of DNNs.", "title": "" }, { "docid": "2028a642f0965a1cdd8c61c97153cee5", "text": "Design procedures for three-stage CMOS operational transconductance amplifiers employing nested-Miller frequency compensation are presented in this paper. After describing the basic methodology on a Class-A topology, some modifications, to increase swing, slew-rate and current drive capability, are subsequently discussed for a Class-AB solution. The approaches developed are simple as they do not introduce unnecessary circuit constraints and yield accurate results. They are hence suited for a pencil-and-paper design, but can be easily integrated into an analog knowledge-based computer-aided design tool. Experimental prototypes, designed in a 0.35-mum technology by following the proposed procedures, were fabricated and tested. Measurement results were found in close agreement with the target specifications", "title": "" }, { "docid": "c0315ef3bcc21723131d9b2687a5d5d1", "text": "Network covert timing channels embed secret messages in legitimate packets by modulating interpacket delays. Unfortunately, such channels are normally implemented in higher network layers (layer 3 or above) and easily detected or prevented. However, access to the physical layer of a network stack allows for timing channels that are virtually invisible: Sub-microsecond modulations that are undetectable by software endhosts. 
Therefore, covert timing channels implemented in the physical layer can be a serious threat to the security of a system or a network. In fact, we empirically demonstrate an effective covert timing channel over nine routing hops and thousands of miles over the Internet (the National Lambda Rail). Our covert timing channel works with cross traffic, less than 10% bit error rate, which can be masked by forward error correction, and a covert rate of 81 kilobits per second. Key to our approach is access and control over every bit in the physical layer of a 10 Gigabit network stack (a bit is 100 picoseconds wide at 10 gigabit per seconds), which allows us to modulate and interpret interpacket spacings at sub-microsecond scale. We discuss when and how a timing channel in the physical layer works, how hard it is to detect such a channel, and what is required to do so.", "title": "" }, { "docid": "11cfcc959d0839f0150cc7353eb56327", "text": "In subspace clustering, a group of data points belonging to a union of subspaces are assigned membership to their respective subspaces. This paper presents a new approach dubbed Innovation Pursuit (iPursuit) to the problem of subspace clustering using a new geometrical idea whereby subspaces are identified based on their relative novelties. We present two frameworks in which the idea of innovation pursuit is used to distinguish the subspaces. Underlying the first framework is an iterative method that finds the subspaces consecutively by solving a series of simple linear optimization problems, each searching for a direction of innovation in the span of the data potentially orthogonal to all subspaces except for the one to be identified in one step of the algorithm. A detailed mathematical analysis is provided establishing sufficient conditions for iPursuit to correctly cluster the data. The proposed approach can provably yield exact clustering even when the subspaces have significant intersections. It is shown that the complexity of the iterative approach scales only linearly in the number of data points and subspaces, and quadratically in the dimension of the subspaces. The second framework integrates iPursuit with spectral clustering to yield a new variant of spectral-clustering-based algorithms. The numerical simulations with both real and synthetic data demonstrate that iPursuit can often outperform the state-of-the-art subspace clustering algorithms, more so for subspaces with significant intersections, and that it significantly improves the state-of-the-art result for subspace-segmentation-based face clustering.", "title": "" }, { "docid": "e7cd57b352c86505304c47cda31e9177", "text": "We introduce a new shape descriptor, the shape context , for measuring shape similarity and recovering point correspondences. The shape context describes the coarse arrangement of the shape with respect to a point inside or on the boundary of the shape. We use the shape context as a vector-valued attribute in a bipartite graph matching framework. Our proposed method makes use of a relatively small number of sample points selected from the set of detected edges; no special landmarks or keypoints are necessary. Tolerance and/or invariance to common image transformations are available within our framework. 
Using examples involving both silhouettes and edge images, we demonstrate how the solution to the graph matching problem provides us with correspondences and a dissimilarity score that can be used for object recognition and similarity-based retrieval.", "title": "" }, { "docid": "8cc5229b417117db652bde55766f11bb", "text": "We develop a method for the stabilization of mechanical systems with symmetry based on the technique of controlled Lagrangians. The procedure involves making structured modifications to the Lagrangian for the uncontrolled system, thereby constructing the controlled Lagrangian. The Euler–Lagrange equations derived from the controlled Lagrangian describe the closed-loop system, where new terms in these equations are identified with control forces. Since the controlled system is Lagrangian by construction, energy methods can be used to find control gains that yield closed-loop stability. In this paper we usekinetic shapingto preserve symmetry and only stabilize systems modulo the symmetry group. In the sequel to this paper (Part II), we extend the technique to includepotential shaping and we achieve stabilization in the full phase space. The procedure is demonstrated for several underactuated balance problems, including the stabilization of an inverted planar pendulum on a cart moving on a line and an inverted spherical pendulum on a cart moving in the plane.", "title": "" }, { "docid": "d880349c2760a8cd71d86ea3212ba1f0", "text": "As a class of hard combinatorial optimization problems, the school bus routing problem has received considerable attention in the last decades. For a multi-school system, given the bus trips for each school, the school bus scheduling problem aims at optimizing bus schedules to serve all the trips within the school time windows. In this paper, we propose two approaches for solving the bi-objective school bus scheduling problem: an exact method of mixed integer programming (MIP) and a metaheuristic method which combines simulated annealing with local search. We develop MIP formulations for homogenous and heterogeneous fleet problems respectively and solve the models by MIP solver CPLEX. The bus type-based formulation for heterogeneous fleet problem reduces the model complexity in terms of the number of decision variables and constraints. The metaheuristic method is a two-stage framework for minimizing the number of buses to be used as well as the total travel distance of buses. We evaluate the proposed MIP and the metaheuristic method on two benchmark datasets, showing that on both instances, our metaheuristic method significantly outperforms the respective state-of-the-art methods.", "title": "" }, { "docid": "c117da74c302d9e108970854d79e54fd", "text": "Entailment recognition is a primary generic task in natural language inference, whose focus is to detect whether the meaning of one expression can be inferred from the meaning of the other. Accordingly, many NLP applications would benefit from high coverage knowledgebases of paraphrases and entailment rules. To this end, learning such knowledgebases from the Web is especially appealing due to its huge size as well as its highly heterogeneous content, allowing for a more scalable rule extraction of various domains. However, the scalability of state-of-the-art entailment rule acquisition approaches from the Web is still limited. We present a fully unsupervised learning algorithm for Webbased extraction of entailment relations. 
We focus on increased scalability and generality with respect to prior work, with the potential of a large-scale Web-based knowledgebase. Our algorithm takes as its input a lexical–syntactic template and searches the Web for syntactic templates that participate in an entailment relation with the input template. Experiments show promising results, achieving performance similar to a state-of-the-art unsupervised algorithm, operating over an offline corpus, but with the benefit of learning rules for different domains with no additional effort.", "title": "" }, { "docid": "8dfeae1304eb97bc8f7d872af7aaa795", "text": "Encouraged by the recent progress in pedestrian detection, we investigate the gap between current state-of-the-art methods and the \"perfect single frame detector\". We enable our analysis by creating a human baseline for pedestrian detection (over the Caltech dataset), and by manually clustering the recurrent errors of a top detector. Our results characterise both localisation and background-versusforeground errors. To address localisation errors we study the impact of training annotation noise on the detector performance, and show that we can improve even with a small portion of sanitised training data. To address background/foreground discrimination, we study convnets for pedestrian detection, and discuss which factors affect their performance. Other than our in-depth analysis, we report top performance on the Caltech dataset, and provide a new sanitised set of training and test annotations.", "title": "" }, { "docid": "51bed6a9474603f79f44ebfc4815f33c", "text": "The adoption of metamaterials in the development of terahertz (THz) antennas has led to tremendous progresses in the THz field. In this paper, a reconfigurable THz patch antenna based on graphene is presented, whose resonance frequency can be changed depending on the applied voltage. By using an array of split ring resonators (SRR) also made of graphene, both bandwidth and radiation properties are enhanced; it is found that both the resonance frequency and bandwidth change with the applied voltage.", "title": "" }, { "docid": "799acb6577fc2ab8fa477a1ce30d37a9", "text": "Breast feeding practice especially exclusive breast feeding (EBF) is a major determinant of child growth and development. In Tanzania, most women breastfeed their infants for long periods, but many introduce alternative feeding too early in life. The objective of this study was to determine factors affecting EBF and the relationship between feeding practices and the nutritional status of infants. This cross-sectional survey, using a semi-structured questionnaire, was conducted in Morogoro Municipality in Tanzania. The study involved lactating women recruited from five randomly selected health facilities. Demographic, clinical, knowledge and practices related to infant feeding as well as infant anthropometric information were collected. Infant nutritional status was assessed based on weight-for-age, height-for-age and weight- for- height. There were wide variations in knowledge and practice of breastfeeding among women. Majority (92%) of the respondents gave colostrums to infants although more than 50% did not know its benefits. Eight percent of the respondents discarded colostrums on the account that it is not good for their neonates. Only 23.1% of the respondents thought that infants should be breastfed exclusively during the first six months of infancy. 
Ninety-eight percent of infants < 1 month of age received breast milk only, compared with 28.5% of infants aged 2-3 months and 22.3% among those who were above 3 months of age. No child in the ≥ 4 months old was exclusively breastfed. Over 80% of the infants had normal weights, 13% were stunted and 8% wasted. EBF was associated with higher scores for height- for- age Z (P < 0.05) and weight- for- height Z (P < 0.01). Age, education level and occupation of respondents were important predictors of EBF. Overall, breast feeding practices in the study population were largely suboptimal. As a result, considerable proportions of children had poor health indicators. Thus, correct breastfeeding practices should be supported and promoted to improve the well-being of children.", "title": "" }, { "docid": "32f6db1bf35da397cd61d744a789d49c", "text": "Mushroom poisoning is the main cause of mortality in food poisoning incidents in China. Although some responsible mushroom species have been identified, some were identified inaccuratly. This study investigated and analyzed 102 mushroom poisoning cases in southern China from 1994 to 2012, which involved 852 patients and 183 deaths, with an overall mortality of 21.48 %. The results showed that 85.3 % of poisoning cases occurred from June to September, and involved 16 species of poisonous mushroom: Amanita species (A. fuliginea, A. exitialis, A. subjunquillea var. alba, A. cf. pseudoporphyria, A. kotohiraensis, A. neoovoidea, A. gymnopus), Galerina sulciceps, Psilocybe samuiensis, Russula subnigricans, R. senecis, R. japonica, Chlorophyllum molybdites, Paxillus involutus, Leucocoprinus cepaestipes and Pulveroboletus ravenelii. Six species (A. subjunquillea var. alba, A. cf. pseudoporphyria, A. gymnopus, R. japonica, Psilocybe samuiensis and Paxillus involutus) are reported for the first time in poisoning reports from China. Psilocybe samuiensis is a newly recorded species in China. The genus Amanita was responsible for 70.49 % of fatalities; the main lethal species were A. fuliginea and A. exitialis. Russula subnigricans caused 24.59 % of fatalities, and five species showed mortality >20 % (A. fuliginea, A. exitialis, A. subjunquillea var. alba, R. subnigricans and Paxillus involutus). Mushroom poisoning symptoms were classified from among the reported clinical symptoms. Seven types of mushroom poisoning symptoms were identified for clinical diagnosis and treatment in China, including gastroenteritis, acute liver failure, acute renal failure, psychoneurological disorder, hemolysis, rhabdomyolysis and photosensitive dermatitis.", "title": "" }, { "docid": "bff9eec742aa44d4dd585f04806d5018", "text": "In this work a machine vision system capable of analysing underwater videos for detecting, tracking and counting fish is presented. The real-time videos, collected near the Ken-Ding sub-tropical coral reef waters are managed by EcoGrid, Taiwan and are barely analysed by marine biologists. The video processing system consists of three subsystems: the video texture analysis, fish detection and tracking modules. Fish detection is based on two algorithms computed independently, whose results are combined in order to obtain a more accurate outcome. The tracking was carried out by the application of the CamShift algorithm that enables the tracking of objects whose numbers may vary over time. 
Unlike existing fish-counting methods, our approach provides a reliable method in which the fish number is computed in unconstrained environments and under several scenarios (murky water, algae on camera lens, moving plants, low contrast, etc.). The proposed approach was tested with 20 underwater videos, achieving an overall accuracy as high as 85%.", "title": "" }, { "docid": "fa62c54cf22c7d0822c7a4171a3d8bcd", "text": "Interaction with robot systems for specification of manufacturing tasks and motions needs to be simple, to enable wide-spread use of robots in SMEs. In the best case, existing practices from manual work could be used, to smoothly let current employees start using robot technology as a natural part of their work. Our aim is to simplify the robot programming task by allowing the user to simply make technical drawings on a sheet of paper. Craftsman use paper and raw sketches for several situations; to share ideas, to get a better imagination or to remember the customer situation. Currently these sketches have either to be interpreted by the worker when producing the final product by hand, or transferred into CAD file using an according tool. The former means that no automation is included, the latter means extra work and much experience in using the CAD tool. Our approach is to use the digital pen and paper from Anoto as input devices for SME robotic tasks, thereby creating simpler and more user friendly alternatives for programming, parameterization and commanding actions. To this end, the basic technology has been investigated and fully working prototypes have been developed to explore the possibilities and limitation in the context of typical SME applications. Based on the encouraging experimental results, we believe that drawings on digital paper will, among other means of human-robot interaction, play an important role in manufacturing SMEs in the future. Index Terms — CAD, Human machine interfaces, Industrial Robots, Robot programming.", "title": "" }, { "docid": "b51e706aacdf95819e5f6747f7dd6b12", "text": "The goal of this research is to develop a functional adhesive robot skin with micro suction cups, which realizes two new functions: adaptive adhesion to rough/curved surfaces and anisotropic adhesion. Both functions are realized by integration of asymmetric micro cups. This skin can be applied to various robot mechanisms such as robot hands, wall-climbing robot feet and so on as a kind of robot skins. This paper especially reports the concept of this adhesive robot skin and its fundamental characteristics. The experiments show the developed skin realizes novel characteristics, high adhesion even on rough surface and anisotropic adhesion.", "title": "" }, { "docid": "6ef04225b5f505a48127594a12fef112", "text": "For differential operators of order 2, this paper presents a new method that combines generalized exponents to find those solutions that can be represented in terms of Bessel functions.", "title": "" } ]
scidocsrr
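The `subset` column (7 distinct values, with `scidocsrr` appearing in the rows shown here) makes it easy to slice the data per source. A short sketch, again assuming a placeholder repository id:

```python
from collections import Counter
from datasets import load_dataset

# Placeholder repository id, as in the loading sketch above.
ds = load_dataset("your-org/your-retrieval-dataset", split="train")

# Keep only rows from one subset (here "scidocsrr", as in the examples above).
scidocsrr_rows = ds.filter(lambda r: r["subset"] == "scidocsrr")
print(len(scidocsrr_rows), "rows in the scidocsrr subset")

# Row counts across all 7 subset values.
print(Counter(ds["subset"]).most_common())
```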
ebfd1d12f8f1dc683b8f95c46cb5881d
PyMT: a post-WIMP multi-touch user interface toolkit
[ { "docid": "b992e02ee3366d048bbb4c30a2bf822c", "text": "Structured graphics models such as Scalable Vector Graphics (SVG) enable designers to create visually rich graphics for user interfaces. Unfortunately current programming tools make it difficult to implement advanced interaction techniques for these interfaces. This paper presents the Hierarchical State Machine Toolkit (HsmTk), a toolkit targeting the development of rich interactions. The key aspect of the toolkit is to consider interactions as first-class objects and to specify them with hierarchical state machines. This approach makes the resulting behaviors self-contained, easy to reuse and easy to modify. Interactions can be attached to graphical elements without knowing their detailed structure, supporting the parallel refinement of the graphics and the interaction.", "title": "" }, { "docid": "f69ba8c401cd61057888dfa023bfee30", "text": "Since its introduction, the Nintendo Wii remote has become one of the world's most sophisticated and common input devices. Combining its impressive capability with a low cost and high degree of accessibility make it an ideal platform for exploring a variety of interaction research concepts. The author describes the technology inside the Wii remote, existing interaction techniques, what's involved in creating custom applications, and several projects ranging from multiobject tracking to spatial augmented reality that challenge the way its developers meant it to be used.", "title": "" } ]
[ { "docid": "9b9a2a9695f90a6a9a0d800192dd76f6", "text": "Due to high competition in today's business and the need for satisfactory communication with customers, companies understand the inevitable necessity to focus not only on preventing customer churn but also on predicting their needs and providing the best services for them. The purpose of this article is to predict future services needed by wireless users, with data mining techniques. For this purpose, the database of customers of an ISP in Shiraz, which logs the customer usage of wireless internet connections, is utilized. Since internet service has three main factors to define (Time, Speed, Traffics) we predict each separately. First, future service demand is predicted by implementing a simple Recency, Frequency, Monetary (RFM) as a basic model. Other factors such as duration from first use, slope of customer's usage curve, percentage of activation, Bytes In, Bytes Out and the number of retries to establish a connection and also customer lifetime value are considered and added to RFM model. Then each one of R, F, M criteria is alternately omitted and the result is evaluated. Assessment is done through analysis node which determines the accuracy of evaluated data among partitioned data. The result shows that CART and C5.0 are the best algorithms to predict future services in this case. As for the features, depending upon output of each features, duration and transfer Bytes are the most important after RFM. An ISP may use the model discussed in this article to meet customers' demands and ensure their loyalty and satisfaction.", "title": "" }, { "docid": "3b2aa97c0232857dffa971d9c040d430", "text": "This paper provides a critical analysis of Mobile Learning projects published before the end of 2007. The review uses a Mobile Learning framework to evaluate and categorize 102 Mobile Learning projects, and to briefly introduce exemplary projects for each category. All projects were analysed with the criteria: context, tools, control, communication, subject and objective. Although a significant number of projects have ventured to incorporate the physical context into the learning experience, few projects include a socializing context. Tool support ranges from pure content delivery to content construction by the learners. Although few projects explicitly discuss the Mobile Learning control issues, one can find all approaches from pure teacher control to learner control. Despite the fact that mobile phones initially started as a communication device, communication and collaboration play a surprisingly small role in Mobile Learning projects. Most Mobile Learning projects support novices, although one might argue that the largest potential is supporting advanced learners. All results show the design space and reveal gaps in Mobile Learning research.", "title": "" }, { "docid": "903d00a02846450ebd18a8ce865889b5", "text": "The ability to solve probability word problems such as those found in introductory discrete mathematics textbooks, is an important cognitive and intellectual skill. In this paper, we develop a two-step endto-end fully automated approach for solving such questions that is able to automatically provide answers to exercises about probability formulated in natural language. In the first step, a question formulated in natural language is analysed and transformed into a highlevel model specified in a declarative language. In the second step, a solution to the high-level model is computed using a probabilistic programming system. 
On a dataset of 2160 probability problems, our solver is able to correctly answer 97.5% of the questions given a correct model. On the end-toend evaluation, we are able to answer 12.5% of the questions (or 31.1% if we exclude examples not supported by design).", "title": "" }, { "docid": "36e4a38e31c7715cd8f7754076b89223", "text": "We investigate the effectiveness of semantic generalizations/classifications for capturing the regularities of the behavior of verbs in terms of their metaphoricity. Starting from orthographic word unigrams, we experiment with various ways of defining semantic classes for verbs (grammatical, resource-based, distributional) and measure the effectiveness of these classes for classifying all verbs in a running text as metaphor or non metaphor.", "title": "" }, { "docid": "450a0ffcd35400f586e766d68b75cc98", "text": "While there has been a success in 2D human pose estimation with convolutional neural networks (CNNs), 3D human pose estimation has not been thoroughly studied. In this paper, we tackle the 3D human pose estimation task with end-to-end learning using CNNs. Relative 3D positions between one joint and the other joints are learned via CNNs. The proposed method improves the performance of CNN with two novel ideas. First, we added 2D pose information to estimate a 3D pose from an image by concatenating 2D pose estimation result with the features from an image. Second, we have found that more accurate 3D poses are obtained by combining information on relative positions with respect to multiple joints, instead of just one root joint. Experimental results show that the proposed method achieves comparable performance to the state-of-the-art methods on Human 3.6m dataset.", "title": "" }, { "docid": "be95384cb710593dd7c620becff334be", "text": "1College of Mathematics and Informatics, Fujian Normal University, Fuzhou 350117, Fujian, China 2College of Computer and Information, Hohai University, Nanjing 211100, Jiangsu, China 3Jiangsu Key Laboratory of Big Data Security & Intelligent Processing, Nanjing University of Posts and Telecommunications, China 4Mathematics and Computer Science Department, Gannan Normal University, Ganzhou 341000, Jiangxi, China", "title": "" }, { "docid": "23bf81699add38814461d5ac3e6e33db", "text": "This paper examined a steering behavior based fatigue monitoring system. The advantages of using steering behavior for detecting fatigue are that these systems measure continuously, cheaply, non-intrusively, and robustly even under extremely demanding environmental conditions. The expected fatigue induced changes in steering behavior are a pattern of slow drifting and fast corrective counter steering. Using advanced signal processing procedures for feature extraction, we computed 3 feature set in the time, frequency and state space domain (a total number of 1251 features) to capture fatigue impaired steering patterns. Each feature set was separately fed into 5 machine learning methods (e.g. Support Vector Machine, K-Nearest Neighbor). The outputs of each single classifier were combined to an ensemble classification value. Finally we combined the ensemble values of 3 feature subsets to a of meta-ensemble classification value. To validate the steering behavior analysis, driving samples are taken from a driving simulator during a sleep deprivation study (N=12). 
We yielded a recognition rate of 86.1% in classifying slight from strong fatigue.", "title": "" }, { "docid": "49b0ba019f6f968804608aeacec2a959", "text": "In this paper, we introduce a novel problem of audio-visual event localization in unconstrained videos. We define an audio-visual event as an event that is both visible and audible in a video segment. We collect an Audio-Visual Event (AVE) dataset to systemically investigate three temporal localization tasks: supervised and weakly-supervised audio-visual event localization, and cross-modality localization. We develop an audio-guided visual attention mechanism to explore audio-visual correlations, propose a dual multimodal residual network (DMRN) to fuse information over the two modalities, and introduce an audio-visual distance learning network to handle the cross-modality localization. Our experiments support the following findings: joint modeling of auditory and visual modalities outperforms independent modeling, the learned attention can capture semantics of sounding objects, temporal alignment is important for audio-visual fusion, the proposed DMRN is effective in fusing audio-visual features, and strong correlations between the two modalities enable cross-modality localization.", "title": "" }, { "docid": "a81b08428081cd15e7c705d5a6e79a6f", "text": "Attributes act as intermediate representations that enable parameter sharing between classes, a must when training data is scarce. We propose to view attribute-based image classification as a label-embedding problem: each class is embedded in the space of attribute vectors. We introduce a function that measures the compatibility between an image and a label embedding. The parameters of this function are learned on a training set of labeled samples to ensure that, given an image, the correct classes rank higher than the incorrect ones. Results on the Animals With Attributes and Caltech-UCSD-Birds datasets show that the proposed framework outperforms the standard Direct Attribute Prediction baseline in a zero-shot learning scenario. Label embedding enjoys a built-in ability to leverage alternative sources of information instead of or in addition to attributes, such as, e.g., class hierarchies or textual descriptions. Moreover, label embedding encompasses the whole range of learning settings from zero-shot learning to regular learning with a large number of labeled examples.", "title": "" }, { "docid": "376c9736ccd7823441fd62c46eee0242", "text": "Description: Infrastructure for Homeland Security Environments Wireless Sensor Networks helps readers discover the emerging field of low-cost standards-based sensors that promise a high order of spatial and temporal resolution and accuracy in an ever-increasing universe of applications. It shares the latest advances in science and engineering paving the way towards a large plethora of new applications in such areas as infrastructure protection and security, healthcare, energy, food safety, RFID, ZigBee, and processing. Unlike other books on wireless sensor networks that focus on limited topics in the field, this book is a broad introduction that covers all the major technology, standards, and application topics. It contains everything readers need to know to enter this burgeoning field, including current applications and promising research and development; communication and networking protocols; middleware architecture for wireless sensor networks; and security and management. 
The straightforward and engaging writing style of this book makes even complex concepts and processes easy to follow and understand. In addition, it offers several features that help readers grasp the material and then apply their knowledge in designing their own wireless sensor network systems: Examples illustrate how concepts are applied to the development and application of wireless sensor networks Detailed case studies set forth all the steps of design and implementation needed to solve real-world problems Chapter conclusions that serve as an excellent review by stressing the chapter's key concepts References in each chapter guide readers to in-depth discussions of individual topics This book is ideal for networking designers and engineers who want to fully exploit this new technology and for government employees who are concerned about homeland security. With its examples, it is appropriate for use as a coursebook for upper-level undergraduates and graduate students.", "title": "" }, { "docid": "2756c08346bfeafaed177a6bf1fde09e", "text": "Current implementations of Internet systems are very hard to be upgraded. The ossification of existing standards restricts the development of more advanced communication systems. New research initiatives, such as virtualization, software-defined radios, and software-defined networks, allow more flexibility for networks. However, until now, those initiatives have been developed individually. We advocate that the convergence of these overlying and complementary technologies can expand the amount of programmability on the network and support different innovative applications. Hence, this paper surveys the most recent research initiatives on programmable networks. We characterize programmable networks, where programmable devices execute specific code, and the network is separated into three planes: data, control, and management planes. We discuss the modern programmable network architectures, emphasizing their research issues, and, when possible, highlight their practical implementations. We survey the wireless and wired elements on the programmable data plane. Next, on the programmable control plane, we survey the divisor and controller elements. We conclude with final considerations, open issues and future challenges.", "title": "" }, { "docid": "3c1c9644df655b2a96fc593bd2982da2", "text": "We present the IIT Bombay English-Hindi Parallel Corpus. The corpus is a compilation of parallel corpora previously available in the public domain as well as new parallel corpora we collected. The corpus contains 1.49 million parallel segments, of which 694k segments were not previously available in the public domain. The corpus has been pre-processed for machine translation, and we report baseline phrase-based SMT and NMT translation results on this corpus. This corpus has been used in two editions of shared tasks at the Workshop on Asian Language Translation (2016 and 2017). The corpus is freely available for non-commercial research. To the best of our knowledge, this is the largest publicly available English-Hindi parallel corpus.", "title": "" }, { "docid": "688fde854293b0902911d967c5e0a906", "text": "As Internet users increasingly rely on social media sites like Facebook and Twitter to receive news, they are faced with a bewildering number of news media choices. For example, thousands of Facebook pages today are registered and categorized as some form of news media outlets. 
Inferring the bias (or slant) of these media pages poses a difficult challenge for media watchdog organizations that traditionally rely on con-", "title": "" }, { "docid": "cf1431a2f97fae07128ebac0c727941c", "text": "Laser microscopy has generally poor temporal resolution, caused by the serial scanning of each pixel. This is a significant problem for imaging or optically manipulating neural circuits, since neuronal activity is fast. To help surmount this limitation, we have developed a \"scanless\" microscope that does not contain mechanically moving parts. This microscope uses a diffractive spatial light modulator (SLM) to shape an incoming two-photon laser beam into any arbitrary light pattern. This allows the simultaneous imaging or photostimulation of different regions of a sample with three-dimensional precision. To demonstrate the usefulness of this microscope, we perform two-photon uncaging of glutamate to activate dendritic spines and cortical neurons in brain slices. We also use it to carry out fast (60 Hz) two-photon calcium imaging of action potentials in neuronal populations. Thus, SLM microscopy appears to be a powerful tool for imaging and optically manipulating neurons and neuronal circuits. Moreover, the use of SLMs expands the flexibility of laser microscopy, as it can substitute traditional simple fixed lenses with any calculated lens function.", "title": "" }, { "docid": "a56efa3471bb9e3091fffc6b1585f689", "text": "Rogowski current transducers combine a high bandwidth, an easy to use thin flexible coil, and low insertion impedance making them an ideal device for measuring pulsed currents in power electronic applications. Practical verification of a Rogowski transducer's ability to measure current transients due to the fastest MOSFET and IGBT switching requires a calibrated test facility capable of generating a pulse with a rise time of the order of a few 10's ns. A flexible 8-module system has been built which gives a 2000A peak current with a rise time of 40ns. The modular approach enables verification for a range of transducer coil sizes and ratings.", "title": "" }, { "docid": "40fe24e70fd1be847e9f89b82ff75b28", "text": "Multi-task learning in Convolutional Networks has displayed remarkable success in the field of recognition. This success can be largely attributed to learning shared representations from multiple supervisory tasks. However, existing multi-task approaches rely on enumerating multiple network architectures specific to the tasks at hand, that do not generalize. In this paper, we propose a principled approach to learn shared representations in ConvNets using multitask learning. Specifically, we propose a new sharing unit: \"cross-stitch\" unit. These units combine the activations from multiple networks and can be trained end-to-end. A network with cross-stitch units can learn an optimal combination of shared and task-specific representations. Our proposed method generalizes across multiple tasks and shows dramatically improved performance over baseline methods for categories with few training examples.", "title": "" }, { "docid": "ae961e9267b1571ec606347f56b0d4ca", "text": "A benchmark turbulent Backward Facing Step (BFS) airflow was studied in detail through a program of tightly coupled experimental and CFD analysis. The theoretical and experimental approaches were developed simultaneously in a “building block” approach and the results used to verify each “block”. 
Information from both CFD and experiment was used to develop confidence in the accuracy of each technique and to increase our understanding of the BFS flow.", "title": "" }, { "docid": "0fdd7f5c5cd1225567e89b456ef25ea0", "text": "In this work we propose a hierarchical approach for labeling semantic objects and regions in scenes. Our approach is reminiscent of early vision literature in that we use a decomposition of the image in order to encode relational and spatial information. In contrast to much existing work on structured prediction for scene understanding, we bypass a global probabilistic model and instead directly train a hierarchical inference procedure inspired by the message passing mechanics of some approximate inference procedures in graphical models. This approach mitigates both the theoretical and empirical difficulties of learning probabilistic models when exact inference is intractable. In particular, we draw from recent work in machine learning and break the complex inference process into a hierarchical series of simple machine learning subproblems. Each subproblem in the hierarchy is designed to capture the image and contextual statistics in the scene. This hierarchy spans coarse-to-fine regions and explicitly models the mixtures of semantic labels that may be present due to imperfect segmentation. To avoid cascading of errors and overfitting, we train the learning problems in sequence to ensure robustness to likely errors earlier in the inference sequence and leverage the stacking approach developed by Cohen et al.", "title": "" }, { "docid": "109644763e3a5ee5f59ec8e83719cc8d", "text": "The field of Natural Language Processing (NLP) is growing rapidly, with new research published daily along with an abundance of tutorials, codebases and other online resources. In order to learn this dynamic field or stay up-to-date on the latest research, students as well as educators and researchers must constantly sift through multiple sources to find valuable, relevant information. To address this situation, we introduce TutorialBank, a new, publicly available dataset which aims to facilitate NLP education and research. We have manually collected and categorized over 6,300 resources on NLP as well as the related fields of Artificial Intelligence (AI), Machine Learning (ML) and Information Retrieval (IR). Our dataset is notably the largest manually-picked corpus of resources intended for NLP education which does not include only academic papers. Additionally, we have created both a search engine 1 and a command-line tool for the resources and have annotated the corpus to include lists of research topics, relevant resources for each topic, prerequisite relations among topics, relevant subparts of individual resources, among other annotations. We are releasing the dataset and present several avenues for further research.", "title": "" }, { "docid": "851a966bbfee843e5ae1eaf21482ef87", "text": "The Pittsburgh Sleep Quality Index (PSQI) is a widely used measure of sleep quality in adolescents, but information regarding its psychometric strengths and weaknesses in this population is limited. In particular, questions remain regarding whether it measures one or two sleep quality domains. The aims of the present study were to (a) adapt the PSQI for use in adolescents and young adults, and (b) evaluate the psychometric properties of the adapted measure in this population. 
The PSQI was slightly modified to make it more appropriate for use in youth populations and was translated into Spanish for administration to the sample population available to the study investigators. It was then administered with validity criterion measures to a community-based sample of Spanish adolescents and young adults (AYA) between 14 and 24 years old (N = 216). The results indicated that the questionnaire (AYA-PSQI-S) assesses a single factor. The total score evidenced good convergent and divergent validity and moderate reliability (Cronbach's alpha = .72). The AYA-PSQI-S demonstrates adequate psychometric properties for use in clinical trials involving adolescents and young adults. Additional research to further evaluate the reliability and validity of the measure for use in clinical settings is warranted.", "title": "" } ]
scidocsrr
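The first negative passage in the record above builds on a Recency–Frequency–Monetary (RFM) model before adding usage features such as transferred bytes. As a purely illustrative, hypothetical sketch (the column names, toy data, and rank-based 1–5 scoring rule are my own assumptions, not taken from that paper), the snippet below shows how basic RFM scores could be derived from a session log:

```python
import numpy as np
import pandas as pd

# Toy session log for three customers (hypothetical schema).
log = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3],
    "session_date": pd.to_datetime(
        ["2024-01-03", "2024-02-20", "2024-02-25", "2024-02-28", "2023-12-01"]),
    "bytes_total": [1.2e9, 3.4e9, 0.5e9, 0.8e9, 9.0e9],
})
now = pd.Timestamp("2024-03-01")

# Aggregate to one row per customer: days since last session,
# number of sessions, and total traffic volume.
rfm = log.groupby("customer_id").agg(
    recency=("session_date", lambda d: (now - d.max()).days),
    frequency=("session_date", "count"),
    monetary=("bytes_total", "sum"),
)

def score(values: pd.Series, ascending: bool) -> pd.Series:
    """Map values to 1..5 by percentile rank (5 = best)."""
    pct = values.rank(ascending=ascending, pct=True)
    return np.ceil(pct * 5).astype(int)

rfm["R"] = score(rfm["recency"], ascending=False)   # recent usage -> high score
rfm["F"] = score(rfm["frequency"], ascending=True)
rfm["M"] = score(rfm["monetary"], ascending=True)
rfm["RFM"] = rfm["R"].astype(str) + rfm["F"].astype(str) + rfm["M"].astype(str)
print(rfm)
```

Extra predictors such as duration since first use or the slope of the usage curve, as described in the passage, could be appended to this per-customer table before feeding it to a decision-tree learner.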
c5635211f4b70ed2d9f4e5c7e90d6f99
What Do Different Evaluation Metrics Tell Us About Saliency Models?
[ { "docid": "289694f2395a6a2afc7d86d475b9c02d", "text": "Recently, large breakthroughs have been observed in saliency modeling. The top scores on saliency benchmarks have become dominated by neural network models of saliency, and some evaluation scores have begun to saturate. Large jumps in performance relative to previous models can be found across datasets, image types, and evaluation metrics. Have saliency models begun to converge on human performance? In this paper, we re-examine the current state-of-the-art using a finegrained analysis on image types, individual images, and image regions. Using experiments to gather annotations for high-density regions of human eye fixations on images in two established saliency datasets, MIT300 and CAT2000, we quantify up to 60% of the remaining errors of saliency models. We argue that to continue to approach human-level performance, saliency models will need to discover higher-level concepts in images: text, objects of gaze and action, locations of motion, and expected locations of people in images. Moreover, they will need to reason about the relative importance of image regions, such as focusing on the most important person in the room or the most informative sign on the road. More accurately tracking performance will require finer-grained evaluations and metrics. Pushing performance further will require higher-level image understanding.", "title": "" }, { "docid": "37a8fe29046ec94d54e62f202a961129", "text": "Detection of salient image regions is useful for applications like image segmentation, adaptive compression, and region-based image retrieval. In this paper we present a novel method to determine salient regions in images using low-level features of luminance and color. The method is fast, easy to implement and generates high quality saliency maps of the same size and resolution as the input image. We demonstrate the use of the algorithm in the segmentation of semantically meaningful whole objects from digital images.", "title": "" } ]
[ { "docid": "a066ff1b4dfa65a67b79200366021542", "text": "OBJECTIVES\nWe sought to assess the shave biopsy technique, which is a new surgical procedure for complete removal of longitudinal melanonychia. We evaluated the quality of the specimen submitted for pathological examination, assessed the postoperative outcome, and ascertained its indication between the other types of matrix biopsies.\n\n\nDESIGN\nThis was a retrospective study performed at the dermatologic departments of the Universities of Liège and Brussels, Belgium, of 30 patients with longitudinal or total melanonychia.\n\n\nRESULTS\nPathological diagnosis was made in all cases; 23 patients were followed up during a period of 6 to 40 months. Seventeen patients had no postoperative nail plate dystrophy (74%) but 16 patients had recurrence of pigmentation (70%).\n\n\nLIMITATIONS\nThis was a retrospective study.\n\n\nCONCLUSIONS\nShave biopsy is an effective technique for dealing with nail matrix lesions that cause longitudinal melanonychia over 4 mm wide. Recurrence of pigmentation is the main drawback of the procedure.", "title": "" }, { "docid": "72d38fa8fc9ff402b3ee422a9967e537", "text": "With the continuing growth of modern communications technology, demand for image transmission and storage is increasing rapidly. Advances in computer technology for mass storage and digital processing have paved the way for implementing advanced data compression techniques to improve the efficiency of transmission and storage of images. In this paper a large variety of algorithms for image data compression are considered. Starting with simple techniques of sampling and pulse code modulation (PCM), state of the art algorithms for two-dimensional data transmission are reviewed. Topics covered include differential PCM (DPCM) and predictive coding, transform coding, hybrid coding, interframe coding, adaptive techniques, and applications. Effects of channel errors and other miscellaneous related topics are also considered. While most of the examples and image models have been specialized for visual images, the techniques discussed here could be easily adapted more generally for multidimensional data compression. Our emphasis here is on fundamentals of the various techniques. A comprehensive bibliography with comments is included for a reader interested in further details of the theoretical and experimental results discussed here.", "title": "" }, { "docid": "632fd895e8920cd9b25b79c9d4bd4ef4", "text": "In minimally invasive surgery, instruments are inserted from the exterior of the patient’s body into the surgical field inside the body through the minimum incision, resulting in limited visibility, accessibility, and dexterity. To address this problem, surgical instruments with articulated joints and multiple degrees of freedom have been developed. The articulations in currently available surgical instruments use mainly wire or link mechanisms. These mechanisms are generally robust and reliable, but the miniaturization of the mechanical parts required often results in problems with size, weight, durability, mechanical play, sterilization, and assembly costs. We thus introduced a compliant mechanism to a laparoscopic surgical instrument with multiple degrees of freedom at the tip. To show the feasibility of the concept, we developed a prototype with two degrees of freedom articulated surgical instruments that can perform the grasping and bending movements. 
The developed prototype is roughly the same size of the conventional laparoscopic instrument, within the diameter of 4 mm. The elastic parts were fabricated by Ni-Ti alloy and SK-85M, rigid parts ware fabricated by stainless steel, covered by 3D- printed ABS resin. The prototype was designed using iterative finite element method analysis, and has a minimal number of mechanical parts. The prototype showed hysteresis in grasping movement presumably due to the friction; however, the prototype showed promising mechanical characteristics and was fully functional in two degrees of freedom. In addition, the prototype was capable to exert over 15 N grasping that is sufficient for the general laparoscopic procedure. The evaluation tests thus positively showed the concept of the proposed mechanism. The prototype showed promising characteristics in the given mechanical evaluation experiments. Use of a compliant mechanism such as in our prototype may contribute to the advancement of surgical instruments in terms of simplicity, size, weight, dexterity, and affordability.", "title": "" }, { "docid": "84569374aa1adb152aee714d053b082d", "text": "PURPOSE\nTo describe the insertions of the superficial medial collateral ligament (sMCL) and posterior oblique ligament (POL) and their related osseous landmarks.\n\n\nMETHODS\nInsertions of the sMCL and POL were identified and marked in 22 unpaired human cadaveric knees. The surface area, location, positional relations, and morphology of the sMCL and POL insertions and related osseous structures were analyzed on 3-dimensional images.\n\n\nRESULTS\nThe femoral insertion of the POL was located 18.3 mm distal to the apex of the adductor tubercle (AT). The femoral insertion of the sMCL was located 21.1 mm distal to the AT and 9.2 mm anterior to the POL. The angle between the femoral axis and femoral insertion of the sMCL was 18.6°, and that between the femoral axis and the POL insertion was 5.1°. The anterior portions of the distal fibers of the POL were attached to the fascia cruris and semimembranosus tendon, whereas the posterior fibers were attached to the posteromedial side of the tibia directly. The tibial insertion of the POL was located just proximal and medial to the superior edge of the semimembranosus groove. The tibial insertion of the sMCL was attached firmly and widely to the tibial crest. The mean linear distances between the tibial insertion of the POL or sMCL and joint line were 5.8 and 49.6 mm, respectively.\n\n\nCONCLUSIONS\nThis study used 3-dimensional images to assess the insertions of the sMCL and POL and their related osseous landmarks. The AT was identified clearly as an osseous landmark of the femoral insertions of the sMCL and POL. The tibial crest and semimembranosus groove served as osseous landmarks of the tibial insertions of the sMCL and POL.\n\n\nCLINICAL RELEVANCE\nBy showing further details of the anatomy of the knee, the described findings can assist surgeons in anatomic reconstruction of the sMCL and POL.", "title": "" }, { "docid": "152e5d8979eb1187e98ecc0424bb1fde", "text": "Face verification remains a challenging problem in very complex conditions with large variations such as pose, illumination, expression, and occlusions. This problem is exacerbated when we rely unrealistically on a single training data source, which is often insufficient to cover the intrinsically complex face variations. 
This paper proposes a principled multi-task learning approach based on Discriminative Gaussian Process Latent Variable Model (DGPLVM), named GaussianFace, for face verification. In contrast to relying unrealistically on a single training data source, our model exploits additional data from multiple source-domains to improve the generalization performance of face verification in an unknown target-domain. Importantly, our model can adapt automatically to complex data distributions, and therefore can well capture complex face variations inherent in multiple sources. To enhance discriminative power, we introduced a more efficient equivalent form of Kernel Fisher Discriminant Analysis to DGPLVM. To speed up the process of inference and prediction, we exploited the low rank approximation method. Extensive experiments demonstrated the effectiveness of the proposed model in learning from diverse data sources and generalizing to unseen domains. Specifically, the accuracy of our algorithm achieved an impressive accuracy rate of 98.52% on the well-known and challenging Labeled Faces in the Wild (LFW) benchmark. For the first time, the human-level performance in face verification (97.53%) on LFW is surpassed.", "title": "" }, { "docid": "2490ad05628f62881e16338914135d17", "text": "The authors examined the hypothesis that judgments of learning (JOL), if governed by processing fluency during encoding, should be insensitive to the anticipated retention interval. Indeed, neither item-by-item nor aggregate JOLs exhibited \"forgetting\" unless participants were asked to estimate recall rates for several different retention intervals, in which case their estimates mimicked closely actual recall rates. These results and others reported suggest that participants can access their knowledge about forgetting but only when theory-based predictions are made, and then only when the notion of forgetting is accentuated either by manipulating retention interval within individuals or by framing recall predictions in terms of forgetting rather than remembering. The authors interpret their findings in terms of the distinction between experience-based and theory-based JOLs.", "title": "" }, { "docid": "29e07bf313daaa3f6bf1d67224f6e4b6", "text": "An overview of the high-frequency reflectometer technology deployed in Anritsu’s VectorStar Vector Network Analyzer (VNA) family is given, leading to a detailed description of the architecture used to extend the frequency range of VectorStar into the high millimeter waves. It is shown that this technology results in miniature frequency-extension modules that provide unique capabilities such as direct connection to wafer probes, dense multi-port measurements, test-port power leveling, enhanced raw directivity, and reduced measurement complexity when compared with existing solutions. These capabilities, combined with the frequency-scalable nature of the reflectometers provide users with a unique and compelling solution for their current and future high-frequency measurement needs.", "title": "" }, { "docid": "c72940e6154fa31f6bedca17336f8a94", "text": "Following on from ecological theories of perception, such as the one proposed by [Gibson, J. J. (1966). The senses considered as perceptual systems. Boston: Houghton Mifflin] this paper reviews the literature on the multisensory interactions underlying the perception of flavor in order to determine the extent to which it is really appropriate to consider flavor perception as a distinct perceptual system. 
We propose that the multisensory perception of flavor may be indicative of the fact that the taxonomy currently used to define our senses is simply not appropriate. According to the view outlined here, the act of eating allows the different qualities of foodstuffs to be combined into unified percepts; and flavor can be used as a term to describe the combination of tastes, smells, trigeminal, and tactile sensations as well as the visual and auditory cues, that we perceive when tasting food.", "title": "" }, { "docid": "5e5a2edef28c24197df309b37d892b81", "text": "Systemic lupus erythematosus (SLE) is a chronic autoimmune disease and its pathogenesis is unknown. SLE is regulated by complement receptors, proteins and antibodies such as complement receptor 2 (CR2/CD21), anti-dsDNA antibodies, Cysteine p Guanidine DNA (CpG DNA), toll-like receptor 9 (TLR9), interluekin-6 (IL-6), and interferon(IFN-α). Upon activation of plasmacytoid dendritic cells by bacterial CpG DNA or synthetic CpG ODN, these ligands binds to the cell surface CR2 and TLR9 to generate pro inflammatory cytokines via through NF-kB. In this, binding of these ligands induces releases of IFN-α from the plasmacytoid dendritic cells which further binds to IFN-α 1 & 2 receptors present on B cells. This binding was not completely blocked by an anti-IFNαR1 inhibitory antibody, indicating that the released IFN-α may partially binds to the CR2 present on the surface of B cells. IFN-α and IL-6 released from B cells was partially blocked by anti-CR2 inhibitory mAb171. These studies suggested that the cell surface CR2 partially involved in binding these ligands to generate pro inflammatory cytokines. More importantly these CpG DNA or CpG ODN predominantly binds to the cell surface/cellular TLR9 on B cells in order to induce the release of IL-6 and IFN-α, and other pro-inflammatory cytokines. This review describes how the bacterial CpG DNA/CpG motif/ CpG ODN regulate the innate immune system through B cell surface CR2 and TLR9 in B cell signaling.", "title": "" }, { "docid": "da989da66f8c2019adf49eae97fc2131", "text": "Psychedelic drugs are making waves as modern trials support their therapeutic potential and various media continue to pique public interest. In this opinion piece, we draw attention to a long-recognised component of the psychedelic treatment model, namely ‘set’ and ‘setting’ – subsumed here under the umbrella term ‘context’. We highlight: (a) the pharmacological mechanisms of classic psychedelics (5-HT2A receptor agonism and associated plasticity) that we believe render their effects exceptionally sensitive to context, (b) a study design for testing assumptions regarding positive interactions between psychedelics and context, and (c) new findings from our group regarding contextual determinants of the quality of a psychedelic experience and how acute experience predicts subsequent long-term mental health outcomes. 
We hope that this article can: (a) inform on good practice in psychedelic research, (b) provide a roadmap for optimising treatment models, and (c) help tackle unhelpful stigma still surrounding these compounds, while developing an evidence base for long-held assumptions about the critical importance of context in relation to psychedelic use that can help minimise harms and maximise potential benefits.", "title": "" }, { "docid": "0d9420b97012ce445fdf39fb009e32c4", "text": "Greater numbers of young children with complicated, serious physical health, mental health, or developmental problems are entering foster care during the early years when brain growth is most active. Every effort should be made to make foster care a positive experience and a healing process for the child. Threats to a child’s development from abuse and neglect should be understood by all participants in the child welfare system. Pediatricians have an important role in assessing the child’s needs, providing comprehensive services, and advocating on the child’s behalf. The developmental issues important for young children in foster care are reviewed, including: 1) the implications and consequences of abuse, neglect, and placement in foster care on early brain development; 2) the importance and challenges of establishing a child’s attachment to caregivers; 3) the importance of considering a child’s changing sense of time in all aspects of the foster care experience; and 4) the child’s response to stress. Additional topics addressed relate to parental roles and kinship care, parent-child contact, permanency decision-making, and the components of comprehensive assessment and treatment of a child’s development and mental health needs. More than 500 000 children are in foster care in the United States.1,2 Most of these children have been the victims of repeated abuse and prolonged neglect and have not experienced a nurturing, stable environment during the early years of life. Such experiences are critical in the shortand long-term development of a child’s brain and the ability to subsequently participate fully in society.3–8 Children in foster care have disproportionately high rates of physical, developmental, and mental health problems1,9 and often have many unmet medical and mental health care needs.10 Pediatricians, as advocates for children and their families, have a special responsibility to evaluate and help address these needs. Legal responsibility for establishing where foster children live and which adults have custody rests jointly with the child welfare and judiciary systems. Decisions about assessment, care, and planning should be made with sufficient information about the particular strengths and challenges of each child. Pediatricians have an important role in helping to develop an accurate, comprehensive profile of the child. To create a useful assessment, it is imperative that complete health and developmental histories are available to the pediatrician at the time of these evaluations. Pediatricians and other professionals with expertise in child development should be proactive advisors to child protection workers and judges regarding the child’s needs and best interests, particularly regarding issues of placement, permanency planning, and medical, developmental, and mental health treatment plans. 
For example, maintaining contact between children and their birth families is generally in the best interest of the child, and such efforts require adequate support services to improve the integrity of distressed families. However, when keeping a family together may not be in the best interest of the child, alternative placement should be based on social, medical, psychological, and developmental assessments of each child and the capabilities of the caregivers to meet those needs. Health care systems, social services systems, and judicial systems are frequently overwhelmed by their responsibilities and caseloads. Pediatricians can serve as advocates to ensure each child’s conditions and needs are evaluated and treated properly and to improve the overall operation of these systems. Availability and full utilization of resources ensure comprehensive assessment, planning, and provision of health care. Adequate knowledge about each child’s development supports better placement, custody, and treatment decisions. Improved programs for all children enhance the therapeutic effects of government-sponsored protective services (eg, foster care, family maintenance). The following issues should be considered when social agencies intervene and when physicians participate in caring for children in protective services. EARLY BRAIN AND CHILD DEVELOPMENT More children are entering foster care in the early years of life when brain growth and development are most active.11–14 During the first 3 to 4 years of life, the anatomic brain structures that govern personality traits, learning processes, and coping with stress and emotions are established, strengthened, and made permanent.15,16 If unused, these structures atrophy.17 The nerve connections and neurotransmitter networks that are forming during these critical years are influenced by negative environmental conditions, including lack of stimulation, child abuse, or violence within the family.18 It is known that emotional and cognitive disruptions in the early lives of children have the potential to impair brain development.18 Paramount in the lives of these children is their need for continuity with their primary attachment figures and a sense of permanence that is enhanced The recommendations in this statement do not indicate an exclusive course of treatment or serve as a standard of medical care. Variations, taking into account individual circumstances, may be appropriate. PEDIATRICS (ISSN 0031 4005). Copyright © 2000 by the American Acad-", "title": "" }, { "docid": "871298644bc8b7187a20a4803ec7e723", "text": "Intrinsic video decomposition refers to the fundamentally ambiguous task of separating a video stream into its constituent layers, in particular reflectance and shading layers. Such a decomposition is the basis for a variety of video manipulation applications, such as realistic recoloring or retexturing of objects. We present a novel variational approach to tackle this underconstrained inverse problem at real-time frame rates, which enables on-line processing of live video footage. The problem of finding the intrinsic decomposition is formulated as a mixed variational ℓ2-ℓp-optimization problem based on an objective function that is specifically tailored for fast optimization. To this end, we propose a novel combination of sophisticated local spatial and global spatio-temporal priors resulting in temporally coherent decompositions at real-time frame rates without the need for explicit correspondence search. 
We tackle the resulting high-dimensional, non-convex optimization problem via a novel data-parallel iteratively reweighted least squares solver that runs on commodity graphics hardware. Real-time performance is obtained by combining a local-global solution strategy with hierarchical coarse-to-fine optimization. Compelling real-time augmented reality applications, such as recoloring, material editing and retexturing, are demonstrated in a live setup. Our qualitative and quantitative evaluation shows that we obtain high-quality real-time decompositions even for challenging sequences. Our method is able to outperform state-of-the-art approaches in terms of runtime and result quality -- even without user guidance such as scribbles.", "title": "" }, { "docid": "b2e1b184096433db2bbd46cf01ef99c6", "text": "This is a short overview of a totally ordered broadcast protocol used by ZooKeeper, called Zab. It is conceptually easy to understand, is easy to implement, and gives high performance. In this paper we present the requirements ZooKeeper makes on Zab, we show how the protocol is used, and we give an overview of how the protocol works.", "title": "" }, { "docid": "083d621f946cf3ec5fdead536446c23f", "text": "When deciding whether two stimuli rotated in space are identical or mirror reversed, subjects employ mental rotation to solve the task. In children mental rotation can be trained by extensive repetition of the task, but the improvement seems to rely on the retrieval of previously learned stimuli. We assumed that due to the close relation between mental and manual rotation in children a manual training should improve the mental rotation process itself. The manual training we developed indeed ameliorated mental rotation and the training effect was not limited to learned stimuli. While boys outperformed girls in the mental rotation test before the manual rotation training, we found no gender differences in the results of the manual rotation task. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "3500278940baaf6f510ad47463cbf5ed", "text": "Different word embedding models capture different aspects of linguistic properties. This inspired us to propose a model (MMaxLSTM-CNN) for employing multiple sets of word embeddings for evaluating sentence similarity/relation. Representing each word by multiple word embeddings, the MaxLSTM-CNN encoder generates a novel sentence embedding. We then learn the similarity/relation between our sentence embeddings via Multi-level comparison. Our method M-MaxLSTMCNN consistently shows strong performances in several tasks (i.e., measure textual similarity, identify paraphrase, recognize textual entailment). According to the experimental results on STS Benchmark dataset and SICK dataset from SemEval, M-MaxLSTM-CNN outperforms the state-of-the-art methods for textual similarity tasks. Our model does not use hand-crafted features (e.g., alignment features, Ngram overlaps, dependency features) as well as does not require pretrained word embeddings to have the same dimension.", "title": "" }, { "docid": "ee0d858955c3c45ac3d990d3ad9d56ed", "text": "Survival analysis is a subfield of statistics where the goal is to analyze and model data where the outcome is the time until an event of interest occurs. One of the main challenges in this context is the presence of instances whose event outcomes become unobservable after a certain time point or when some instances do not experience any event during the monitoring period. 
This so-called censoring can be handled most effectively using survival analysis techniques. Traditionally, statistical approaches have been widely developed in the literature to overcome the issue of censoring. In addition, many machine learning algorithms have been adapted to deal with such censored data and tackle other challenging problems that arise in real-world data. In this survey, we provide a comprehensive and structured review of the statistical methods typically used and the machine learning techniques developed for survival analysis, along with a detailed taxonomy of the existing methods. We also discuss several topics that are closely related to survival analysis and describe several successful applications in a variety of real-world application domains. We hope that this article will give readers a more comprehensive understanding of recent advances in survival analysis and offer some guidelines for applying these approaches to solve new problems arising in applications involving censored data.", "title": "" }, { "docid": "7de911386f69397afe76e427e7ae3997", "text": "Photonic crystal slabs are a versatile and important platform for molding the flow of light. In this thesis, we consider ways to control the emission of light from photonic crystal slab structures, specifically focusing on directional, asymmetric emission, and on emitting light with interesting topological features. First, we develop a general coupled-mode theory formalism to derive bounds on the asymmetric decay rates to top and bottom of a photonic crystal slab, for a resonance with arbitrary in-plane wavevector. We then employ this formalism to inversionsymmetric structures, and show through numerical simulations that asymmetries of top-down decay rates exceeding 104 can be achieved by tuning the resonance frequency to coincide with the perfectly transmitting Fabry-Perot frequency. The emission direction can also be rapidly switched from top to bottom by tuning the wavevector or frequency. We then consider the generation of Mobius strips of light polarization, i.e. vector beams with half-integer polarization winding, from photonic crystal slabs. We show that a quadratic degeneracy formed by symmetry considerations can be split into a pair of Dirac points, which can be further split into four exceptional points. Through calculations of an analytical two-band model and numerical simulations of two-dimensional photonic crystals and photonic crystal slabs, we demonstrate the existence of isofrequency contours encircling two exceptional points, and show the half-integer polarization winding along these isofrequency contours. We further propose a realistic photonic crystal slab structure and experimental setup to verify the existence of such Mobius strips of light polarization. Thesis Supervisor: Marin Solja-id Title: Professor of Physics and MacArthur Fellow", "title": "" }, { "docid": "c346ddfd1247d335c1a45d094ae2bb60", "text": "In this paper we introduce a novel approach for stereoscopic rendering of virtual environments with a wide Field-of-View (FoV) up to 360°. Handling such a wide FoV implies the use of non-planar projections and generates specific problems such as for rasterization and clipping of primitives. We propose a novel pre-clip stage specifically adapted to geometric approaches for which problems occur with polygons spanning across the projection discontinuities. 
Our approach integrates seamlessly with immersive virtual reality systems as it is compatible with stereoscopy, head-tracking, and multi-surface projections. The benchmarking of our approach with different hardware setups could show that it is well compliant with real-time constraint, and capable of displaying a wide range of FoVs. Thus, our geometric approach could be used in various VR applications in which the user needs to extend the FoV and apprehend more visual information.", "title": "" }, { "docid": "94ea8b56e8ade27c15e8603606003874", "text": "Mistry A, et al. Arch Dis Child Educ Pract Ed 2017;0:1–3. doi:10.1136/archdischild-2017-312905 A woman was admitted for planned induction at 39+5 weeks gestation. This was her third pregnancy. She had two previous children who were fit and well. Antenatal scans showed a fetal intra-abdominal mass measuring 6.2×5.5×7 cm in the lower abdomen, which was compressing the bladder. The mass was thought to be originating from the ovary or the bowel. On postnatal examination, the baby girl had a distended and full abdomen. There was a right-sided abdominal mass palpable above the umbilicus and 3 cm in size. It was firm, smooth and mobile in consistency. She had a normal anus and external female genitalia, with evidence of a prolapsed vagina on crying. She had passed urine and opened her bowels. The baby was kept nil by mouth and on intravenous fluids until the abdominal radiography was performed. The image is shown in figure 1.", "title": "" }, { "docid": "401aa3faf42ccdc2d63f5d76bd7092e4", "text": "We introduce a Markov-model-based framework for Moving Target Defense (MTD) analysis. The framework allows modeling of a broad range of MTD strategies, provides general theorems about how the probability of a successful adversary defeating an MTD strategy is related to the amount of time/cost spent by the adversary, and shows how a multilevel composition of MTD strategies can be analyzed by a straightforward combination of the analysis for each one of these strategies. Within the proposed framework we define the concept of security capacity which measures the strength or effectiveness of an MTD strategy: the security capacity depends on MTD specific parameters and more general system parameters. We apply our framework to two concrete MTD strategies.", "title": "" } ]
scidocsrr
41024f70f912f9cd77714a8823688ba2
An ensemble classifier system for early diagnosis of acute lymphoblastic leukemia in blood microscopic images
[ { "docid": "48b14b78512a8f63d3a9dcdf70d88182", "text": "A cute lymphocytic leukemia (ALL) is a malignant disease characterized by the accumulation of lymphoblast in the bone marrow. An improved scheme for ALL detection in blood microscopic images is presented here. In this study features i.e. hausdorff dimension and contour signature are employed to classify a lymphocytic cell in the blood image into normal lymphocyte or lymphoblast (blasts). In addition shape and texture features are also extracted for better classification. Initial segmentation is done using K-means clustering which segregates leukocytes or white blood cells (WBC) from other blood components i.e. erythrocytes and platelets. The results of K-means are used for evaluating individual cell shape, texture and other features for final detection of leukemia. Fractal features i.e. hausdorff dimension is implemented for measuring perimeter roughness and hence classifying a lymphocytic cell nucleus. A total of 108 blood smear images were considered for feature extraction and final performance evaluation is validated with the results of a hematologist.", "title": "" } ]
[ { "docid": "70e34d4ccd294d7811e344616638a3af", "text": "The rapid development of social media sharing has created a huge demand for automatic media classification and annotation techniques. Attribute learning has emerged as a promising paradigm for bridging the semantic gap and addressing data sparsity via transferring attribute knowledge in object recognition and relatively simple action classification. In this paper, we address the task of attribute learning for understanding multimedia data with sparse and incomplete labels. In particular, we focus on videos of social group activities, which are particularly challenging and topical examples of this task because of their multimodal content and complex and unstructured nature relative to the density of annotations. To solve this problem, we 1) introduce a concept of semilatent attribute space, expressing user-defined and latent attributes in a unified framework, and 2) propose a novel scalable probabilistic topic model for learning multimodal semilatent attributes, which dramatically reduces requirements for an exhaustive accurate attribute ontology and expensive annotation effort. We show that our framework is able to exploit latent attributes to outperform contemporary approaches for addressing a variety of realistic multimedia sparse data learning tasks including: multitask learning, learning with label noise, N-shot transfer learning, and importantly zero-shot learning.", "title": "" }, { "docid": "22d153c01c82117466777842724bbaca", "text": "State-of-the-art photovoltaics use high-purity, large-area, wafer-scale single-crystalline semiconductors grown by sophisticated, high-temperature crystal growth processes. We demonstrate a solution-based hot-casting technique to grow continuous, pinhole-free thin films of organometallic perovskites with millimeter-scale crystalline grains. We fabricated planar solar cells with efficiencies approaching 18%, with little cell-to-cell variability. The devices show hysteresis-free photovoltaic response, which had been a fundamental bottleneck for the stable operation of perovskite devices. Characterization and modeling attribute the improved performance to reduced bulk defects and improved charge carrier mobility in large-grain devices. We anticipate that this technique will lead the field toward synthesis of wafer-scale crystalline perovskites, necessary for the fabrication of high-efficiency solar cells, and will be applicable to several other material systems plagued by polydispersity, defects, and grain boundary recombination in solution-processed thin films.", "title": "" }, { "docid": "58703ec280887ebdcaeba826bf719b62", "text": "The management and conservation of the world's oceans require synthesis of spatial data on the distribution and intensity of human activities and the overlap of their impacts on marine ecosystems. We developed an ecosystem-specific, multiscale spatial model to synthesize 17 global data sets of anthropogenic drivers of ecological change for 20 marine ecosystems. Our analysis indicates that no area is unaffected by human influence and that a large fraction (41%) is strongly affected by multiple drivers. However, large areas of relatively little human impact remain, particularly near the poles. 
The analytical process and resulting maps provide flexible tools for regional and global efforts to allocate conservation resources; to implement ecosystem-based management; and to inform marine spatial planning, education, and basic research.", "title": "" }, { "docid": "b93983990101a9dbd363a5d0aa2e4088", "text": "BPMN is an emerging standard for process modelling and has the potential to become a process specification language to capture and exchange process models between stakeholders and tools. Ongoing research and standardisation efforts target a formal behavioural semantics and metamodel. Yet it is hardly specified how humans are embedded in the processes and how the work distribution among human resources can be defined. This paper addresses these issues by identifying the required model information based on the Workflow Resource Patterns. We evaluate BPMN and the upcoming metamodel standard (BPDM) for their capabilities and propose extensions.", "title": "" }, { "docid": "5e8f88f95910e3dbea995108450f8166", "text": "This paper summarizes ongoing research in NLP (Natural Language Processing) driven citation analysis and describes experiments and motivating examples of how this work can be used to enhance traditional scientometrics analysis that is based on simply treating citations as a “vote” from the citing paper to cited paper. In particular, we describe our dataset for citation polarity and citation purpose, present experimental results on the automatic detection of these indicators, and demonstrate the use of such annotations for studying research dynamics and scientific summarization. We also look at two complementary problems that show up in NLP driven citation analysis for a specific target paper. The first problem is extracting citation context, the implicit citation sentences that do not contain explicit anchors to the target paper. The second problem is extracting reference scope, the target relevant segment of a complicated citing sentence that cites multiple papers. We show how these tasks can be helpful in improving sentiment analysis and citation based summarization. ∗This research was conducted while the authors were at University of Michigan. 2 Rahul Jha and others", "title": "" }, { "docid": "8da42ecb961c885e7e744d15bb79c812", "text": "Danielle S. Bassett, Perry Zurn, and Joshua I. Gold Department of Bioengineering, School of Engineering and Applied Sciences, University of Pennsylvania, Philadelphia, PA, 19104 Department of Physics & Astronomy, College of Arts and Sciences, University of Pennsylvania, Philadelphia, PA, 19104 Department of Electrical & Systems Engineering, School of Engineering and Applied Sciences, University of Pennsylvania, Philadelphia, PA, 19104 Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104 Department of Philosophy, American University, Washington, DC, 20016 Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104 and To whom correspondence should be addressed: dsb@seas.upenn.edu", "title": "" }, { "docid": "5bdf4585df04c00ebcf00ce94a86ab38", "text": "High-voltage pulse-generators can be used effectively for bacterial decontamination in water treatment applications. Applying a pulsed electric field to the infected water sample guarantees killing of harmful germs and bacteria. In this paper, a modular high-voltage pulse-generator with sequential charging is proposed for water treatment via underwater pulsed streamer corona discharge. 
The proposed generator consists of series-connected modules similar to an arm of a modular multilevel converter. The modules' capacitors are charged sequentially from a relatively low-voltage dc supply, then they are connected in series and discharged into the load. Two configurations are proposed in this paper, one for low repetitive pulse rate applications, and the other for high repetitive pulse rates. In the first topology, the equivalent resistance of the infected water sample is used as a charging resistance for the generator's capacitors during the charging process. While in the second topology, the water resistance is bypassed during the charging process, and an external charging resistance with proper value is used instead. In this paper, detailed designs for the proposed pulse-generators are presented and validated by simulation results using MATLAB. A scaled down experimental setup has been built to show the viability of the proposed concept.", "title": "" }, { "docid": "a7d9c920e0cd2521a8df341841c44db4", "text": "bstract. We propose a chromatic aberration (CA) reduction techique that removes artifacts caused by lateral CA and longitudinal A, simultaneously. In general, most visible CA-related artifacts apear locally in the neighborhoods of strong edges. Because these rtifacts usually have local characteristics, they cannot be removed ell by regular global warping methods. Therefore, we designed a onlinear partial differential equation (PDE) in which the local charcteristics of the CA are taken into account. The proposed algorithm stimates the regions with apparent CA artifacts and the ratios of the agnitudes between the color channels. Using this information, the roposed PDE matches the gradients of the edges in the red and lue channels to the gradient in the green channel, which results in n alignment of the positions of the edges while simultaneously perorming a deblurring process on the edges. Experimental results how that the proposed method can effectively remove even signifiant CA artifacts, such as purple fringing as identified by the image ensor. The experimental results show that the proposed algorithm chieves better performance than existing algorithms. © 2010 SPIE nd IS&T. DOI: 10.1117/1.3494278", "title": "" }, { "docid": "0dad686449811de611e9c55dbc9fc255", "text": "Neural networks with tree-based sentence encoders have shown better results on many downstream tasks. Most of existing tree-based encoders adopt syntactic parsing trees as the explicit structure prior. To study the effectiveness of different tree structures, we replace the parsing trees with trivial trees (i.e., binary balanced tree, left-branching tree and right-branching tree) in the encoders. Though trivial trees contain no syntactic information, those encoders get competitive or even better results on all of the ten downstream tasks we investigated. This surprising result indicates that explicit syntax guidance may not be the main contributor to the superior performances of tree-based neural sentence modeling. Further analysis show that tree modeling gives better results when crucial words are closer to the final representation. Additional experiments give more clues on how to design an effective tree-based encoder. Our code is opensource and available at https://github. 
com/ExplorerFreda/TreeEnc.", "title": "" }, { "docid": "bb253cee8f3b8de7c90e09ef878434f3", "text": "Under most widely-used security mechanisms the programs users run possess more authority than is strictly necessary, with each process typically capable of utilising all of the user’s privileges. Consequently such security mechanisms often fail to protect against contemporary threats, such as previously unknown (‘zero-day’) malware and software vulnerabilities, as processes can misuse a user’s privileges to behave maliciously. Application restrictions and sandboxes can mitigate threats that traditional approaches to access control fail to prevent by limiting the authority granted to each process. This developing field has become an active area of research, and a variety of solutions have been proposed. However, despite the seriousness of the problem and the security advantages these schemes provide, practical obstacles have restricted their adoption. This paper describes the motivation for application restrictions and sandboxes, presenting an indepth review of the literature covering existing systems. This is the most comprehensive review of the field to date. The paper outlines the broad categories of existing application-oriented access control schemes, such as isolation and rule-based schemes, and discusses their limitations. Adoption of these schemes has arguably been impeded by workflow, policy complexity, and usability issues. The paper concludes with a discussion on areas for future work, and points a way forward within this developing field of research with recommendations for usability and abstraction to be considered to a further extent when designing application-oriented access", "title": "" }, { "docid": "8a28f3ad78a77922fd500b805139de4b", "text": "Sina Weibo is the most popular and fast growing microblogging social network in China. However, more and more spam messages are also emerging on Sina Weibo. How to detect these spam is essential for the social network security. While most previous studies attempt to detect the microblogging spam by identifying spammers, in this paper, we want to exam whether we can detect the spam by each single Weibo message, because we notice that more and more spam Weibos are posted by normal users or even popular verified users. We propose a Weibo spam detection method based on machine learning algorithm. In addition, different from most existing microblogging spam detection methods which are based on English microblogs, our method is designed to deal with the features of Chinese microblogs. Our extensive empirical study shows the effectiveness of our approach.", "title": "" }, { "docid": "716f8cadac94110c4a00bc81480a4b66", "text": "The last decade has witnessed the prevalence of sensor and GPS technologies that produce a sheer volume of trajectory data representing the motion history of moving objects. Measuring similarity between trajectories is undoubtedly one of the most important tasks in trajectory data management since it serves as the foundation of many advanced analyses such as similarity search, clustering, and classification. In this light, tremendous efforts have been spent on this topic, which results in a large number of trajectory similarity measures. Generally, each individual work introducing a new distance measure has made specific claims on the superiority of their proposal. 
However, for most works, the experimental study was focused on demonstrating the efficiency of the search algorithms, leaving the effectiveness aspect unverified empirically. In this paper, we conduct a comparative experimental study on the effectiveness of six widely used trajectory similarity measures based on a real taxi trajectory dataset. By applying a variety of transformations we designed for each original trajectory, our experimental observations demonstrate the advantages and drawbacks of these similarity measures in different circumstances.", "title": "" }, { "docid": "1e06f7e6b7b0d3f9a21a814e50af6e3c", "text": "The context-dependent nature of online aggression makes annotating large collections of data extremely difficult. Previously studied datasets in abusive language detection have been insufficient in size to efficiently train deep learning models. Recently, Hate and Abusive Speech on Twitter, a dataset much greater in size and reliability, has been released. However, this dataset has not been comprehensively studied to its potential. In this paper, we conduct the first comparative study of various learning models on Hate and Abusive Speech on Twitter, and discuss the possibility of using additional features and context data for improvements. Experimental results show that bidirectional GRU networks trained on word-level features, with Latent Topic Clustering modules, is the most accurate model scoring 0.805 F1.", "title": "" }, { "docid": "5bd713c468f48313e42b399f441bb709", "text": "Nowadays, malware is affecting not only PCs but also mobile devices, which became pervasive in everyday life. Mobile devices can access and store personal information (e.g., location, photos, and messages) and thus are appealing to malware authors. One of the most promising approach to analyze malware is by monitoring its execution in a sandbox (i.e., via dynamic analysis). In particular, most malware sandboxing solutions for Android rely on an emulator, rather than a real device. This motivates malware authors to include runtime checks in order to detect whether the malware is running in a virtualized environment. In that case, the malicious app does not trigger the malicious payload. The presence of differences between real devices and Android emulators started an arms race between security researchers and malware authors, where the former want to hide these differences and the latter try to seek them out. In this paper we present Mirage, a malware sandbox architecture for Android focused on dynamic analysis evasion attacks. We designed the components of Mirage to be extensible via software modules, in order to build specific countermeasures against such attacks. To the best of our knowledge, Mirage is the first modular sandbox architecture that is robust against sandbox detection techniques. As a representative case study, we present a proof of concept implementation of Mirage with a module that tackles evasion attacks based on sensors API return values.", "title": "" }, { "docid": "4f2ebb2640a36651fd8c01f3eeb0e13e", "text": "This paper addresses pixel-level segmentation of a human body from a single image. The problem is formulated as a multi-region segmentation where the human body is constrained to be a collection of geometrically linked regions and the background is split into a small number of distinct zones. We solve this problem in a Bayesian framework for jointly estimating articulated body pose and the pixel-level segmentation of each body part. 
Using an image likelihood function that simultaneously generates and evaluates the image segmentation corresponding to a given pose, we robustly explore the posterior body shape distribution using a data-driven, coarse-to-fine Metropolis Hastings sampling scheme that includes a strongly data-driven proposal term.", "title": "" }, { "docid": "9ba51fcf04fe9dff5bf368a55fa2a1aa", "text": "In social media, demographic inference is a critical task in order to gain a better understanding of a cohort and to facilitate interacting with one’s audience. Most previous work has made independence assumptions over topological, textual and label information on social networks. In this work, we employ recursive neural networks to break down these independence assumptions to obtain inference about demographic characteristics on Twitter. We show that our model performs better than existing models including the state-of-theart.", "title": "" }, { "docid": "1d964bb1b82e6de71a6407967a8d9fa0", "text": "Ensuring reliable access to clean and affordable water is one of the greatest global challenges of this century. As the world's population increases, water pollution becomes more complex and difficult to remove, and global climate change threatens to exacerbate water scarcity in many areas, the magnitude of this challenge is rapidly increasing. Wastewater reuse is becoming a common necessity, even as a source of potable water, but our separate wastewater collection and water supply systems are not designed to accommodate this pressing need. Furthermore, the aging centralized water and wastewater infrastructure in the developed world faces growing demands to produce higher quality water using less energy and with lower treatment costs. In addition, it is impractical to establish such massive systems in developing regions that currently lack water and wastewater infrastructure. These challenges underscore the need for technological innovation to transform the way we treat, distribute, use, and reuse water toward a distributed, differential water treatment and reuse paradigm (i.e., treat water and wastewater locally only to the required level dictated by the intended use). Nanotechnology offers opportunities to develop next-generation water supply systems. This Account reviews promising nanotechnology-enabled water treatment processes and provides a broad view on how they could transform our water supply and wastewater treatment systems. The extraordinary properties of nanomaterials, such as high surface area, photosensitivity, catalytic and antimicrobial activity, electrochemical, optical, and magnetic properties, and tunable pore size and surface chemistry, provide useful features for many applications. These applications include sensors for water quality monitoring, specialty adsorbents, solar disinfection/decontamination, and high performance membranes. More importantly, the modular, multifunctional and high-efficiency processes enabled by nanotechnology provide a promising route both to retrofit aging infrastructure and to develop high performance, low maintenance decentralized treatment systems including point-of-use devices. Broad implementation of nanotechnology in water treatment will require overcoming the relatively high costs of nanomaterials by enabling their reuse and mitigating risks to public and environmental health by minimizing potential exposure to nanoparticles and promoting their safer design. 
The development of nanotechnology must go hand in hand with environmental health and safety research to alleviate unintended consequences and contribute toward sustainable water management.", "title": "" }, { "docid": "49a778b673ea65340e2bc2ebce8472a2", "text": "Motorcycles have always been the primary mode of transport in developing countries. In recent years, there has been a rise in motorcycle accidents. One of the major reasons for fatalities in accidents is the motorcyclist not wearing a protective helmet. The most prevalent method for ensuring that motorcyclists wear helmet is traffic police manually monitoring motorcyclists at road junctions or through CCTV footage and penalizing those without helmet. But, it requires human intervention and efforts. This paper proposes an automated system for detecting motorcyclists not wearing helmet and retrieving their motorcycle number plates from CCTV footage video. The proposed system first does background subtraction from video to get moving objects. Then, moving objects are classified as motorcyclist or non-motorcyclist. For classified motorcyclist, head portion is located and it is classified as helmet or non-helmet. Finally, for identified motorcyclist without helmet, number plate of motorcycle is detected and the characters on it are extracted. The proposed system uses Convolutional Neural Networks trained using transfer learning on top of pre-trained model for classification which has helped in achieving greater accuracy. Experimental results on traffic videos show an accuracy of 98.72% on detection of motorcyclists without helmet.", "title": "" }, { "docid": "8d49e37ab80dae285dbf694ba1849f68", "text": "In this paper we present a reference architecture for ETL stages of EDM and LA that works with different data formats and different extraction sites, ensuring privacy and making easier for new participants to enter into the process without demanding more computing power. Considering scenarios with a multitude of virtual environments hosting educational activities, accessible through a common infrastructure, we devised a reference model where data generated from interaction between users and among users and the environment itself, are selected, organized and stored in local “baskets”. Local baskets are then collected and grouped in a global basket. Organization resources like item modeling are used in both levels of basket construction. Using this reference upon a client-server architectural style, a reference architecture was developed and has been used to carry out a project for an official foundation linked to Brazilian Ministry of Education, involving educational data mining and sharing of 100+ higher education institutions and their respective virtual environments. In this architecture, a client-collector inside each virtual environment collects information from database and event logs. This information along with definitions obtained from item models are used to build local baskets. A synchronization protocol keeps all item models synced with client-collectors and server-collectors generating global baskets. This approach has shown improvements on ETL like: parallel processing of items, economy on storage space and bandwidth, privacy assurance, better tenacity, and good scalability.", "title": "" }, { "docid": "42d3f666325c3c9e2d61fcbad3c6659a", "text": "Supernumerary or accessory nostrils are a very rare type of congenital nasal anomaly, with only a few cases reported in the literature. 
They can be associated with such malformations as facial clefts and they can be unilateral or bilateral, with most cases reported being unilateral. The accessory nostril may or may not communicate with the ipsilateral nasal cavity, probably depending on the degree of embryological progression of the anomaly. A case of simple supernumerary left nostril with no nasal cavity communication and with a normally developed nose is presented. The surgical treatment is described and the different speculative theories related to the embryogenesis of supernumerary nostrils are also reviewed.", "title": "" } ]
scidocsrr
474f8eb990c2361d81421943fa55ff87
Angewandte Mathematik und Informatik, Universität zu Köln: Level Planar Embedding in Linear Time
[ { "docid": "0c81db10ea2268b640073e3aaa49cb35", "text": "A data structure called a PQ-tree is introduced. PQ-trees can be used to represent the permutations of a set U in which various subsets of U occur consecutively. Efficient algorithms are presented for manipulating PQ-trees. Algorithms using PQ-trecs are then given which test for the consecutive ones property in matrices and for graph planarity. The consecutive ones test is extended to a test for interval graphs using a recently discovered fast recognition algorithm for chordal graphs. All of these algorithms require a number of steps linear in the size of their input.", "title": "" } ]
[ { "docid": "202439978e4bece800aa42b1fea99d7b", "text": "Although they are primitive vertebrates, zebrafish (Danio rerio) and medaka (Oryzias latipes) have surpassed other animals as the most used model organisms based on their many advantages. Studies on gene expression patterns, regulatory cis-elements identification, and gene functions can be facilitated by using zebrafish embryos via a number of techniques, including transgenesis, in vivo transient assay, overexpression by injection of mRNAs, knockdown by injection of morpholino oligonucleotides, knockout and gene editing by CRISPR/Cas9 system and mutagenesis. In addition, transgenic lines of model fish harboring a tissue-specific reporter have become a powerful tool for the study of biological sciences, since it is possible to visualize the dynamic expression of a specific gene in the transparent embryos. In particular, some transgenic fish lines and mutants display defective phenotypes similar to those of human diseases. Therefore, a wide variety of fish model not only sheds light on the molecular mechanisms underlying disease pathogenesis in vivo but also provides a living platform for high-throughput screening of drug candidates. Interestingly, transgenic model fish lines can also be applied as biosensors to detect environmental pollutants, and even as pet fish to display beautiful fluorescent colors. Therefore, transgenic model fish possess a broad spectrum of applications in modern biomedical research, as exampled in the following review.", "title": "" }, { "docid": "f4b92c53dc001d06489093ff302384b2", "text": "Computational topology has recently known an important development toward data analysis, giving birth to the field of topological data analysis. Topological persistence, or persistent homology, appears as a fundamental tool in this field. In this paper, we study topological persistence in general metric spaces, with a statistical approach. We show that the use of persistent homology can be naturally considered in general statistical frameworks and persistence diagrams can be used as statistics with interesting convergence properties. Some numerical experiments are performed in various contexts to illustrate our results.", "title": "" }, { "docid": "d157d7b6e1c5796b6d7e8fedf66e81d8", "text": "Intrusion detection for computer network systems becomes one of the most critical tasks for network administrators today. It has an important role for organizations, governments and our society due to its valuable resources on computer networks. Traditional misuse detection strategies are unable to detect new and unknown intrusion. Besides , anomaly detection in network security is aim to distinguish between illegal or malicious events and normal behavior of network systems. Anomaly detection can be considered as a classification problem where it builds models of normal network behavior, which it uses to detect new patterns that significantly deviate from the model. Most of the current research on anomaly detection is based on the learning of normally and anomaly behaviors. They do not take into account the previous, recent events to detect the new incoming one. In this paper, we propose a real time collective anomaly detection model based on neural network learning and feature operating. Normally a Long Short-Term Memory Recurrent Neural Network (LSTM RNN) is trained only on normal data and it is capable of predicting several time steps ahead of an input. 
In our approach, a LSTM RNN is trained with normal time series data before performing a live prediction for each time step. Instead of considering each time step separately, the observation of prediction errors from a certain number of time steps is now proposed as a new idea for detecting collective anomalies. The prediction errors from a number of the latest time steps above a threshold will indicate a collective anomaly. The model is built on a time series version of the KDD 1999 dataset. The experiments demonstrate that it is possible to offer reliable and efficient for collective anomaly detection.", "title": "" }, { "docid": "3ccc5fd5bbf570a361b40afca37cec92", "text": "Face detection techniques have been developed for decades, and one of remaining open challenges is detecting small faces in unconstrained conditions. The reason is that tiny faces are often lacking detailed information and blurring. In this paper, we proposed an algorithm to directly generate a clear high-resolution face from a blurry small one by adopting a generative adversarial network (GAN). Toward this end, the basic GAN formulation achieves it by super-resolving and refining sequentially (e.g. SR-GAN and cycle-GAN). However, we design a novel network to address the problem of super-resolving and refining jointly. We also introduce new training losses to guide the generator network to recover fine details and to promote the discriminator network to distinguish real vs. fake and face vs. non-face simultaneously. Extensive experiments on the challenging dataset WIDER FACE demonstrate the effectiveness of our proposed method in restoring a clear high-resolution face from a blurry small one, and show that the detection performance outperforms other state-of-the-art methods.", "title": "" }, { "docid": "af5aaf2d834eec9bf5e47a89be6a30d8", "text": "An often-cited advantage of automatic speech recognition (ASR) is that it is ‘fast’; it is quite easy for a person to speak at several hundred words a minute, well above the rates that are possible using other modes of data entry. However, in order to conduct a fair comparison between alternative data entry methods, it is necessary to consider not the input rate per se, but the rate at which it is possible to enter information that is fully correct. This paper describes a model for predicting the relative success of alternative method of data entry in terms of the effective ‘throughput’ that is achievable taking into account typical input data entry rates, error rates and error correction times. Results are presented for the entry of both conventional and SMS-style text.", "title": "" }, { "docid": "ada320bb2747d539ff6322bbd46bd9f0", "text": "Real job applicants completed a 5-factor model personality measure as part of the job application process. They were rejected; 6 months later they (n = 5,266) reapplied for the same job and completed the same personality measure. Results indicated that 5.2% or fewer improved their scores on any scale on the 2nd occasion; moreover, scale scores were as likely to change in the negative direction as the positive. Only 3 applicants changed scores on all 5 scales beyond a 95% confidence threshold. Construct validity of the personality scales remained intact across the 2 administrations, and the same structural model provided an acceptable fit to the scale score matrix on both occasions. 
For the small number of applicants whose scores changed beyond the standard error of measurement, the authors found the changes were systematic and predictable using measures of social skill, social desirability, and integrity. Results suggest that faking on personality measures is not a significant problem in real-world selection settings.", "title": "" }, { "docid": "ff71838a3f8f44e30dc69ed2f9371bfc", "text": "The idea that video games or computer-based applications can improve cognitive function has led to a proliferation of programs claiming to \"train the brain.\" However, there is often little scientific basis in the development of commercial training programs, and many research-based programs yield inconsistent or weak results. In this study, we sought to better understand the nature of cognitive abilities tapped by casual video games and thus reflect on their potential as a training tool. A moderately large sample of participants (n=209) played 20 web-based casual games and performed a battery of cognitive tasks. We used cognitive task analysis and multivariate statistical techniques to characterize the relationships between performance metrics. We validated the cognitive abilities measured in the task battery, examined a task analysis-based categorization of the casual games, and then characterized the relationship between game and task performance. We found that games categorized to tap working memory and reasoning were robustly related to performance on working memory and fluid intelligence tasks, with fluid intelligence best predicting scores on working memory and reasoning games. We discuss these results in the context of overlap in cognitive processes engaged by the cognitive tasks and casual games, and within the context of assessing near and far transfer. While this is not a training study, these findings provide a methodology to assess the validity of using certain games as training and assessment devices for specific cognitive abilities, and shed light on the mixed transfer results in the computer-based training literature. Moreover, the results can inform design of a more theoretically-driven and methodologically-sound cognitive training program.", "title": "" }, { "docid": "772b550b1193ee9627cd458c1bac52a6", "text": "We will describe recent developments in a system for machine learning that we’ve been working on for some time (Sol 86, Sol 89). It is meant to be a “Scientist’s Assistant” of great power and versatility in many areas of science and mathematics. It differs from other ambitious work in this area in that we are not so much interested in knowledge itself, as we are in how it is acquired how machines may learn. To start off, the system will learn to solve two very general kinds of problems. Most, but perhaps not all problems in science and engineering are of these two kinds. The first kind is Function Inversion. These are the P and NP problems of computational complexity theory. They include theorem proving, solution of equations, symbolic integration, etc. The second kind of problem is Time Limited Optimization. Inductive inference of all kinds, surface reconstruction, and image restoration are a few examples of this kind of problem. Designing an automobile in 6 months satisfying certain specifications and having minimal cost, is", "title": "" }, { "docid": "6b5a7e58a8407fa5cda402d4996a3a10", "text": "In the last few years, Hadoop become a \"de facto\" standard to process large scale data as an open source distributed system. 
In combination with data mining techniques, Hadoop improves data analysis utility, which is why a considerable amount of research has studied how to apply data mining techniques to the MapReduce framework in Hadoop. However, data mining can cause privacy violations, and this threat is a major obstacle to data mining with Hadoop. Numerous studies have been conducted to solve this problem, but existing approaches remain insufficient and have several drawbacks. In this paper, we propose a privacy-preserving data mining technique for Hadoop that prevents privacy violations without utility degradation. We focus on association rule mining, a representative data mining algorithm. We validate through experimental results that the proposed technique maintains performance while preserving data privacy.", "title": "" }, { "docid": "28c82ece7caa6e07bf31a143c2d3adbd", "text": "We develop a novel method for training of GANs for unsupervised and class conditional generation of images, called Linear Discriminant GAN (LD-GAN). The discriminator of an LD-GAN is trained to maximize the linear separability between distributions of hidden representations of generated and targeted samples, while the generator is updated based on the decision hyper-planes computed by performing LDA over the hidden representations. LD-GAN provides a concrete metric of separation capacity for the discriminator, and we experimentally show that it is possible to stabilize the training of LD-GAN simply by calibrating the update frequencies between generators and discriminators in the unsupervised case, without employment of normalization methods and constraints on weights. In the class conditional generation tasks, the proposed method shows improved training stability together with better generalization performance compared to WGAN (Arjovsky et al. 2017) that employs an auxiliary classifier.", "title": "" }, { "docid": "a88266320346fd1f518d7e3bdc14a6d6", "text": "Machine learning (ML) is now a fairly established technology, and user experience (UX) designers appear regularly to integrate ML services in new apps, devices, and systems. Interestingly, this technology has not experienced a wealth of design innovation that other technologies have, and this might be because it is a new and difficult design material. To better understand why we have witnessed little design innovation, we conducted a survey of current UX practitioners with regards to how new ML services are envisioned and developed in UX practice. Our survey probed on how ML may or may not have been a part of their UX design education, on how they work to create new things with developers, and on the challenges they have faced working with this material. We use the findings from this survey and our review of related literature to present a series of challenges for UX and interaction design research and education. Finally, we discuss areas where new research and new curriculum might help our community unlock the power of design thinking to re-imagine what ML might be and might do.", "title": "" }, { "docid": "ad8825642d101f9e43522066355467c7", "text": "Inverse Reinforcement Learning (IRL) describes the problem of learning an unknown reward function of a Markov Decision Process (MDP) from observed behavior of an agent. Since the agent’s behavior originates in its policy and MDP policies depend on both the stochastic system dynamics as well as the reward function, the solution of the inverse problem is significantly influenced by both.
Current IRL approaches assume that if the transition model is unknown, additional samples from the system’s dynamics are accessible, or the observed behavior provides enough samples of the system’s dynamics to solve the inverse problem accurately. These assumptions are often not satisfied. To overcome this, we present a gradient-based IRL approach that simultaneously estimates the system’s dynamics. By solving the combined optimization problem, our approach takes into account the bias of the demonstrations, which stems from the generating policy. The evaluation on a synthetic MDP and a transfer learning task shows improvements regarding the sample efficiency as well as the accuracy of the estimated reward functions and transition models.", "title": "" }, { "docid": "d4ac52a52e780184359289ecb41e321e", "text": "Interleaving is an increasingly popular technique for evaluating information retrieval systems based on implicit user feedback. While a number of isolated studies have analyzed how this technique agrees with conventional offline evaluation approaches and other online techniques, a complete picture of its efficiency and effectiveness is still lacking. In this paper we extend and combine the body of empirical evidence regarding interleaving, and provide a comprehensive analysis of interleaving using data from two major commercial search engines and a retrieval system for scientific literature. In particular, we analyze the agreement of interleaving with manual relevance judgments and observational implicit feedback measures, estimate the statistical efficiency of interleaving, and explore the relative performance of different interleaving variants. We also show how to learn improved credit-assignment functions for clicks that further increase the sensitivity of interleaving.", "title": "" }, { "docid": "c938996e79711cae64bdcc23d7e3944b", "text": "Decreased antimicrobial efficiency has become a global public health issue. The paucity of new antibacterial drugs is evident, and the arsenal against infectious diseases needs to be improved urgently. The selection of plants as a source of prototype compounds is appropriate, since plant species naturally produce a wide range of secondary metabolites that act as a chemical line of defense against microorganisms in the environment. Although traditional approaches to combat microbial infections remain effective, targeting microbial virulence rather than survival seems to be an exciting strategy, since the modulation of virulence factors might lead to a milder evolutionary pressure for the development of resistance. Additionally, anti-infective chemotherapies may be successfully achieved by combining antivirulence and conventional antimicrobials, extending the lifespan of these drugs. This review presents an updated discussion of natural compounds isolated from plants with chemically characterized structures and activity against the major bacterial virulence factors: quorum sensing, bacterial biofilms, bacterial motility, bacterial toxins, bacterial pigments, bacterial enzymes, and bacterial surfactants. Moreover, a critical analysis of the most promising virulence factors is presented, highlighting their potential as targets to attenuate bacterial virulence. 
The ongoing progress in the field of antivirulence therapy may therefore help to translate this promising concept into real intervention strategies in clinical areas.", "title": "" }, { "docid": "33d65d9ae8575d9de3b6a7cf0c30db37", "text": "The prediction of collisions amongst N rigid objects may be reduced to a series of computations of the time to first contact for all pairs of objects. Simple enclosing bounds and hierarchical partitions of the space-time domain are often used to avoid testing object-pairs that clearly will not collide. When the remaining pairs involve only polyhedra under straight-line translation, the exact computation of the collision time and of the contacts requires only solving for intersections between linear geometries. When a pair is subject to a more general relative motion, such a direct collision prediction calculation may be intractable. The popular brute force collision detection strategy of executing the motion for a series of small time steps and of checking for static interferences after each step is often computationally prohibitive. We propose instead a less expensive collision prediction strategy, where we approximate the relative motion between pairs of objects by a sequence of screw motion segments, each defined by the relative position and orientation of the two objects at the beginning and at the end of the segment. We reduce the computation of the exact collision time and of the corresponding face/vertex and edge/edge collision points to the numeric extraction of the roots of simple univariate analytic functions. Furthermore, we propose a series of simple rejection tests, which exploit the particularity of the screw motion to immediately decide that some objects do not collide or to speed-up the prediction of collisions by about 30%, avoiding on average 3/4 of the root-finding queries even when the object actually collide.", "title": "" }, { "docid": "c53d4c50930078ac4f49e4bca7ff7485", "text": "A versatile 4-channel bipotentiostat system for biochemical sensing is presented. A 1pA current resolution and 8kHz bandwidth are suited for amperometric detection of neurotransmitters released by cells, monitored in a smart microfluidic culture chamber. Multiple electrochemical measurements can be carried out on arrays of microelectrodes. Key design issues are here discussed along with the results of extensive electrochemical experiments (cyclic voltammetry, chronoamperometry, redox recycling and potentiometry).", "title": "" }, { "docid": "9d7852606784ecb8501d5b26b1b98f7f", "text": "This work describes a visualization tool and sensor testbed that can be used for assessing the performance of both instruments and human observers in support of port and harbor security. Simulation and modeling of littoral environments must take into account the complex interplay of incident light distributions, spatially correlated boundary interfaces, bottom-type variation, and the three-dimensional structure of objects in and out of the water. A general methodology for a two-pass Monte Carlo solution called Photon Mapping has been adopted and developed in the context of littoral hydrologic optics. The resulting tool is an end-to-end technique for simulating spectral radiative transfer in natural waters. A modular design allows arbitrary distributions of optical properties, geometries, and incident radiance to be modeled effectively. This tool has been integrated as part of the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. 
DIRSIG has an established history in multi and hyperspectral scene simulation of terrain targets ranging from the visible to the thermal infrared (0.380 20.0 microns). This tool extends its capabilities to the domain of hydrologic optics and can be used to simulate and develop active/passive sensors that could be deployed on either aerial or underwater platforms. Applications of this model as a visualization tool for underwater sensors or divers are also demonstrated.", "title": "" }, { "docid": "c4f30733a0a27f5b6a5e64ffdbcc60fa", "text": "The RLK/Pelle gene family is one of the largest gene families in plants with several hundred to more than a thousand members, but only a few family members exist in animals. This unbalanced distribution indicates a rather dramatic expansion of this gene family in land plants. In this chapter we review what is known about the RLK/Pelle family’s origin in eukaryotes, its domain content evolution, expansion patterns across plant and animal species, and the duplication mechanisms that contribute to its expansion. We conclude by summarizing current knowledge of plant RLK/Pelle functions for a discussion on the relative importance of neutral evolution and natural selection as the driving forces behind continuous expansion and innovation in this gene family.", "title": "" }, { "docid": "d2b06786b6daa023dfd9f58ac99e8186", "text": "A systematic method for deriving soft-switching three-port converters (TPCs), which can interface multiple energy, is proposed in this paper. Novel full-bridge (FB) TPCs featuring single-stage power conversion, reduced conduction loss, and low-voltage stress are derived. Two nonisolated bidirectional power ports and one isolated unidirectional load port are provided by integrating an interleaved bidirectional Buck/Boost converter and a bridgeless Boost rectifier via a high-frequency transformer. The switching bridges on the primary side are shared; hence, the number of active switches is reduced. Primary-side pulse width modulation and secondary-side phase shift control strategy are employed to provide two control freedoms. Voltage and power regulations over two of the three power ports are achieved. Furthermore, the current/voltage ripples on the primary-side power ports are reduced due to the interleaving operation. Zero-voltage switching and zero-current switching are realized for the active switches and diodes, respectively. A typical FB-TPC with voltage-doubler rectifier developed by the proposed method is analyzed in detail. Operation principles, control strategy, and characteristics of the FB-TPC are presented. Experiments have been carried out to demonstrate the feasibility and effectiveness of the proposed topology derivation method.", "title": "" }, { "docid": "0e74994211d0e3c1e85ba0c85aba3df5", "text": "Images of faces manipulated to make their shapes closer to the average are perceived as more attractive. The influences of symmetry and averageness are often confounded in studies based on full-face views of faces. Two experiments are reported that compared the effect of manipulating the averageness of female faces in profile and full-face views. Use of a profile view allows a face to be \"morphed\" toward an average shape without creating an image that becomes more symmetrical. Faces morphed toward the average were perceived as more attractive in both views, but the effect was significantly stronger for full-face views. Both full-face and profile views morphed away from the average shape were perceived as less attractive. 
It is concluded that the effect of averageness is independent of any effect of symmetry on the perceived attractiveness of female faces.", "title": "" } ]
scidocsrr
8660ab87ee327c21c41fe597b20ef4de
An Artificial Intelligence Approach to Financial Fraud Detection under IoT Environment: A Survey and Implementation
[ { "docid": "007706ad8c73376db70af36a66cedf14", "text": "— With the developments in the Information Technology and improvements in the communication channels, fraud is spreading all over the world, resulting in huge financial losses. Though fraud prevention mechanisms such as CHIP&PIN are developed, these mechanisms do not prevent the most common fraud types such as fraudulent credit card usages over virtual POS terminals or mail orders. As a result, fraud detection is the essential tool and probably the best way to stop such fraud types. In this study, classification models based on decision trees and support vector machines (SVM) are developed and applied on credit card fraud detection problem. This study is one of the firsts to compare the performance of SVM and decision tree methods in credit card fraud detection with a real data set.", "title": "" }, { "docid": "e43c27b652de5c015450f542c1eb8dd2", "text": "Financial fraud is increasing significantly with the development of modern technology and the global superhighways of communication, resulting in the loss of billions of dollars worldwide each year. The companies and financial institution loose huge amounts due to fraud and fraudsters continuously try to find new rules and tactics to commit illegal actions. Thus, fraud detection systems have become essential for all credit card issuing banks to minimize their losses. The most commonly used fraud detection methods are Neural Network (NN), rule-induction techniques, fuzzy system, decision trees, Support Vector Machines (SVM), Artificial Immune System (AIS), genetic algorithms, K-Nearest Neighbor algorithms. These techniques can be used alone or in collaboration using ensemble or meta-learning techniques to build classifiers. This paper presents a survey of various techniques used in credit card fraud detection and evaluates each methodology based on certain design criteria. And this survey enables us to build a hybrid approach for developing some effective algorithms which can perform well for the classification problem with variable misclassification costs and with higher accuracy.", "title": "" }, { "docid": "f36348f2909a9642c18590fca6c9b046", "text": "This study explores the use of data mining methods to detect fraud for on e-ledgers through financial statements. For this purpose, data set were produced by rule-based control application using 72 sample e-ledger and error percentages were calculated and labeled. The financial statements created from the labeled e-ledgers were trained by different data mining methods on 9 distinguishing features. In the training process, Linear Regression, Artificial Neural Networks, K-Nearest Neighbor algorithm, Support Vector Machine, Decision Stump, M5P Tree, J48 Tree, Random Forest and Decision Table were used. The results obtained are compared and interpreted.", "title": "" }, { "docid": "66248db37a0dcf8cb17c075108b513b4", "text": "Since past few years there is tremendous advancement in electronic commerce technology, and the use of credit cards has dramatically increased. As credit card becomes the most popular mode of payment for both online as well as regular purchase, cases of fraud associated with it are also rising. In this paper we present the necessary theory to detect fraud in credit card transaction processing using a Hidden Markov Model (HMM). An HMM is initially trained with the normal behavior of a cardholder. 
If an incoming credit card transaction is not accepted by the trained HMM with sufficiently high probability, it is considered to be fraudulent. At the same time, we try to ensure that genuine transactions are not rejected by using an enhancement to it (a hybrid model). In further sections we compare different methods for fraud detection and show why the HMM is preferable to other methods.", "title": "" }, { "docid": "5523695d47205129d0e5f6916d2d14f1", "text": "A phenomenal growth in the number of credit card transactions, especially for online purchases, has recently led to a substantial rise in fraudulent activities. Implementation of efficient fraud detection systems has thus become imperative for all credit card issuing banks to minimize their losses. In real life, fraudulent transactions are interspersed with genuine transactions and simple pattern matching is not often sufficient to detect them accurately. Thus, there is a need for combining both anomaly detection as well as misuse detection techniques. In this paper, we propose to use two-stage sequence alignment in which a profile analyzer (PA) first determines the similarity of an incoming sequence of transactions on a given credit card with the genuine cardholder's past spending sequences. The unusual transactions traced by the profile analyzer are next passed on to a deviation analyzer (DA) for possible alignment with past fraudulent behavior. The final decision about the nature of a transaction is taken on the basis of the observations by these two analyzers. In order to achieve online response time for both PA and DA, we suggest a new approach for combining two sequence alignment algorithms BLAST and SSAHA.", "title": "" } ]
[ { "docid": "74235290789c24ce00d54541189a4617", "text": "This article deals with an interesting application of Fractional Order (FO) Proportional Integral Derivative (PID) Controller for speed regulation in a DC Motor Drive. The design of five interdependent Fractional Order controller parameters has been formulated as an optimization problem based on minimization of set point error and controller output. The task of optimization was carried out using Artificial Bee Colony (ABC) algorithm. A comparative study has also been made to highlight the advantage of using a Fractional order PID controller over conventional PID control scheme for speed regulation of application considered. Extensive simulation results are provided to validate the effectiveness of the proposed approach.", "title": "" }, { "docid": "06ef397d13383ff09f2f6741c0626192", "text": "A fully-integrated low-dropout regulator (LDO) with fast transient response and full spectrum power supply rejection (PSR) is proposed to provide a clean supply for noise-sensitive building blocks in wideband communication systems. With the proposed point-of-load LDO, chip-level high-frequency glitches are well attenuated, consequently the system performance is improved. A tri-loop LDO architecture is proposed and verified in a 65 nm CMOS process. In comparison to other fully-integrated designs, the output pole is set to be the dominant pole, and the internal poles are pushed to higher frequencies with only 50 μA of total quiescent current. For a 1.2 V input voltage and 1 V output voltage, the measured undershoot and overshoot is only 43 mV and 82 mV, respectively, for load transient of 0 μA to 10 mA within edge times of 200 ps. It achieves a transient response time of 1.15 ns and the figure-of-merit (FOM) of 5.74 ps. PSR is measured to be better than -12 dB over the whole spectrum (DC to 20 GHz tested). The prototype chip measures 260×90 μm2, including 140 pF of stacked on-chip capacitors.", "title": "" }, { "docid": "4d0921d8dd1004f0eed02df0ff95a092", "text": "The “open classroom” emerged as a reaction against the industrial-era enclosed and authoritarian classroom. Although contemporary school architecture continues to incorporate and express ideas of openness, more research is needed about how teachers adapt to new and different built contexts. Our purpose is to identify teacher reaction to the affordances of open space learning environments. We outline a case study of teacher perceptions of working in new open plan school buildings. The case study demonstrates that affordances of open space classrooms include flexibility, visibility and scrutiny, and a de-emphasis of authority; teacher reactions included collective practice, team orientation, and increased interactions and a democratisation of authority. We argue that teacher reaction to the new open classroom features adaptability, intensification of day-to-day practice, and intraand inter-personal knowledge and skills.", "title": "" }, { "docid": "33b8417f25b56e5ea9944f9f33fc162c", "text": "Researchers have attempted to model information diffusion and topic trends and lifecycle on online social networks. They have investigated the role of content, social connections and communities, familiarity and behavioral similarity in this context. The current article presents a survey of representative models that perform topic analysis, capture information diffusion, and explore the properties of social connections in the context of online social networks. 
The article concludes with a set of outlines of open problems and possible directions of future research interest. This article is intended for researchers to identify the current literature, and explore possibilities to improve the art.", "title": "" }, { "docid": "1eee94436ff7c65b18908dab7fbfb1c6", "text": "Many efforts have been made in recent years to tackle the unconstrained face recognition challenge. For the benchmark of this challenge, the Labeled Faces in theWild (LFW) database has been widely used. However, the standard LFW protocol is very limited, with only 3,000 genuine and 3,000 impostor matches for classification. Today a 97% accuracy can be achieved with this benchmark, remaining a very limited room for algorithm development. However, we argue that this accuracy may be too optimistic because the underlying false accept rate may still be high (e.g. 3%). Furthermore, performance evaluation at low FARs is not statistically sound by the standard protocol due to the limited number of impostor matches. Thereby we develop a new benchmark protocol to fully exploit all the 13,233 LFW face images for large-scale unconstrained face recognition evaluation under both verification and open-set identification scenarios, with a focus at low FARs. Based on the new benchmark, we evaluate 21 face recognition approaches by combining 3 kinds of features and 7 learning algorithms. The benchmark results show that the best algorithm achieves 41.66% verification rates at FAR=0.1%, and 18.07% open-set identification rates at rank 1 and FAR=1%. Accordingly we conclude that the large-scale unconstrained face recognition problem is still largely unresolved, thus further attention and effort is needed in developing effective feature representations and learning algorithms. We thereby release a benchmark tool to advance research in this field.", "title": "" }, { "docid": "d735547a7b3a79f5935f15da3e51f361", "text": "We propose a new approach for locating forged regions in a video using correlation of noise residue. In our method, block-level correlation values of noise residual are extracted as a feature for classification. We model the distribution of correlation of temporal noise residue in a forged video as a Gaussian mixture model (GMM). We propose a two-step scheme to estimate the model parameters. Consequently, a Bayesian classifier is used to find the optimal threshold value based on the estimated parameters. Two video inpainting schemes are used to simulate two different types of forgery processes for performance evaluation. Simulation results show that our method achieves promising accuracy in video forgery detection.", "title": "" }, { "docid": "dc810b43c71ab591981454ad20e34b7a", "text": "This paper proposes a real-time variable-Q non-stationary Gabor transform (VQ-NSGT) system for speech pitch shifting. The system allows for time-frequency representations of speech on variable-Q (VQ) with perfect reconstruction and computational efficiency. The proposed VQ-NSGT phase vocoder can be used for pitch shifting by simple frequency translation (transposing partials along the frequency axis) instead of spectral stretching in frequency domain by the Fourier transform. In order to retain natural sounding pitch shifted speech, a hybrid of smoothly varying Q scheme is used to retain the formant structure of the original signal at both low and high frequencies. Moreover, the preservation of transients of speech are improved due to the high time resolution of VQ-NSGT at high frequencies. 
A sliced VQ-NSGT is used to retain inter-partials phase coherence by synchronized overlap-add method. Therefore, the proposed system lends itself to real-time processing while retaining the formant structure of the original signal and inter-partial phase coherence. The simulation results showed that the proposed approach is suitable for pitch shifting of both speech and music signals.", "title": "" }, { "docid": "ff67540fcba29de05415c77744d3a21d", "text": "Using Youla Parametrization and Linear Matrix Inequalities (LMI) a Multiobjective Robust Control (MRC) design for continuous linear time invariant (LTI) systems with bounded uncertainties is described. The design objectives can be a combination of H∞-, H2-performances, constraints on the control signal, etc.. Based on an initial stabilizing controller all stabilizing controllers for the uncertain system can be described by the Youla parametrization. Given this representation, all objectives can be formulated by independent Lyapunov functions, increasing the degree of freedom for the control design.", "title": "" }, { "docid": "67e2bbbbd0820bb47f04258eb4917cc1", "text": "One of the major differences between markets that follow a \" sharing economy \" paradigm and traditional two-sided markets is that the supply side in the sharing economy often includes individual nonprofessional decision makers, in addition to firms and professional agents. Using a data set of prices and availability of listings on Airbnb, we find that there exist substantial differences in the operational and financial performance of professional and nonprofessional hosts. In particular, properties managed by professional hosts earn 16.9% more in daily revenue, have 15.5% higher occupancy rates, and are 13.6% less likely to exit the market compared with properties owned by nonprofessional hosts, while controlling for property and market characteristics. We demonstrate that these performance differences between professionals and nonprofessionals can be partly explained by pricing inefficiencies. Specifically, we provide empirical evidence that nonprofes-sional hosts are less likely to offer different rates across stay dates based on the underlying demand patterns, such as those created by major holidays and conventions. We develop a parsimonious model to analyze the implications of having two such different host groups for a profit-maximizing platform operator and for a social planner. While a profit-maximizing platform operator should charge lower prices to nonprofessional hosts, a social planner would charge the same prices to professionals and nonprofessionals.", "title": "" }, { "docid": "3250454b6363a9bb49590636d9843a92", "text": "A low precision deep neural network training technique for producing sparse, ternary neural networks is presented. The technique incorporates hardware implementation costs during training to achieve significant model compression for inference. Training involves three stages: network training using L2 regularization and a quantization threshold regularizer, quantization pruning, and finally retraining. Resulting networks achieve improved accuracy, reduced memory footprint and reduced computational complexity compared with conventional methods, on MNIST and CIFAR10 datasets. 
Our networks are up to 98% sparse and 5 & 11 times smaller than equivalent binary and ternary models, translating to significant resource and speed benefits for hardware implementations.", "title": "" }, { "docid": "87e52d72533c26f59af13aaea0ea4b7f", "text": "This study investigated the work role attachment and retirement intentions of public school teachers in Calabar, Nigeria. It was motivated by the observation that most public school workers lack plans for retirement and as such do not prepare for it until it suddenly dawns on them. Few empirical studies were reviewed. Questionnaire was the main instrument used for data collection from a sample of 200 teachers. Independent t-test was used to test the stated hypotheses at 0.05 level of significance. Results showed that the committed/attached/involved workers have retirement intention to take a part-time job after retirement. The uncommitted/unattached/uninvolved workers have intention to retire earlier than those attached to their work. It was recommended that pre-retirement counselling should be adopted to assist teachers to develop good retirement plans.", "title": "" }, { "docid": "c828195cfc88abd598d1825f69932eb0", "text": "The central purpose of passive signal intercept receivers is to perform automatic categorization of unknown radar signals. Currently, there is an urgent need to develop intelligent classification algorithms for these devices due to emerging complexity of radar waveforms. Especially multifunction radars (MFRs) capable of performing several simultaneous tasks by utilizing complex, dynamically varying scheduled waveforms are a major challenge for automatic pattern classification systems. To assist recognition of complex radar emissions in modern intercept receivers, we have developed a novel method to recognize dynamically varying pulse repetition interval (PRI) modulation patterns emitted by MFRs. We use robust feature extraction and classifier design techniques to assist recognition in unpredictable real-world signal environments. We classify received pulse trains hierarchically which allows unambiguous detection of the subpatterns using a sliding window. Accuracy, robustness and reliability of the technique are demonstrated with extensive simulations using both static and dynamically varying PRI modulation patterns.", "title": "" }, { "docid": "b23d73e29fc205df97f073eb571a2b47", "text": "In this paper, we study two different trajectory planning problems for robotmanipulators. In the first case, the end-effector of the robot is constrained to move along a prescribed path in the workspace, whereas in the second case, the trajectory of the end-effector has to be determined in the presence of obstacles. Constraints of this type are called holonomic constraints. Both problems have been solved as optimal control problems. Given the dynamicmodel of the robotmanipulator, the initial state of the system, some specifications about the final state and a set of holonomic constraints, one has to find the trajectory and the actuator torques that minimize the energy consumption during the motion. The presence of holonomic constraints makes the optimal control problem particularly difficult to solve. Our method involves a numerical resolution of a reformulation of the constrained optimal control problem into an unconstrained calculus of variations problem in which the state space constraints and the dynamic equations, also regarded as constraints, are treated by means of special derivative multipliers. 
We solve the resulting calculus of variations problem using a numerical approach based on the Euler–Lagrange necessary condition in the integral form in which time is discretized and admissible variations for each variable are approximated using a linear combination of piecewise continuous basis functions of time. The use of the Euler–Lagrange necessary condition in integral form avoids the need for numerical corner conditions and the necessity of patching together solutions between corners. In this way, a general method for the solution of constrained optimal control problems is obtained in which holonomic constraints can be easily treated. Numerical results of the application of this method to trajectory planning of planar horizontal robot manipulators with two revolute joints are reported. © 2011 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "0bcec8496b655fffa3591d36fbd5c230", "text": "We propose a novel approach to addressing the adaptation effectiveness issue in parameter adaptation for deep neural network (DNN) based acoustic models for automatic speech recognition by adding one or more small auxiliary output layers modeling broad acoustic units, such as mono-phones or tied-state (often called senone) clusters. In scenarios with a limited amount of available adaptation data, most senones are usually rarely seen or not observed, and consequently the ability to model them in a new condition is often not fully exploited. With the original senone classification task as the primary task, and adding auxiliary mono-phone/senone-cluster classification as the secondary tasks, multi-task learning (MTL) is employed to adapt the DNN parameters. With the proposed MTL adaptation framework, we improve the learning ability of the original DNN structure, then enlarge the coverage of the acoustic space to deal with the unseen senone problem, and thus enhance the discrimination power of the adapted DNN models. Experimental results on the 20,000-word open vocabulary WSJ task demonstrate that the proposed framework consistently outperforms the conventional linear hidden layer adaptation schemes without MTL by providing 3.2% relative word error rate reduction (WERR) with only 1 single adaptation utterance, and 10.7% WERR with 40 adaptation utterances against the un-adapted DNN models.", "title": "" }, { "docid": "2afcc7c1fb9dadc3d46743c991e15bac", "text": "This paper describes the design of a robot head, developed in the framework of the RobotCub project. This project goals consists on the design and construction of a humanoid robotic platform, the iCub, for studying human cognition. The final platform would be approximately 90 cm tall, with 23 kg and with a total number of 53 degrees of freedom. For its size, the iCub is the most complete humanoid robot currently being designed, in terms of kinematic complexity. The eyes can also move, as opposed to similarly sized humanoid platforms. Specifications are made based on biological anatomical and behavioral data, as well as tasks constraints. Different concepts for the neck design (flexible, parallel and serial solutions) are analyzed and compared with respect to the specifications. 
The eye structure and the proprioceptive sensors are presented, together with some discussion of preliminary work on the face design", "title": "" }, { "docid": "a79c65e76da81044ee7e81fc40fe5f8e", "text": "Most of the equipment required is readily available in most microwave labs: a vector network analyzer, a microwave signal generator, and, of course, a sampling oscilloscope. In this paper, the authors summarize many of the corrections discussed in \" Terminology for high-speed sampling-oscilloscope calibration\" [Williams et al., 2006] and \"Magnitude and phase calibrations for RF, microwave, and high-speed digital signal measurements\" [Remley and Hale, 2007] that are necessary for metrology-grade measurements and Illustrate the application of these oscilloscopes to the characterization of microwave signals.", "title": "" }, { "docid": "25779dfc55dc29428b3939bb37c47d50", "text": "Human daily activity recognition using mobile personal sensing technology plays a central role in the field of pervasive healthcare. One major challenge lies in the inherent complexity of human body movements and the variety of styles when people perform a certain activity. To tackle this problem, in this paper, we present a novel human activity recognition framework based on recently developed compressed sensing and sparse representation theory using wearable inertial sensors. Our approach represents human activity signals as a sparse linear combination of activity signals from all activity classes in the training set. The class membership of the activity signal is determined by solving a l1 minimization problem. We experimentally validate the effectiveness of our sparse representation-based approach by recognizing nine most common human daily activities performed by 14 subjects. Our approach achieves a maximum recognition rate of 96.1%, which beats conventional methods based on nearest neighbor, naive Bayes, and support vector machine by as much as 6.7%. Furthermore, we demonstrate that by using random projection, the task of looking for “optimal features” to achieve the best activity recognition performance is less important within our framework.", "title": "" }, { "docid": "c4aafcc0a98882de931713359e55a04a", "text": "We present a computer vision tool that analyses video from a CCTV system installed on fishing trawlers to monitor discarded fish catch. The system aims to support expert observers who review the footage and verify numbers, species and sizes of discarded fish. The operational environment presents a significant challenge for these tasks. Fish are processed below deck under fluorescent lights, they are randomly oriented and there are multiple occlusions. The scene is unstructured and complicated by the presence of fishermen processing the catch. We describe an approach to segmenting the scene and counting fish that exploits the N4-Fields algorithm. We performed extensive tests of the algorithm on a data set comprising 443 frames from 6 belts. Results indicate the relative count error (for individual fish) ranges from 2% to 16%. We believe this is the first system that is able to handle footage from operational trawlers.", "title": "" }, { "docid": "5546cbb6fac77d2d9fffab8ba0a50ed8", "text": "The next-generation electric power systems (smart grid) are studied intensively as a promising solution for energy crisis. One important feature of the smart grid is the integration of high-speed, reliable and secure data communication networks to manage the complex power systems effectively and intelligently. 
We provide in this paper a comprehensive survey on the communication architectures in the power systems, including the communication network compositions, technologies, functions, requirements, and research challenges. As these communication networks are responsible for delivering power system related messages, we discuss specifically the network implementation considerations and challenges in the power system settings. This survey attempts to summarize the current state of research efforts in the communication networks of smart grid, which may help us identify the research problems in the continued studies. 2011 Elsevier B.V. All rights reserved.", "title": "" } ]
scidocsrr
f461458407838e67950f57dc87fdc98a
Like It or Not: A Survey of Twitter Sentiment Analysis Methods
[ { "docid": "355fca41993ea19b08d2a9fc19e25722", "text": "People and companies selling goods or providing services have always desired to know what people think about their products. The number of opinions on the Web has significantly increased with the emergence of microblogs. In this paper we present a novel method for sentiment analysis of a text that allows the recognition of opinions in microblogs which are connected to a particular target or an entity. This method differs from other approaches in utilizing appraisal theory, which we employ for the analysis of microblog posts. The results of the experiments we performed on Twitter showed that our method improves sentiment classification and is feasible even for such specific content as presented on microblogs.", "title": "" } ]
[ { "docid": "460a296de1bd13378d71ce19ca5d807a", "text": "Many books discuss applications of data mining. For financial data analysis and financial modeling, see Benninga and Czaczkes [BC00] and Higgins [Hig03]. For retail data mining and customer relationship management, see books by Berry and Linoff [BL04] and Berson, Smith, and Thearling [BST99], and the article by Kohavi [Koh01]. For telecommunication-related data mining, see the book by Mattison [Mat97]. Chen, Hsu, and Dayal [CHD00] reported their work on scalable telecommunication tandem traffic analysis under a data warehouse/OLAP framework. For bioinformatics and biological data analysis, there are a large number of introductory references and textbooks. An introductory overview of bioinformatics for computer scientists was presented by Cohen [Coh04]. Recent textbooks on bioinformatics include Krane and Raymer [KR03], Jones and Pevzner [JP04], Durbin, Eddy, Krogh and Mitchison [DEKM98], Setubal and Meidanis [SM97], Orengo, Jones, and Thornton [OJT03], and Pevzner [Pev03]. Summaries of biological data analysis methods and algorithms can also be found in many other books, such as Gusfield [Gus97], Waterman [Wat95], Baldi and Brunak [BB01], and Baxevanis and Ouellette [BO04]. There are many books on scientific data analysis, such as Grossman, Kamath, Kegelmeyer, et al. (eds.) [GKK01]. For geographic data mining, see the book edited by Miller and Han [MH01]. Valdes-Perez [VP99] discusses the principles of human-computer collaboration for knowledge discovery in science. For intrusion detection, see Barbará [Bar02] and Northcutt and Novak [NN02].", "title": "" }, { "docid": "4933a947f4b0b9a0ca506d50f2010eaf", "text": "For integers <i>k</i>≥1 and <i>n</i>≥2<i>k</i>+1, the <em>Kneser graph</em> <i>K</i>(<i>n</i>,<i>k</i>) is the graph whose vertices are the <i>k</i>-element subsets of {1,…,<i>n</i>} and whose edges connect pairs of subsets that are disjoint. The Kneser graphs of the form <i>K</i>(2<i>k</i>+1,<i>k</i>) are also known as the <em>odd graphs</em>. We settle an old problem due to Meredith, Lloyd, and Biggs from the 1970s, proving that for every <i>k</i>≥3, the odd graph <i>K</i>(2<i>k</i>+1,<i>k</i>) has a Hamilton cycle. This and a known conditional result due to Johnson imply that all Kneser graphs of the form <i>K</i>(2<i>k</i>+2<sup><i>a</i></sup>,<i>k</i>) with <i>k</i>≥3 and <i>a</i>≥0 have a Hamilton cycle. We also prove that <i>K</i>(2<i>k</i>+1,<i>k</i>) has at least 2<sup>2<sup><i>k</i>−6</sup></sup> distinct Hamilton cycles for <i>k</i>≥6. Our proofs are based on a reduction of the Hamiltonicity problem in the odd graph to the problem of finding a spanning tree in a suitably defined hypergraph on Dyck words.", "title": "" }, { "docid": "1f1a8f5f7612e131ce7b99c13aa4d5db", "text": "Background subtraction can be treated as the binary classification problem of highlighting the foreground region in a video whilst masking the background region, and has been broadly applied in various vision tasks such as video surveillance and traffic monitoring. However, it still remains a challenging task due to complex scenes and for lack of the prior knowledge about the temporal information. In this paper, we propose a novel background subtraction model based on 3D convolutional neural networks (3D CNNs) which combines temporal and spatial information to effectively separate the foreground from all the sequences in an end-to-end manner. 
Different from conventional models, we view background subtraction as three-class classification problem, i.e., the foreground, the background and the boundary. This design can obtain more reasonable results than existing baseline models. Experiments on the Change Detection 2012 dataset verify the potential of our model in both quantity and quality.", "title": "" }, { "docid": "afc5259cfa23aa94dd032127d147dde9", "text": "This paper is a reflection of our experience with the specification and subsequent execution of model transformations in the QVT core and Relations languages. Since this technology for executing transformations written in high-level, declarative specification languages is of very recent date, we observe that there is little knowledge available on how to write such declarative model transformations. Consequently, there is a need for a body of knowledge on transformation engineering. With this paper we intend to make an initial contribution to this emerging discipline. Based on our experiences we propose a number of useful design patterns for transformation specification. In addition we provide a method for specifying such transformation patterns in QVT, such that others can add their own patterns to a catalogue and the body of knowledge can grow as experience is built up. Finally, we illustrate how these patterns can be used in the specification of complex transformations.", "title": "" }, { "docid": "4eb937f806ca01268b5ed1348d0cc40c", "text": "The paradigms of transformational planning, case-based planning, and plan debugging all involve a process known as plan adaptation | modifying or repairing an old plan so it solves a new problem. In this paper we provide a domain-independent algorithm for plan adaptation, demonstrate that it is sound, complete, and systematic, and compare it to other adaptation algorithms in the literature. Our approach is based on a view of planning as searching a graph of partial plans. Generative planning starts at the graph's root and moves from node to node using planre nement operators. In planning by adaptation, a library plan|an arbitrary node in the plan graph|is the starting point for the search, and the plan-adaptation algorithm can apply both the same re nement operators available to a generative planner and can also retract constraints and steps from the plan. Our algorithm's completeness ensures that the adaptation algorithm will eventually search the entire graph and its systematicity ensures that it will do so without redundantly searching any parts of the graph.", "title": "" }, { "docid": "4dd28201b87acf7705ea91f9e9e4a330", "text": "Because individual crowd workers often exhibit high variance in annotation accuracy, we often ask multiple crowd workers to label each example to infer a single consensus label. While simple majority vote computes consensus by equally weighting each worker’s vote, weighted voting assigns greater weight to more accurate workers, where accuracy is estimated by inner-annotator agreement (unsupervised) and/or agreement with known expert labels (supervised). In this paper, we investigate the annotation cost vs. consensus accuracy benefit from increasing the amount of expert supervision. To maximize benefit from supervision, we propose a semi-supervised approach which infers consensus labels using both labeled and unlabeled examples. We compare our semi-supervised approach with several existing unsupervised and supervised baselines, evaluating on both synthetic data and Amazon Mechanical Turk data. 
Results show (a) a very modest amount of supervision can provide significant benefit, and (b) consensus accuracy from full supervision with a large amount of labeled data is matched by our semi-supervised approach with much less supervision.", "title": "" }, { "docid": "a52a90bb69f303c4a31e4f24daf609e6", "text": "The effects of Arctium lappa L. (root) on anti-inflammatory and free radical scavenger activity were investigated. Subcutaneous administration of A. lappa crude extract significantly decreased carrageenan-induced rat paw edema. When simultaneously treated with CCl4, it produced pronounced activities against CCl4-induced acute liver damage. The free radical scavenging activity of its crude extract was also examined by means of an electron spin resonance (ESR) spectrometer. The IC50 of A. lappa extract on superoxide and hydroxyl radical scavenger activity was 2.06 mg/ml and 11.8 mg/ml, respectively. These findings suggest that Arctium lappa possess free radical scavenging activity. The inhibitory effects on carrageenan-induced paw edema and CCl4-induced hepatotoxicity could be due to the scavenging effect of A. lappa.", "title": "" }, { "docid": "a078933ffbb2f0488b3b425b78fb7dd0", "text": "Semantic Role Labeling (SRL) has become one of the standard tasks of natural language processing and proven useful as a source of information for a number of other applications. We address the problem of transferring an SRL model from one language to another using a shared feature representation. This approach is then evaluated on three language pairs, demonstrating competitive performance as compared to a state-of-the-art unsupervised SRL system and a cross-lingual annotation projection baseline. We also consider the contribution of different aspects of the feature representation to the performance of the model and discuss practical applicability of this method. 1 Background and Motivation Semantic role labeling has proven useful in many natural language processing tasks, such as question answering (Shen and Lapata, 2007; Kaisser and Webber, 2007), textual entailment (Sammons et al., 2009), machine translation (Wu and Fung, 2009; Liu and Gildea, 2010; Gao and Vogel, 2011) and dialogue systems (Basili et al., 2009; van der Plas et al., 2009). Multiple models have been designed to automatically predict semantic roles, and a considerable amount of data has been annotated to train these models, if only for a few more popular languages. As the annotation is costly, one would like to leverage existing resources to minimize the human effort required to construct a model for a new language. A number of approaches to the construction of semantic role labeling models for new languages have been proposed. On one end of the scale is unsupervised SRL, such as Grenager and Manning (2006), which requires some expert knowledge, but no labeled data. It clusters together arguments that should bear the same semantic role, but does not assign a particular role to each cluster. On the other end is annotating a new dataset from scratch. There are also intermediate options, which often make use of similarities between languages. This way, if an accurate model exists for one language, it should help simplify the construction of a model for another, related language. The approaches in this third group often use parallel data to bridge the gap between languages. Cross-lingual annotation projection systems (Padó and Lapata, 2009), for example, propagate information directly via word alignment links. 
However, they are very sensitive to the quality of parallel data, as well as the accuracy of a source-language model on it. An alternative approach, known as cross-lingual model transfer, or cross-lingual model adaptation, consists of modifying a source-language model to make it directly applicable to a new language. This usually involves constructing a shared feature representation across the two languages. McDonald et al. (2011) successfully apply this idea to the transfer of dependency parsers, using part-of-speech tags as the shared representation of words. A later extension of Täckström et al. (2012) enriches this representation with cross-lingual word clusters, considerably improving the performance. In the case of SRL, a shared representation that is purely syntactic is likely to be insufficient, since structures with different semantics may be realized by the same syntactic construct, for example “in August” vs “in Britain”. However with the help of recently introduced cross-lingual word represen-", "title": "" }, { "docid": "11ad0993b62e016175638d80f9acd694", "text": "Progressive macular hypomelanosis (PMH) is a skin disorder that is characterized by hypopigmented macules and usually seen in young adults. The skin microbiota, in particular the bacterium Propionibacterium acnes, is suggested to play a role. Here, we compared the P. acnes population of 24 PMH lesions from eight patients with corresponding nonlesional skin of the patients and matching control samples from eight healthy individuals using an unbiased, culture-independent next-generation sequencing approach. We also compared the P. acnes population before and after treatment with a combination of lymecycline and benzoylperoxide. We found an association of one subtype of P. acnes, type III, with PMH. This type was predominant in all PMH lesions (73.9% of reads in average) but only detected as a minor proportion in matching control samples of healthy individuals (14.2% of reads in average). Strikingly, successful PMH treatment is able to alter the composition of the P. acnes population by substantially diminishing the proportion of P. acnes type III. Our study suggests that P. acnes type III may play a role in the formation of PMH. Furthermore, it sheds light on substantial differences in the P. acnes phylotype distribution between the upper and lower back and abdomen in healthy individuals.", "title": "" }, { "docid": "d4acd79e2fdbc9b87b2dbc6ebfa2dd43", "text": "Airbnb, an online marketplace for accommodations, has experienced a staggering growth accompanied by intense debates and scattered regulations around the world. Current discourses, however, are largely focused on opinions rather than empirical evidences. Here, we aim to bridge this gap by presenting the first large-scale measurement study on Airbnb, using a crawled data set containing 2.3 million listings, 1.3 million hosts, and 19.3 million reviews. We measure several key characteristics at the heart of the ongoing debate and the sharing economy. Among others, we find that Airbnb has reached a global yet heterogeneous coverage. The majority of its listings across many countries are entire homes, suggesting that Airbnb is actually more like a rental marketplace rather than a spare-room sharing platform. Analysis on star-ratings reveals that there is a bias toward positive ratings, amplified by a bias toward using positive words in reviews. The extent of such bias is greater than Yelp reviews, which were already shown to exhibit a positive bias. 
We investigate a key issue - commercial hosts who own multiple listings on Airbnb - repeatedly discussed in the current debate. We find that their existence is prevalent, they are early movers towards joining Airbnb, and their listings are disproportionately entire homes and located in the US. Our work advances the current understanding of how Airbnb is being used and may serve as an independent and empirical reference to inform the debate.", "title": "" }, { "docid": "887665ab7f043987b3373628d9cf6021", "text": "In isolated converter, transformer is a main path of common mode current. Methods of how to reduce the noise through transformer have been widely studied. One effective technique is using shield between primary and secondary winding. In this paper, EMI noise transferring path and EMI model for typical isolated converters are analyzed. And the survey about different methods of shielding is discussed. Their pros and cons are analyzed. Then the balance concept is introduced and our proposed double shielding using balance concept for wire winding transformer is raised. It can control the parasitic capacitance accurately and is easy to manufacturing. Next, a newly proposed single layer shielding for PCB winding transformer is discussed. The experiment results are provided to verify the methods.", "title": "" }, { "docid": "0281c96d3990df1159d58c6b5707b1ad", "text": "In the Big Data community, MapReduce has been seen as one of the key enabling approaches for meeting continuously increasing demands on computing resources imposed by massive data sets. The reason for this is the high scalability of the MapReduce paradigm which allows for massively parallel and distributed execution over a large number of computing nodes. This paper identifies MapReduce issues and challenges in handling Big Data with the objective of providing an overview of the field, facilitating better planning and management of Big Data projects, and identifying opportunities for future research in this field. The identified challenges are grouped into four main categories corresponding to Big Data tasks types: data storage (relational databases and NoSQL stores), Big Data analytics (machine learning and interactive analytics), online processing, and security and privacy. Moreover, current efforts aimed at improving and extending MapReduce to address identified challenges are presented. Consequently, by identifying issues and challenges MapReduce faces when handling Big Data, this study encourages future Big Data research.", "title": "" }, { "docid": "a354949d97de673e71510618a604e264", "text": "Fast Magnetic Resonance Imaging (MRI) is highly in demand for many clinical applications in order to reduce the scanning cost and improve the patient experience. This can also potentially increase the image quality by reducing the motion artefacts and contrast washout. However, once an image field of view and the desired resolution are chosen, the minimum scanning time is normally determined by the requirement of acquiring sufficient raw data to meet the Nyquist–Shannon sampling criteria. Compressive Sensing (CS) theory has been perfectly matched to the MRI scanning sequence design with much less required raw data for the image reconstruction. 
Inspired by recent advances in deep learning for solving various inverse problems, we propose a conditional Generative Adversarial Networks-based deep learning framework for de-aliasing and reconstructing MRI images from highly undersampled data with great promise to accelerate the data acquisition process. By coupling an innovative content loss with the adversarial loss our de-aliasing results are more realistic. Furthermore, we propose a refinement learning procedure for training the generator network, which can stabilise the training with fast convergence and less parameter tuning. We demonstrate that the proposed framework outperforms state-of-the-art CS-MRI methods, in terms of reconstruction error and perceptual image quality. In addition, our method can reconstruct each image in 0.22ms–0.37ms, which is promising for real-time applications.", "title": "" }, { "docid": "4997de0d1663a8362fb47abcf9e34df9", "text": "Our goal is to segment a video sequence into moving objects and the world scene. In recent work, spectral embedding of point trajectories based on 2D motion cues accumulated from their lifespans, has shown to outperform factorization and per frame segmentation methods for video segmentation. The scale and kinematic nature of the moving objects and the background scene determine how close or far apart trajectories are placed in the spectral embedding. Such density variations may confuse clustering algorithms, causing over-fragmentation of object interiors. Therefore, instead of clustering in the spectral embedding, we propose detecting discontinuities of embedding density between spatially neighboring trajectories. Detected discontinuities are strong indicators of object boundaries and thus valuable for video segmentation. We propose a novel embedding discretization process that recovers from over-fragmentations by merging clusters according to discontinuity evidence along inter-cluster boundaries. For segmenting articulated objects, we combine motion grouping cues with a center-surround saliency operation, resulting in “context-aware”, spatially coherent, saliency maps. Figure-ground segmentation obtained from saliency thresholding, provides object connectedness constraints that alter motion based trajectory affinities, by keeping articulated parts together and separating disconnected in time objects. Finally, we introduce Gabriel graphs as effective per frame superpixel maps for converting trajectory clustering to dense image segmentation. Gabriel edges bridge large contour gaps via geometric reasoning without over-segmenting coherent image regions. We present experimental results of our method that outperform the state-of-the-art in challenging motion segmentation datasets.", "title": "" }, { "docid": "774bf4b0a2c8fe48607e020da2737041", "text": "A class of three-dimensional planar arrays in substrate integrated waveguide (SIW) technology is proposed, designed and demonstrated with 8 × 16 elements at 35 GHz for millimeter-wave imaging radar system applications. Endfire element is generally chosen to ensure initial high gain and broadband characteristics for the array. Fermi-TSA (tapered slot antenna) structure is used as element to reduce the beamwidth. Corrugation is introduced to reduce the resulting antenna physical width without degradation of performance. The achieved measured gain in our demonstration is about 18.4 dBi. A taper shaped air gap in the center is created to reduce the coupling between two adjacent elements. 
An SIW H-to-E-plane vertical interconnect is proposed in this three-dimensional architecture and optimized to connect eight 1 × 16 planar array sheets to the 1 × 8 final network. The overall architecture is exclusively fabricated by the conventional PCB process. Thus, the developed SIW feeder leads to a significant reduction in both weight and cost, compared to the metallic waveguide-based counterpart. A complete antenna structure is designed and fabricated. The planar array ensures a gain of 27 dBi with low SLL of 26 dB and beamwidth as narrow as 5.15 degrees in the E-plane and 6.20 degrees in the 45°-plane.", "title": "" }, { "docid": "3888dd754c9f7607d7a4cc2f4a436aac", "text": "We propose a distributed algorithm to estimate the 3D trajectories of multiple cooperative robots from relative pose measurements. Our approach leverages recent results [1] which show that the maximum likelihood trajectory is well approximated by a sequence of two quadratic subproblems. The main contribution of the present work is to show that these subproblems can be solved in a distributed manner, using the distributed Gauss-Seidel (DGS) algorithm. Our approach has several advantages. It requires minimal information exchange, which is beneficial in presence of communication and privacy constraints. It has an anytime flavor: after few iterations the trajectory estimates are already accurate, and they asymptotically convergence to the centralized estimate. The DGS approach scales well to large teams, and it has a straightforward implementation. We test the approach in simulations and field tests, demonstrating its advantages over related techniques.", "title": "" }, { "docid": "9420760d6945440048cee3566ce96699", "text": "In this work, we develop a computer vision based fall prevention system for hospital ward application. To prevent potential falls, once the event of patient get up from the bed is automatically detected, nursing staffs are alarmed immediately for assistance. For the detection task, we use a RGBD sensor (Microsoft Kinect). The geometric prior knowledge is exploited by identifying a set of task-specific feature channels, e.g., regions of interest. Extensive motion and shape features from both color and depth image sequences are extracted. Features from multiple modalities and channels are fused via a multiple kernel learning framework for training the event detector. Experimental results demonstrate the high accuracy and efficiency achieved by the proposed system.", "title": "" }, { "docid": "9d8f18265d729a98553f89a8b337e6a0", "text": "Scalable Network Forensics by Matthias Vallentin Doctor of Philosophy in Computer Science University of California, Berkeley Professor Vern Paxson, Chair Network forensics and incident response play a vital role in site operations, but for large networks can pose daunting difficulties to cope with the ever-growing volume of activity and resulting logs. On the one hand, logging sources can generate tens of thousands of events per second, which a system supporting comprehensive forensics must somehow continually ingest. On the other hand, operators greatly benefit from interactive exploration of disparate types of activity when analyzing an incident, which often leaves network operators scrambling to ferret out answers to key questions: How did the attackers get in? What did they do once inside? Where did they come from? What activity patterns serve as indicators reflecting their presence? How do we prevent this attack in the future? 
Operators can only answer such questions by drawing upon high-quality descriptions of past activity recorded over extended time. A typical analysis starts with a narrow piece of intelligence, such as a local system exhibiting questionable behavior, or a report from another site describing an attack they detected. The analyst then tries to locate the described behavior by examining past activity, often cross-correlating information of different types to build up additional context. Frequently, this process in turn produces new leads to explore iteratively (“peeling the onion”), continuing and expanding until ultimately the analyst converges on as complete of an understanding of the incident as they can extract from the available information. This process, however, remains manual and time-consuming, as no single storage system efficiently integrates the disparate sources of data that investigations often involve. While standard Security Information and Event Management (SIEM) solutions aggregate logs from different sources into a single database, their data models omit crucial semantics, and they struggle to scale to the data rates that large-scale environments require.", "title": "" }, { "docid": "b7da2182bbdf69c46ffba20b272fab02", "text": "Social Media is playing a key role in today's society. Many of the events that are taking place in diverse human activities could be explained by the study of these data. Big Data is a relatively new parading in Computer Science that is gaining increasing interest by the scientific community. Big Data Predictive Analytics is a Big Data discipline that is mostly used to analyze what is in the huge amounts of data and then perform predictions based on such analysis using advanced mathematics and computing techniques. The study of Social Media Data involves disciplines like Natural Language Processing, by the integration of this area to academic studies, useful findings have been achieved. Social Network Rating Systems are online platforms that allow users to know about goods and services, the way in how users review and rate their experience is a field of evolving research. This paper presents a deep investigation in the state of the art of these areas to discover and analyze the current status of the research that has been developed so far by academics of diverse background.", "title": "" } ]
scidocsrr
803392004352b72103594ea25acf9906
Controller design for a bipedal walking robot using variable stiffness actuators
[ { "docid": "2997be0d8b1f7a183e006eba78135b13", "text": "The basic mechanics of human locomotion are associated with vaulting over stiff legs in walking and rebounding on compliant legs in running. However, while rebounding legs well explain the stance dynamics of running, stiff legs cannot reproduce that of walking. With a simple bipedal spring-mass model, we show that not stiff but compliant legs are essential to obtain the basic walking mechanics; incorporating the double support as an essential part of the walking motion, the model reproduces the characteristic stance dynamics that result in the observed small vertical oscillation of the body and the observed out-of-phase changes in forward kinetic and gravitational potential energies. Exploring the parameter space of this model, we further show that it not only combines the basic dynamics of walking and running in one mechanical system, but also reveals these gaits to be just two out of the many solutions to legged locomotion offered by compliant leg behaviour and accessed by energy or speed.", "title": "" } ]
[ { "docid": "8a9489ed62cfa4169b53647b7a51d979", "text": "We present MAESTRO, a framework to describe and analyze CNN dataflows, and predict performance and energy-efficiency when running neural network layers across various hardware configurations. This includes two components: (i) a concise language to describe arbitrary dataflows and (ii) and analysis framework that accepts the dataflow description, hardware resource description, and DNN layer description as inputs and generates buffer requirements, buffer access counts, network-on-chip (NoC) bandwidth requirements, and roofline performance information. We demonstrate both components across several dataflows as case studies.", "title": "" }, { "docid": "42faf2c0053c9f6a0147fc66c8e4c122", "text": "IN 1921, Gottlieb's discovery of the epithelial attachment of the gingiva opened new horizons which served as the basis for a better understanding of the biology of the dental supporting tissues in health and disease. Three years later his pupils, Orban and Kohler (1924), undertook the task of measuring the epithelial attachment as well as the surrounding tissue relations during the four phases of passive eruption of the tooth. Gottlieb and Orban's descriptions of the epithelial attachment unveiled the exact morphology of this epithelial structure, and clarified the relation of this", "title": "" }, { "docid": "1ceab925041160f17163940360354c55", "text": "A complete reconstruction of D.H. Lehmer’s ENIAC set-up for computing the exponents of p modulo 2 is given. This program served as an early test program for the ENIAC (1946). The reconstruction illustrates the difficulties of early programmers to find a way between a man operated and a machine operated computation. These difficulties concern both the content level (the algorithm) and the formal level (the logic of sequencing operations).", "title": "" }, { "docid": "f734b6fc215e8da00641820a2b627be9", "text": "We propose a novel traffic sign detection system that simultaneously estimates the location and precise boundary of traffic signs using convolutional neural network (CNN). Estimating the precise boundary of traffic signs is important in navigation systems for intelligent vehicles where traffic signs can be used as 3-D landmarks for road environment. Previous traffic sign detection systems, including recent methods based on CNN, only provide bounding boxes of traffic signs as output, and thus requires additional processes such as contour estimation or image segmentation to obtain the precise boundary of signs. In this paper, the boundary estimation of traffic sign is formulated as 2-D pose and shape class prediction problem, and this is effectively solved by a single CNN. With the predicted 2-D pose and the shape class of a target traffic sign in the input, we estimate the actual boundary of the target sign by projecting the boundary of a corresponding template sign image into the input image plane. By formulating the boundary estimation problem as a CNN-based pose and shape prediction task, our method is end-to-end trainable, and more robust to occlusion and small targets than other boundary estimation methods that rely on contour estimation or image segmentation. 
With our architectural optimization of the CNN-based traffic sign detection network, the proposed method shows a detection frame rate higher than seven frames/second while providing highly accurate and robust traffic sign detection and boundary estimation results on a low-power mobile platform.", "title": "" }, { "docid": "b5bb280c7ce802143a86b9261767d9a6", "text": "Existing person re-identification (re-id) benchmarks and algorithms mainly focus on matching cropped pedestrian images between queries and candidates. However, it is different from real-world scenarios where the annotations of pedestrian bounding boxes are unavailable and the target person needs to be found from whole images. To close the gap, we investigate how to localize and match query persons from the scene images without relying on the annotations of candidate boxes. Instead of breaking it down into two separate tasks—pedestrian detection and person re-id, we propose an end-to-end deep learning framework to jointly handle both tasks. A random sampling softmax loss is proposed to effectively train the model under the supervision of sparse and unbalanced labels. On the other hand, existing benchmarks are small in scale and the samples are collected from a few fixed camera views with low scene diversities. To address this issue, we collect a largescale and scene-diversified person search dataset, which contains 18,184 images, 8,432 persons, and 99,809 annotated bounding boxes1. We evaluate our approach and other baselines on the proposed dataset, and study the influence of various factors. Experiments show that our method achieves the best result.", "title": "" }, { "docid": "1768ecf6a2d8a42ea701d7f242edb472", "text": "Satisfaction prediction is one of the prime concerns in search performance evaluation. It is a non-trivial task for two major reasons: (1) The definition of satisfaction is rather subjective and different users may have different opinions in satisfaction judgement. (2) Most existing studies on satisfaction prediction mainly rely on users' click-through or query reformulation behaviors but there are many sessions without such kind of interactions. To shed light on these research questions, we construct an experimental search engine that could collect users' satisfaction feedback as well as mouse click-through/movement data. Different from existing studies, we compare for the first time search users' and external assessors' opinions on satisfaction. We find that search users pay more attention to the utility of results while external assessors emphasize on the efforts spent in search sessions. Inspired by recent studies in predicting result relevance based on mouse movement patterns (namely motifs), we propose to estimate the utilities of search results and the efforts in search sessions with motifs extracted from mouse movement data on search result pages (SERPs). Besides the existing frequency-based motif selection method, two novel selection strategies (distance-based and distribution-based) are also adopted to extract high quality motifs for satisfaction prediction. 
Experimental results on over 1,000 user sessions show that the proposed strategies outperform existing methods and also have promising generalization capability for different users and queries.", "title": "" }, { "docid": "7a72f69ad4926798e12f6fa8e598d206", "text": "In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter’s field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed ‘DeepLabv3’ system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.", "title": "" }, { "docid": "860bfe5785eaa759036121e63369c0e8", "text": "In this paper, a robust high speed low input impedance CMOS current comparator is proposed. The circuit uses modified Wilson current-mirror to perform a current subtraction. Negative feedback is employed to reduce input impedances of the circuit. The diode connected transistors of the same type (NMOS) are used at the output making the circuit immune to the process variation. HSPICE is used to verify the circuit performance and the results show the propagation delay of 1.67 nsec with an average power dissipation of 0.63 mW using a standard 0.5 /spl mu/m CMOS technology for an input current of /spl plusmn/0.1 /spl mu/A at the supply voltage of 3 V. The input impedances of the proposed current comparator are 123 /spl Omega/ and 126 /spl Omega/ while the maximum output voltage variation is only 1.9%.", "title": "" }, { "docid": "085ec38c3e756504be93ac0b94483cea", "text": "Low power wide area (LPWA) networks are making spectacular progress from design, standardization, to commercialization. At this time of fast-paced adoption, it is of utmost importance to analyze how well these technologies will scale as the number of devices connected to the Internet of Things inevitably grows. In this letter, we provide a stochastic geometry framework for modeling the performance of a single gateway LoRa network, a leading LPWA technology. Our analysis formulates the unique peculiarities of LoRa, including its chirp spread-spectrum modulation technique, regulatory limitations on radio duty cycle, and use of ALOHA protocol on top, all of which are not as common in today’s commercial cellular networks. We show that the coverage probability drops exponentially as the number of end-devices grows due to interfering signals using the same spreading sequence. We conclude that this fundamental limiting factor is perhaps more significant toward LoRa scalability than for instance spectrum restrictions. 
Our derivations for co-spreading factor interference found in LoRa networks enables rigorous scalability analysis of such networks.", "title": "" }, { "docid": "93bebbc1112dbfd34fce1b3b9d228f9a", "text": "UNLABELLED\nThere has been no established qualitative system of interpretation for therapy response assessment using PET/CT for head and neck cancers. The objective of this study was to validate the Hopkins interpretation system to assess therapy response and survival outcome in head and neck squamous cell cancer patients (HNSCC).\n\n\nMETHODS\nThe study included 214 biopsy-proven HNSCC patients who underwent a posttherapy PET/CT study, between 5 and 24 wk after completion of treatment. The median follow-up was 27 mo. PET/CT studies were interpreted by 3 nuclear medicine physicians, independently. The studies were scored using a qualitative 5-point scale, for the primary tumor, for the right and left neck, and for overall assessment. Scores 1, 2, and 3 were considered negative for tumors, and scores 4 and 5 were considered positive for tumors. The Cohen κ coefficient (κ) was calculated to measure interreader agreement. Overall survival (OS) and progression-free survival (PFS) were analyzed by Kaplan-Meier plots with a Mantel-Cox log-rank test and Gehan Breslow Wilcoxon test for comparisons.\n\n\nRESULTS\nOf the 214 patients, 175 were men and 39 were women. There was 85.98%, 95.33%, 93.46%, and 87.38% agreement between the readers for overall, left neck, right neck, and primary tumor site response scores, respectively. The corresponding κ coefficients for interreader agreement between readers were, 0.69-0.79, 0.68-0.83, 0.69-0.87, and 0.79-0.86 for overall, left neck, right neck, and primary tumor site response, respectively. The sensitivity, specificity, positive predictive value, negative predictive value, and overall accuracy of the therapy assessment were 68.1%, 92.2%, 71.1%, 91.1%, and 86.9%, respectively. Cox multivariate regression analysis showed human papillomavirus (HPV) status and PET/CT interpretation were the only factors associated with PFS and OS. Among the HPV-positive patients (n = 123), there was a significant difference in PFS (hazard ratio [HR], 0.14; 95% confidence interval, 0.03-0.57; P = 0.0063) and OS (HR, 0.01; 95% confidence interval, 0.00-0.13; P = 0.0006) between the patients who had a score negative for residual tumor versus positive for residual tumor. A similar significant difference was observed in PFS and OS for all patients. There was also a significant difference in the PFS of patients with PET-avid residual disease in one site versus multiple sites in the neck (HR, 0.23; log-rank P = 0.004).\n\n\nCONCLUSION\nThe Hopkins 5-point qualitative therapy response interpretation criteria for head and neck PET/CT has substantial interreader agreement and excellent negative predictive value and predicts OS and PFS in patients with HPV-positive HNSCC.", "title": "" }, { "docid": "f82eb2d4cc45577f08c7e867bf012816", "text": "OBJECTIVE\nThe purpose of this study was to compare the retrieval characteristics of the Option Elite (Argon Medical, Plano, Tex) and Denali (Bard, Tempe, Ariz) retrievable inferior vena cava filters (IVCFs), two filters that share a similar conical design.\n\n\nMETHODS\nA single-center, retrospective study reviewed all Option and Denali IVCF removals during a 36-month period. 
Attempted retrievals were classified as advanced if the routine \"snare and sheath\" technique was initially unsuccessful despite multiple attempts or an alternative endovascular maneuver or access site was used. Patient and filter characteristics were documented.\n\n\nRESULTS\nIn our study, 63 Option and 45 Denali IVCFs were retrieved, with an average dwell time of 128.73 and 99.3 days, respectively. Significantly higher median fluoroscopy times were experienced in retrieving the Option filter compared with the Denali filter (12.18 vs 6.85 minutes; P = .046). Use of adjunctive techniques was also higher in comparing the Option filter with the Denali filter (19.0% vs 8.7%; P = .079). No significant difference was noted between these groups in regard to gender, age, or history of malignant disease.\n\n\nCONCLUSIONS\nOption IVCF retrieval procedures required significantly longer retrieval fluoroscopy time compared with Denali IVCFs. Although procedure time was not analyzed in this study, as a surrogate, the increased fluoroscopy time may also have an impact on procedural direct costs and throughput.", "title": "" }, { "docid": "eb4f7427eb73ac0a0486e8ecb2172b52", "text": "In this work we propose the use of a modified version of the correlation coefficient as a performance criterion for the image alignment problem. The proposed modification has the desirable characteristic of being invariant with respect to photometric distortions. Since the resulting similarity measure is a nonlinear function of the warp parameters, we develop two iterative schemes for its maximization, one based on the forward additive approach and the second on the inverse compositional method. As it is customary in iterative optimization, in each iteration the nonlinear objective function is approximated by an alternative expression for which the corresponding optimization is simple. In our case we propose an efficient approximation that leads to a closed form solution (per iteration) which is of low computational complexity, the latter property being particularly strong in our inverse version. The proposed schemes are tested against the forward additive Lucas-Kanade and the simultaneous inverse compositional algorithm through simulations. Under noisy conditions and photometric distortions our forward version achieves more accurate alignments and exhibits faster convergence whereas our inverse version has similar performance as the simultaneous inverse compositional algorithm but at a lower computational complexity.", "title": "" }, { "docid": "2dda75184e2c9c5507c75f84443fff08", "text": "Text classification can help users to effectively handle and exploit useful information hidden in large-scale documents. However, the sparsity of data and the semantic sensitivity to context often hinder the classification performance of short texts. In order to overcome the weakness, we propose a unified framework to expand short texts based on word embedding clustering and convolutional neural network (CNN). Empirically, the semantically related words are usually close to each other in embedding spaces. Thus, we first discover semantic cliques via fast clustering. Then, by using additive composition over word embeddings from context with variable window width, the representations of multi-scale semantic units1 in short texts are computed. In embedding spaces, the restricted nearest word embeddings (NWEs)2 of the semantic units are chosen to constitute expanded matrices, where the semantic cliques are used as supervision information. 
Finally, for a short text, the projected matrix 3 and expanded matrices are combined and fed into CNN in parallel. Experimental results on two open benchmarks validate the effectiveness of the proposed method.", "title": "" }, { "docid": "8de5b77f3cb4f1c20ff6cc11b323ba9c", "text": "The Internet of Things (IoT) paradigm refers to the network of physical objects or \"things\" embedded with electronics, software, sensors, and connectivity to enable objects to exchange data with servers, centralized systems, and/or other connected devices based on a variety of communication infrastructures. IoT makes it possible to sense and control objects creating opportunities for more direct integration between the physical world and computer-based systems. IoT will usher automation in a large number of application domains, ranging from manufacturing and energy management (e.g. SmartGrid), to healthcare management and urban life (e.g. SmartCity). However, because of its finegrained, continuous and pervasive data acquisition and control capabilities, IoT raises concerns about the security and privacy of data. Deploying existing data security solutions to IoT is not straightforward because of device heterogeneity, highly dynamic and possibly unprotected environments, and large scale. In this talk, after outlining key challenges in data security and privacy, we present initial approaches to securing IoT data, including efficient and scalable encryption protocols, software protection techniques for small devices, and fine-grained data packet loss analysis for sensor networks.", "title": "" }, { "docid": "07e54849ceae5e425b106619e760e522", "text": "In this paper, we propose a novel approach to interpret a well-trained classification model through systematically investigating effects of its hidden units on prediction making. We search for the core hidden units responsible for predicting inputs as the class of interest under the generative Bayesian inference framework. We model such a process of unit selection as an Indian Buffet Process, and derive a simplified objective function via the MAP asymptotic technique. The induced binary optimization problem is efficiently solved with a continuous relaxation method by attaching a Switch Gate layer to the hidden layers of interest. The resulted interpreter model is thus end-to-end optimized via standard gradient back-propagation. Experiments are conducted with two popular deep convolutional classifiers, respectively well-trained on the MNIST dataset and the CIFAR10 dataset. The results demonstrate that the proposed interpreter successfully finds the core hidden units most responsible for prediction making. The modified model, only with the selected units activated, can hold correct predictions at a high rate. Besides, this interpreter model is also able to extract the most informative pixels in the images by connecting a Switch Gate layer to the input layer.", "title": "" }, { "docid": "7cf2c2ce9edff28880bc399e642cee44", "text": "This paper provides new results and insights for tracking an extended target object modeled with an Elliptic Random Hypersurface Model (RHM). An Elliptic RHM specifies the relative squared Mahalanobis distance of a measurement source to the center of the target object by means of a one-dimensional random scaling factor. It is shown that uniformly distributed measurement sources on an ellipse lead to a uniformly distributed squared scaling factor. 
Furthermore, a Bayesian inference mechanisms tailored to elliptic shapes is introduced, which is also suitable for scenarios with high measurement noise. Closed-form expressions for the measurement update in case of Gaussian and uniformly distributed squared scaling factors are derived.", "title": "" }, { "docid": "c19f986d747f4d6a3448607f76d961ab", "text": "We propose Stochastic Neural Architecture Search (SNAS), an economical endto-end solution to Neural Architecture Search (NAS) that trains neural operation parameters and architecture distribution parameters in same round of backpropagation, while maintaining the completeness and differentiability of the NAS pipeline. In this work, NAS is reformulated as an optimization problem on parameters of a joint distribution for the search space in a cell. To leverage the gradient information in generic differentiable loss for architecture search, a novel search gradient is proposed. We prove that this search gradient optimizes the same objective as reinforcement-learning-based NAS, but assigns credits to structural decisions more efficiently. This credit assignment is further augmented with locally decomposable reward to enforce a resource-efficient constraint. In experiments on CIFAR-10, SNAS takes fewer epochs to find a cell architecture with state-of-theart accuracy than non-differentiable evolution-based and reinforcement-learningbased NAS, which is also transferable to ImageNet. It is also shown that child networks of SNAS can maintain the validation accuracy in searching, with which attention-based NAS requires parameter retraining to compete, exhibiting potentials to stride towards efficient NAS on big datasets.", "title": "" }, { "docid": "5fd66116021e4d86d3937e7a5b595975", "text": "The notion of disentangled autoencoders was proposed as an extension to the variational autoencoder by introducing a disentanglement parameter β, controlling the learning pressure put on the possible underlying latent representations. For certain values of β this kind of autoencoders is capable of encoding independent input generative factors in separate elements of the code, leading to a more interpretable and predictable model behaviour. In this paper we quantify the effects of the parameter β on the model performance and disentanglement. After training multiple models with the same value of β, we establish the existence of consistent variance in one of the disentanglement measures, proposed in literature. The negative consequences of the disentanglement to the autoencoder’s discriminative ability are also asserted while varying the amount of examples available during training.", "title": "" }, { "docid": "8a08bb5a952589615c9054d4fc0e8c1f", "text": "The classical plain-text representation of source code is c onvenient for programmers but requires parsing to uncover t he deep structure of the program. While sophisticated software too ls parse source code to gain access to the program’s structur e, many lightweight programming aids such as grep rely instead on only the lexical structure of source code. I d escribe a new XML application that provides an alternative representation o f Java source code. This XML-based representation, called J avaML, is more natural for tools and permits easy specification of nume rous software-engineering analyses by leveraging the abun dance of XML tools and techniques. 
A robust converter built with the Jikes Java compiler framework translates from the classical Java source code representation to JavaML, and an XSLT style sheet converts from JavaML back into the classical textual form.", "title": "" }, { "docid": "9eacc5f0724ff8fe2152930980dded4b", "text": "A computer-controlled adjustable nanosecond pulse generator based on high-voltage MOSFET is designed in this paper, which offers stable performance and a miniaturized profile of 32×30×7 cm3. The experimental results show that the pulser can generate electrical pulses with a Gaussian rising time of 20 nanoseconds, a section-adjustable index falling time of 40–200 nanoseconds, a continuously adjustable repetition frequency of 0–5 kHz, and a quasi-continuously adjustable amplitude of 0–1 kV at a 50 Ω load. And the pulser could meet the requirements.", "title": "" } ]
scidocsrr
ee708f1e329ba7b807f3de3d89be05db
Energy Harvesting Electronics for Vibratory Devices in Self-Powered Sensors
[ { "docid": "6126a101cf55448f0c9ac4dbf98bc690", "text": "This paper studies the energy conversion efficiency for a rectified piezoelectric power harvester. An analytical model is proposed, and an expression of efficiency is derived under steady-state operation. In addition, the relationship among the conversion efficiency, electrically induced damping and ac–dc power output is established explicitly. It is shown that the optimization criteria are different depending on the relative strength of the coupling. For the weak electromechanical coupling system, the optimal power transfer is attained when the efficiency and induced damping achieve their maximum values. This result is consistent with that observed in the recent literature. However, a new finding shows that they are not simultaneously maximized in the strongly coupled electromechanical system.", "title": "" } ]
[ { "docid": "8a59e2b140eaf91a4a5fd8c109682543", "text": "A search-based procedural content generation (SBPCG) algorithm for strategy game maps is proposed. Two representations for strategy game maps are devised, along with a number of objectives relating to predicted player experience. A multiobjective evolutionary algorithm is used for searching the space of maps for candidates that satisfy pairs of these objectives. As the objectives are inherently partially conflicting, the algorithm generates Pareto fronts showing how these objectives can be balanced. Such fronts are argued to be a valuable tool for designers looking to balance various design needs. Choosing appropriate points (manually or automatically) on the Pareto fronts, maps can be found that exhibit good map design according to specified criteria, and could either be used directly in e.g. an RTS game or form the basis for further human design.", "title": "" }, { "docid": "ada35607fa56214e5df8928008735353", "text": "Osseous free flaps have become the preferred method for reconstructing segmental mandibular defects. Of 457 head and neck free flaps, 150 osseous mandible reconstructions were performed over a 10-year period. This experience was retrospectively reviewed to establish an approach to osseous free flap mandible reconstruction. There were 94 male and 56 female patients (mean age, 50 years; range 3 to 79 years); 43 percent had hemimandibular defects, and the rest had central, lateral, or a combination defect. Donor sites included the fibula (90 percent), radius (4 percent), scapula (4 percent), and ilium (2 percent). Rigid fixation (up to five osteotomy sites) was used in 98 percent of patients. Aesthetic and functional results were evaluated a minimum of 6 months postoperatively. The free flap success rate was 100 percent, and bony union was achieved in 97 percent of the osteotomy sites. Osseointegrated dental implants were placed in 20 patients. A return to an unrestricted diet was achieved in 45 percent of patients; 45 percent returned to a soft diet, and 5 percent were on a liquid diet. Five percent of patients required enteral feeding to maintain weight. Speech was assessed as normal (36 percent), near normal (27 percent), intelligible (28 percent), or unintelligible (9 percent). Aesthetic outcome was judged as excellent (32 percent), good (27 percent), fair (27 percent), or poor (14 percent). This study demonstrates a very high success rate, with good-to-excellent functional and aesthetic results using osseous free flaps for primary mandible reconstruction. The fibula donor site should be the first choice for most cases, particularly those with anterior or large bony defects requiring multiple osteotomies. Use of alternative donor sites (i.e., radius and scapula) is best reserved for cases with large soft-tissue and minimal bone requirements. The ilium is recommended only when other options are unavailable. Thoughtful flap selection and design should supplant the need for multiple, simultaneous free flaps and vein grafting in most cases.", "title": "" }, { "docid": "6a85677755a82b147cb0874ae8299458", "text": "Data mining involves the process of recovering related, significant and credential information from a large collection of aggregated data. A major area of current research in data mining is the field of clinical investigations that involve disease diagnosis, prognosis and drug therapy. The objective of this paper is to identify an efficient classifier for prognostic breast cancer data. 
This research work involves designing a data mining framework that incorporates the task of learning patterns and rules that will facilitate the formulation of decisions in new cases. The machine learning techniques employed to train the proposed system are based on feature relevance analysis and classification algorithms. Wisconsin Prognostic Breast Cancer (WPBC) data from the UCI machine learning repository is utilized by means of data mining techniques to completely train the system on 198 individual cases, each comprising of 33 predictor values. This paper highlights the performance of feature reduction and classification algorithms on the training dataset. We evaluate the number of attributes for split in the Random tree algorithm and the confidence level and minimum size of the leaves in the C4.5 algorithm to produce 100 percent classification accuracy. Our results demonstrate that Random Tree and Quinlan’s C4.5 classification algorithm produce 100 percent accuracy in the training and test phase of classification with proper evaluation of algorithmic parameters.", "title": "" }, { "docid": "6976614013c1aa550b5e506b1d1203e7", "text": "Here we present an overview of various techniques performed concomitantly during penile prosthesis surgery to enhance penile length and girth. We report on the technique of ventral phalloplasty and its outcomes along with augmentation corporoplasty, suprapubic lipectomy, suspensory ligament release, and girth enhancement procedures. For the serious implanter, outcomes can be improved by combining the use of techniques for each scar incision. These adjuvant procedures are a key addition in the armamentarium for the serious implant surgeon.", "title": "" }, { "docid": "5db5bed638cd8c5c629f9bebef556730", "text": "The health benefits of garlic likely arise from a wide variety of components, possibly working synergistically. The complex chemistry of garlic makes it plausible that variations in processing can yield quite different preparations. Highly unstable thiosulfinates, such as allicin, disappear during processing and are quickly transformed into a variety of organosulfur components. The efficacy and safety of these preparations in preparing dietary supplements based on garlic are also contingent on the processing methods employed. Although there are many garlic supplements commercially available, they fall into one of four categories, i.e., dehydrated garlic powder, garlic oil, garlic oil macerate and aged garlic extract (AGE). Garlic and garlic supplements are consumed in many cultures for their hypolipidemic, antiplatelet and procirculatory effects. In addition to these proclaimed beneficial effects, some garlic preparations also appear to possess hepatoprotective, immune-enhancing, anticancer and chemopreventive activities. Some preparations appear to be antioxidative, whereas others may stimulate oxidation. These additional biological effects attributed to AGE may be due to compounds, such as S-allylcysteine, S-allylmercaptocysteine, N(alpha)-fructosyl arginine and others, formed during the extraction process. Although not all of the active ingredients are known, ample research suggests that several bioavailable components likely contribute to the observed beneficial effects of garlic.", "title": "" }, { "docid": "707a31c60288fc2873bb37544bb83edf", "text": "The game of Go has a long history in East Asian countries, but the field of Computer Go has yet to catch up to humans until the past couple of years. 
While the rules of Go are simple, the strategy and combinatorics of the game are immensely complex. Even within the past couple of years, new programs that rely on neural networks to evaluate board positions still explore many orders of magnitude more board positions per second than a professional can. We attempt to mimic human intuition in the game by creating a convolutional neural policy network which, without any sort of tree search, should play the game at or above the level of most humans. We introduce three structures and training methods that aim to create a strong Go player: non-rectangular convolutions, which will better learn the shapes on the board, supervised learning, training on a data set of 53,000 professional games, and reinforcement learning, training on games played between different versions of the network. Our network has already surpassed the skill level of intermediate amateurs simply using supervised learning. Further training and implementation of non-rectangular convolutions and reinforcement learning will likely increase this skill level much further.", "title": "" }, { "docid": "c2bd875199c6da6ce0f7c46349c7c937", "text": "This chapter presents a survey of contemporary NLP research on Multiword Expressions (MWEs). MWEs pose a huge problem to precise language processing due to their idiosyncratic nature and diversity of their semantic, lexical, and syntactical properties. The chapter begins by considering MWEs definitions, describes some MWEs classes, indicates problems MWEs generate in language applications and their possible solutions, presents methods of MWE encoding in dictionaries and their automatic detection in corpora. The chapter goes into more detail on a particular MWE class called Verb-Noun Constructions (VNCs). Due to their frequency in corpus and unique characteristics, VNCs present a research problem in their own right. Having outlined several approaches to VNC representation in lexicons, the chapter explains the formalism of Lexical Function as a possible VNC representation. Such representation may serve as a tool for VNCs automatic detection in a corpus. The latter is illustrated on Spanish material applying some supervised learning methods commonly used for NLP tasks.", "title": "" }, { "docid": "4b878ffe2fd7b1f87e2f06321e5f03fa", "text": "Physical unclonable function (PUF) leverages the immensely complex and irreproducible nature of physical structures to achieve device authentication and secret information storage. To enhance the security and robustness of conventional PUFs, reconfigurable physical unclonable functions (RPUFs) with dynamically refreshable challenge-response pairs (CRPs) have emerged recently. In this paper, we propose two novel physically reconfigurable PUF (P-RPUF) schemes that exploit the process parameter variability and programming sensitivity of phase change memory (PCM) for CRP reconfiguration and evaluation. The first proposed PCM-based P-RPUF scheme extracts its CRPs from the measurable differences of the PCM cell resistances programmed by randomly varying pulses. An imprecisely controlled regulator is used to protect the privacy of the CRP in case the configuration state of the RPUF is divulged. The second proposed PCM-based RPUF scheme produces the random response by counting the number of programming pulses required to make the cell resistance converge to a predetermined target value. 
The merging of CRP reconfiguration and evaluation overcomes the inherent vulnerability of P-RPUF devices to malicious prediction attacks by limiting the number of accessible CRPs between two consecutive reconfigurations to only one. Both schemes were experimentally evaluated on 180-nm PCM chips. The obtained results demonstrated their quality for refreshable key generation when appropriate fuzzy extractor algorithms are incorporated.", "title": "" }, { "docid": "41e71a03c2abdd0fec78e8273709efa7", "text": "Logical correction of aging contour changes of the face is based on understanding its structure and the processes involved in the aging appearance. Aging changes are seen at all tissue levels between the skin and bone although the relative contribution of each component to the overall change of facial appearance has yet to be satisfactorily determined. Significantly, the facial skeleton changes profoundly with aging as a consequence of significant resorption of the bones of dental origin in particular. The resultant loss of skeletal projection gives the visual impression of descent while the reduced ligamentous support leads to laxity of the overlying soft tissues. Understanding the specific changes of the face with aging is fundamental to achieving optimum correction and safe use of injectables for facial rejuvenation.", "title": "" }, { "docid": "fe79c1c71112b3b40e047db6030aaff9", "text": "We are at a key juncture in history where biodiversity loss is occurring daily and accelerating in the face of population growth, climate change, and rampant development. Simultaneously, we are just beginning to appreciate the wealth of human health benefits that stem from experiencing nature and biodiversity. Here we assessed the state of knowledge on relationships between human health and nature and biodiversity, and prepared a comprehensive listing of reported health effects. We found strong evidence linking biodiversity with production of ecosystem services and between nature exposure and human health, but many of these studies were limited in rigor and often only correlative. Much less information is available to link biodiversity and health. However, some robust studies indicate that exposure to microbial biodiversity can improve health, specifically in reducing certain allergic and respiratory diseases. Overall, much more research is needed on mechanisms of causation. Also needed are a reenvisioning of land-use planning that places human well-being at the center and a new coalition of ecologists, health and social scientists and planners to conduct research and develop policies that promote human interaction with nature and biodiversity. Improvements in these areas should enhance human health and ecosystem, community, as well as human resilience. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).", "title": "" }, { "docid": "46cc4ab93b7b6dd28b81846b891ceb3f", "text": "This paper covers design, implementation and evaluation of a system that may be used to predict future stock prices basing on analysis of data from social media services. The authors took advantage of large datasets available from Twitter micro blogging platform and widely available stock market records. Data was collected during three months and processed for further analysis. Machine learning was employed to conduct sentiment classification of data coming from social networks in order to estimate future stock prices. 
Calculations were performed in a distributed environment according to the MapReduce programming model. The evaluation and discussion of the prediction results for different time intervals and input datasets, which proved the efficiency of the chosen approach, are presented here.", "title": "" }, { "docid": "91283606a1737f3076ba6e00a6754fd1", "text": "OBJECTIVE\nTo review the quantitative instruments available to health service researchers who want to measure culture and cultural change.\n\n\nDATA SOURCES\nA literature search was conducted using Medline, Cinahl, Helmis, Psychlit, Dhdata, and the database of the King's Fund in London for articles published up to June 2001, using the phrase \"organizational culture.\" In addition, all citations and the gray literature were reviewed and advice was sought from experts in the field to identify instruments not found on the electronic databases. The search focused on instruments used to quantify culture with a track record, or potential for use, in health care settings.\n\n\nDATA EXTRACTION\nFor each instrument we examined the cultural dimensions addressed, the number of items for each questionnaire, the measurement scale adopted, examples of studies that had used the tool, the scientific properties of the instrument, and its strengths and limitations.\n\n\nPRINCIPAL FINDINGS\nThirteen instruments were found that satisfied our inclusion criteria, of which nine have a track record in studies involving health care organizations. The instruments varied considerably in terms of their grounding in theory, format, length, scope, and scientific properties.\n\n\nCONCLUSIONS\nA range of instruments with differing characteristics are available to researchers interested in organizational culture, all of which have limitations in terms of their scope, ease of use, or scientific properties. The choice of instrument should be determined by how organizational culture is conceptualized by the research team, the purpose of the investigation, intended use of the results, and availability of resources.", "title": "" }, { "docid": "542c115a46d263ee347702cf35b6193c", "text": "We obtain universal bounds on the energy of codes and for designs in Hamming spaces. Our bounds hold for a large class of potential functions, allow unified treatment, and can be viewed as a generalization of the Levenshtein bounds for maximal codes.", "title": "" }, { "docid": "8be48759b1ae6b7d65ff61ebc43dfee6", "text": "In this study, we introduce a household object dataset for recognition and manipulation tasks, focusing on commonly available objects in order to facilitate sharing of applications and algorithms. The core information available for each object consists of a 3D surface model annotated with a large set of possible grasp points, pre-computed using a grasp simulator. The dataset is an integral part of a complete Robot Operating System (ROS) architecture for performing pick and place tasks. We present our current applications using this data, and discuss possible extensions and future directions for shared datasets for robot operation in unstructured settings. I. DATASETS FOR ROBOTICS RESEARCH Recent years have seen a growing consensus that one of the keys to robotic applications in unstructured environments lies in collaboration and reusable functionality. 
An immediate result has been the emergence of a number of platforms and frameworks for sharing operational “building blocks,” usually in the form of code modules, with functionality ranging from low-level hardware drivers to complex algorithms such as path or motion planners. By using a set of now well-established guidelines, such as stable documented interfaces and standardized communication protocols, this type of collaboration has accelerated development towards complex applications. However, a similar set of methods for sharing and reusing data has been slower to emerge. In this paper we describe our effort in producing and releasing to the community a complete architecture for performing pick-and-place tasks in unstructured (or semistructured) environments. There are two key components to this architecture: the algorithms themselves, developed using the Robot Operating System (ROS) framework, and the knowledge base that they operate on. In our case, the algorithms provide abilities such as object segmentation and recognition, motion planning with collision avoidance, grasp execution using tactile feedback, etc. The knowledge base, which is the main focus of this study, contains relevant information for object recognition and grasping for a large set of common household objects. Some of the key aspects of combining computational tools with the data that they operate on are: • other researchers will have the option of directly using our dataset over the Internet (in an open, read-only fashion), or downloading and customizing it for their own applications; • defining a stable interface to the dataset component of the release will allow other researchers to provide their own modified and/or extended versions of the data to †Willow Garage Inc., Menlo Park, CA. Email: {matei, bradski, hsiao, pbrook}@willowgarage.com ∗University of Washington, Seattle, WA. the community, knowing that it will be directly usable by anyone running the algorithmic component; • the data and algorithm components can evolve together, like any other components of a large software distribution, with well-defined and documented interfaces, version numbering and control, etc. In particular, our current dataset is available in the form of a relational database, using the SQL standard. This choice provides additional benefits, including optimized relational queries, both for using the data on-line and managing it off-line, and low-level serialization functionality for most major languages. We believe that these features can help foster collaboration as well as provide useful tools for benchmarking as we advance towards increasingly complex behavior in unstructured environments. There have been previous example of datasets released in the research community (as described for example in [3], [7], [13] to name only a few), used either for benchmarking or for data-driven algorithms. However, few of these have been accompanied by the relevant algorithms, or have offered a well-defined interface to be used for extensions. The database component of our architecture was directly inspired by the Columbia Grasp Database (CGDB) [5], [6], released together with processing software integrated with the GraspIt! simulator [9]. The CGDB contains object shape and grasp information for a very large (n = 7, 256) set of general shapes from the Princeton Shape Benchmark [12]. 
The dataset presented here is smaller in scope (n = 180), referring only to actual graspable objects from the real world, and is integrated with a complete manipulation pipeline on the PR2 robot. II. THE OBJECT AND GRASP DATABASE", "title": "" }, { "docid": "1a620e17048fa25cfc54f5c9fb821f39", "text": "The performance of a detector depends much on its training dataset and drops significantly when the detector is applied to a new scene due to the large variations between the source training dataset and the target scene. In order to bridge this appearance gap, we propose a deep model to automatically learn scene-specific features and visual patterns in static video surveillance without any manual labels from the target scene. It jointly learns a scene-specific classifier and the distribution of the target samples. Both tasks share multi-scale feature representations with both discriminative and representative power. We also propose a cluster layer in the deep model that utilizes the scenespecific visual patterns for pedestrian detection. Our specifically designed objective function not only incorporates the confidence scores of target training samples but also automatically weights the importance of source training samples by fitting the marginal distributions of target samples. It significantly improves the detection rates at 1 FPPI by 10% compared with the state-of-the-art domain adaptation methods on MIT Traffic Dataset and CUHK Square Dataset.", "title": "" }, { "docid": "5459dc71fd40a576365f0afced64b6b7", "text": "Cloud computing providers such as Amazon and Google have recently begun offering container-instances, which provide an efficient route to application deployment within a lightweight, isolated and well-defined execution environment. Cloud providers currently offer Container Service Platforms (CSPs), which orchestrate containerised applications. Existing CSP frameworks do not offer any form of intelligent resource scheduling: applications are usually scheduled individually, rather than taking a holistic view of all registered applications and available resources in the cloud. This can result in increased execution times for applications, resource wastage through underutilised container-instances, and a reduction in the number of applications that can be deployed, given the available resources. The research presented in this paper aims to extend existing systems by adding a cloud-based Container Management Service (CMS) framework that offers increased deployment density, scalability and resource efficiency. CMS provides additional functionalities for orchestrating containerised applications by joint optimisation of sets of containerised applications, and resource pool in multiple (geographical distributed) cloud regions. We evaluated CMS on a cloud-based CSP i.e., Amazon EC2 Container Management Service (ECS) and conducted extensive experiments using sets of CPU and Memory intensive containerised applications against the direct deployment strategy of Amazon ECS. The results show that CMS achieves up to 25% higher cluster utilisation, and up to 70% reduction in execution times.", "title": "" }, { "docid": "ca468aa680c29fb00f55e9d851676200", "text": "The class of problems involving the random generation of combinatorial structures from a uniform distribution is considered. Uniform generation problems are, in computational difficulty, intermediate between classical existence and counting problems. 
It is shown that exactly uniform generation of 'efficiently verifiable' combinatorial structures is reducible to approximate counting (and hence, is within the third level of the polynomial hierarchy). Natural combinatorial problems are presented which exhibit complexity gaps between their existence and generation, and between their generation and counting versions. It is further shown that for self-reducible problems, almost uniform generation and randomized approximate counting are inter-reducible, and hence, of similar complexity. CR Categories. F.I.1, F.1.3, G.2.1, G.3", "title": "" }, { "docid": "e7a0a9e31bba0eec8bf598c5e9eefe6b", "text": "Stylizing photos, to give them an antique or artistic look, has become popular in recent years. The available stylization filters, however, are usually created manually by artists, resulting in a narrow set of choices. Moreover, it can be difficult for the user to select a desired filter, since the filters’ names often do not convey their functions. We investigate an approach to photo filtering in which the user provides one or more keywords, and the desired style is defined by the set of images returned by searching the web for those keywords. Our method clusters the returned images, allows the user to select a cluster, then stylizes the user’s photos by transferring vignetting, color, and local contrast from that cluster. This approach vastly expands the range of available styles, and gives each filter a meaningful name by default. We demonstrate that our method is able to robustly transfer a wide range of styles from image collections to users’ photos.", "title": "" }, { "docid": "c249c64b3e41cde156a63e1224ae2091", "text": "The technology of intelligent agents and multi-agent systems seems set to radically alter the way in which complex, distributed, open systems are conceptualized and implemented. The purpose of this paper is to consider the problem of building a multi-agent system as a software engineering enterprise. The article focuses on three issues: (i) how agents might be specified; (ii) how these specifications might be refined or otherwise transformed into efficient implementations; and (iii) how implemented agents and multi-agent systems might subsequently be verified, in order to show that they are correct with respect to their specifications. These issues are discussed with reference to a number of casestudies. The article concludes by setting out some issues and open problems for future", "title": "" }, { "docid": "f534a356d309fc6625fa3baa070e803a", "text": "Neural networks have been successfully applied in applications with a large amount of labeled data. However, the task of rapid generalization on new concepts with small training data while preserving performances on previously learned ones still presents a significant challenge to neural network models. In this work, we introduce a novel meta learning method, Meta Networks (MetaNet), that learns a meta-level knowledge across tasks and shifts its inductive biases via fast parameterization for rapid generalization. When evaluated on Omniglot and Mini-ImageNet benchmarks, our MetaNet models achieve a near human-level performance and outperform the baseline approaches by up to 6% accuracy. We demonstrate several appealing properties of MetaNet relating to generalization and continual learning.", "title": "" } ]
scidocsrr
ebdd1187acfaade03515728ec857b9af
Efficient frontier detection for robot exploration
[ { "docid": "77908ab362e0a26e395bc2d2bf07e0ee", "text": "In this paper we consider the problem of exploring an unknown environment by a team of robots. As in single-robot exploration the goal is to minimize the overall exploration time. The key problem to be solved therefore is to choose appropriate target points for the individual robots so that they simultaneously explore different regions of their environment. We present a probabilistic approach for the coordination of multiple robots which, in contrast to previous approaches, simultaneously takes into account the costs of reaching a target point and the utility of target points. The utility of target points is given by the size of the unexplored area that a robot can cover with its sensors upon reaching a target position. Whenever a target point is assigned to a specific robot, the utility of the unexplored area visible from this target position is reduced for the other robots. This way, a team of multiple robots assigns different target points to the individual robots. The technique has been implemented and tested extensively in real-world experiments and simulation runs. The results given in this paper demonstrate that our coordination technique significantly reduces the exploration time compared to previous approaches. '", "title": "" }, { "docid": "83981d52eb5e58d6c2d611b25c9f6d12", "text": "This tutorial provides an introduction to Simultaneous Localisation and Mapping (SLAM) and the extensive research on SLAM that has been undertaken over the past decade. SLAM is the process by which a mobile robot can build a map of an environment and at the same time use this map to compute it’s own location. The past decade has seen rapid and exciting progress in solving the SLAM problem together with many compelling implementations of SLAM methods. Part I of this tutorial (this paper), describes the probabilistic form of the SLAM problem, essential solution methods and significant implementations. Part II of this tutorial will be concerned with recent advances in computational methods and new formulations of the SLAM problem for large scale and complex environments.", "title": "" } ]
[ { "docid": "97a6a77cfa356636e11e02ffe6fc0121", "text": "© 2019 Muhammad Burhan Hafez et al., published by De Gruyter. This work is licensed under the Creative CommonsAttribution-NonCommercial-NoDerivs4.0License. Paladyn, J. Behav. Robot. 2019; 10:14–29 Research Article Open Access Muhammad Burhan Hafez*, Cornelius Weber, Matthias Kerzel, and Stefan Wermter Deep intrinsically motivated continuous actor-critic for eflcient robotic visuomotor skill learning https://doi.org/10.1515/pjbr-2019-0005 Received June 6, 2018; accepted October 29, 2018 Abstract: In this paper, we present a new intrinsically motivated actor-critic algorithm for learning continuous motor skills directly from raw visual input. Our neural architecture is composed of a critic and an actor network. Both networks receive the hidden representation of a deep convolutional autoencoder which is trained to reconstruct the visual input, while the centre-most hidden representation is also optimized to estimate the state value. Separately, an ensemble of predictive world models generates, based on its learning progress, an intrinsic reward signal which is combined with the extrinsic reward to guide the exploration of the actor-critic learner. Our approach is more data-efficient and inherently more stable than the existing actor-critic methods for continuous control from pixel data. We evaluate our algorithm for the task of learning robotic reaching and grasping skills on a realistic physics simulator and on a humanoid robot. The results show that the control policies learnedwith our approach can achieve better performance than the compared state-of-the-art and baseline algorithms in both dense-reward and challenging sparse-reward settings.", "title": "" }, { "docid": "e86247471d4911cb84aa79911547045b", "text": "Creating rich representations of environments requires integration of multiple sensing modalities with complementary characteristics such as range and imaging sensors. To precisely combine multisensory information, the rigid transformation between different sensor coordinate systems (i.e., extrinsic parameters) must be estimated. The majority of existing extrinsic calibration techniques require one or multiple planar calibration patterns (such as checkerboards) to be observed simultaneously from the range and imaging sensors. The main limitation of these approaches is that they require modifying the scene with artificial targets. In this paper, we present a novel algorithm for extrinsically calibrating a range sensor with respect to an image sensor with no requirement of external artificial targets. The proposed method exploits natural linear features in the scene to precisely determine the rigid transformation between the coordinate frames. First, a set of 3D lines (plane intersection and boundary line segments) are extracted from the point cloud, and a set of 2D line segments are extracted from the image. Correspondences between the 3D and 2D line segments are used as inputs to an optimization problem which requires jointly estimating the relative translation and rotation between the coordinate frames. The proposed method is not limited to any particular types or configurations of sensors. 
To demonstrate robustness, efficiency and generality of the presented algorithm, we include results using various sensor configurations.", "title": "" }, { "docid": "526238c8369bb37048f3165b2ace0d15", "text": "With their exceptional interactive and communicative capabilities, Online Social Networks (OSNs) allow destinations and companies to heighten their brand awareness. Many tourist destinations and hospitality brands are exploring the use of OSNs to form brand awareness and generate positive WOM. The purpose of this research is to propose and empirically test a theory-driven model of brand awareness in OSNs. A survey among 230 OSN users was deployed to test the theoretical model. The data was analyzed using SEM. Study results indicate that building brand awareness in OSNs increases WOM traffic. In order to foster brand awareness in OSN, it is important to create a virtually interactive environment, enabling users to exchange reliable, rich and updated information in a timely manner. Receiving financial and/or psychological rewards and accessing exclusive privileges in OSNs are important factors for users. Both system quality and information quality were found to be important precursors of brand awareness in OSNs. Study results support the importance of social media in online branding strategies. Virtual interactivity, system quality, information content quality, and rewarding activities influence and generate brand awareness, which in return, triggers WOM. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "824b0e8a66699965899169738df7caa9", "text": "Much recent progress in Vision-to-Language (V2L) problems has been achieved through a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This approach does not explicitly represent high-level semantic concepts, but rather seeks to progress directly from image features to text. In this paper we investigate whether this direct approach succeeds due to, or despite, the fact that it avoids the explicit representation of high-level information. We propose a method of incorporating high-level concepts into the successful CNN-RNN approach, and show that it achieves a significant improvement on the state-of-the-art in both image captioning and visual question answering. We also show that the same mechanism can be used to introduce external semantic information and that doing so further improves performance. We achieve the best reported results on both image captioning and VQA on several benchmark datasets, and provide an analysis of the value of explicit high-level concepts in V2L problems.", "title": "" }, { "docid": "51da24a6bdd2b42c68c4465624d2c344", "text": "Hashing based Approximate Nearest Neighbor (ANN) search has attracted much attention due to its fast query time and drastically reduced storage. However, most of the hashing methods either use random projections or extract principal directions from the data to derive hash functions. The resulting embedding suffers from poor discrimination when compact codes are used. In this paper, we propose a novel data-dependent projection learning method such that each hash function is designed to correct the errors made by the previous one sequentially. 
The proposed method easily adapts to both unsupervised and semi-supervised scenarios and shows significant performance gains over the state-ofthe-art methods on two large datasets containing up to 1 million points.", "title": "" }, { "docid": "2c6332afec6a2c728041e0325a27fcbf", "text": "Today’s social networks are plagued by numerous types of malicious profiles which can range from socialbots to sexual predators. We present a novel method for the detection of these malicious profiles by using the social network’s own topological features only. Reliance on these features alone ensures that the proposed method is generic enough to be applied on a range of social networks. The algorithm has been evaluated on several social networks and was found to be effective in detecting various types of malicious profiles. We believe this method is a valuable step in the increasing battle against social network spammers, socialbots, and sexual predictors.", "title": "" }, { "docid": "26af6b4795e1864a63da17231651960c", "text": "In 2020, 146,063 deaths due to pancreatic cancer are estimated to occur in Europe and the United States combined. To identify common susceptibility alleles, we performed the largest pancreatic cancer GWAS to date, including 9040 patients and 12,496 controls of European ancestry from the Pancreatic Cancer Cohort Consortium (PanScan) and the Pancreatic Cancer Case-Control Consortium (PanC4). Here, we find significant evidence of a novel association at rs78417682 (7p12/TNS3, P = 4.35 × 10−8). Replication of 10 promising signals in up to 2737 patients and 4752 controls from the PANcreatic Disease ReseArch (PANDoRA) consortium yields new genome-wide significant loci: rs13303010 at 1p36.33 (NOC2L, P = 8.36 × 10−14), rs2941471 at 8q21.11 (HNF4G, P = 6.60 × 10−10), rs4795218 at 17q12 (HNF1B, P = 1.32 × 10−8), and rs1517037 at 18q21.32 (GRP, P = 3.28 × 10−8). rs78417682 is not statistically significantly associated with pancreatic cancer in PANDoRA. Expression quantitative trait locus analysis in three independent pancreatic data sets provides molecular support of NOC2L as a pancreatic cancer susceptibility gene. Genetic variants associated with susceptibility to pancreatic cancer have been identified using genome wide association studies (GWAS). Here, the authors combine data from over 9000 patients and perform a meta-analysis to identify five novel loci linked to pancreatic cancer.", "title": "" }, { "docid": "127406000c2ede6517513bfa21747431", "text": "These are exciting times for cancer immunotherapy. After many years of disappointing results, the tide has finally changed and immunotherapy has become a clinically validated treatment for many cancers. Immunotherapeutic strategies include cancer vaccines, oncolytic viruses, adoptive transfer of ex vivo activated T and natural killer cells, and administration of antibodies or recombinant proteins that either costimulate cells or block the so-called immune checkpoint pathways. The recent success of several immunotherapeutic regimes, such as monoclonal antibody blocking of cytotoxic T lymphocyte-associated protein 4 (CTLA-4) and programmed cell death protein 1 (PD1), has boosted the development of this treatment modality, with the consequence that new therapeutic targets and schemes which combine various immunological agents are now being described at a breathtaking pace. 
In this review, we outline some of the main strategies in cancer immunotherapy (cancer vaccines, adoptive cellular immunotherapy, immune checkpoint blockade, and oncolytic viruses) and discuss the progress in the synergistic design of immune-targeting combination therapies.", "title": "" }, { "docid": "86d8a61771cd14a825b6fc652f77d1d6", "text": "The widespread of adult content on online social networks (e.g., Twitter) is becoming an emerging yet critical problem. An automatic method to identify accounts spreading sexually explicit content (i.e., adult account) is of significant values in protecting children and improving user experiences. Traditional adult content detection techniques are ill-suited for detecting adult accounts on Twitter due to the diversity and dynamics in Twitter content. In this paper, we formulate the adult account detection as a graph based classification problem and demonstrate our detection method on Twitter by using social links between Twitter accounts and entities in tweets. As adult Twitter accounts are mostly connected with normal accounts and post many normal entities, which makes the graph full of noisy links, existing graph based classification techniques cannot work well on such a graph. To address this problem, we propose an iterative social based classifier (ISC), a novel graph based classification technique resistant to the noisy links. Evaluations using large-scale real-world Twitter data show that, by labeling a small number of popular Twitter accounts, ISC can achieve satisfactory performance in adult account detection, significantly outperforming existing techniques.", "title": "" }, { "docid": "86d725fa86098d90e5e252c6f0aaab3c", "text": "This paper illustrates the manner in which UML can be used to study mappings to different types of database systems. After introducing UML through a comparison to the EER model, UML diagrams are used to teach different approaches for mapping conceptual designs to the relational model. As we cover object-oriented and object-relational database systems, different features of UML are used over the same enterprise example to help students understand mapping alternatives for each model. Students are required to compare and contrast the mappings in each model as part of the learning process. For object-oriented and object-relational database systems, we address mappings to the ODMG and SQL99 standards in addition to specific commercial implementations.", "title": "" }, { "docid": "888c4bc9f1ca4402f1f56bde657c5fbe", "text": "This paper presents a comprehensive survey of existing authentication and privacy-preserving schemes for 4G and 5G cellular networks. We start by providing an overview of existing surveys that deal with 4G and 5G communications, applications, standardization, and security. Then, we give a classification of threat models in 4G and 5G cellular networks in four categories, including, attacks against privacy, attacks against integrity, attacks against availability, and attacks against authentication. We also provide a classification of countermeasures into three types of categories, including, cryptography methods, humans factors, and intrusion detection methods. The countermeasures and informal and formal security analysis techniques used by the authentication and privacy preserving schemes are summarized in form of tables. 
Based on the categorization of the authentication and privacy models, we classify these schemes into seven types, including handover authentication with privacy, mutual authentication with privacy, RFID authentication with privacy, deniable authentication with privacy, authentication with mutual anonymity, authentication and key agreement with privacy, and three-factor authentication with privacy. In addition, we provide a taxonomy and comparison of authentication and privacy-preserving schemes for 4G and 5G cellular networks in the form of tables. Based on the current survey, several recommendations for further research are discussed at the end of this paper.", "title": "" }, { "docid": "6ff51eea5a590996ed0219a4991d32f2", "text": "The number R(4, 3, 3) is often presented as the unknown Ramsey number with the best chances of being found “soon”. Yet, its precise value has remained unknown for almost 50 years. This paper presents a methodology based on abstraction and symmetry breaking that applies to solve hard graph edge-coloring problems. The utility of this methodology is demonstrated by using it to compute the value R(4, 3, 3) = 30. Along the way it is required to first compute the previously unknown set R(3, 3, 3; 13) consisting of 78,892 Ramsey colorings.", "title": "" }, { "docid": "2d7ff73a3fb435bd11633f650b23172e", "text": "This study determined the effect of Tetracarpidium conophorum (black walnut) leaf extract on the male reproductive organs of albino rats. The effects of the leaf extracts were determined on the epididymal sperm concentration, testicular histology, and on testosterone concentration in the rat serum by a microplate enzyme immunoassay (testosterone assay). A total of sixteen (16) male albino Wistar rats were divided into four (1, 2, 3 and 4) groups of four rats each. Group 1 served as the control and was fed with normal diet only, while groups 2, 3 and 4 were fed with 200, 400 and 600 mg/kg body weight (BW) of the extract for a period of two weeks. The epididymal sperm concentration was not significantly affected (p>0.05) across the groups. The level of testosterone for the treatment groups 2 and 4 showed no significant difference (p>0.05) compared to the control while group 4 showed a significant increase compared to that of the control (p<0.05). Pathologic changes were observed in testicular histology across the treatment groups. Robust seminiferous tubular lumen containing sperm cells and increased production of Leydig cells and Sertoli cells were observed across different treatment groups compared to that of the control.", "title": "" }, { "docid": "30e0918ec670bdab298f4f5bb59c3612", "text": "Consider a single hard disk drive (HDD) composed of rotating platters and a single magnetic head. We propose a simple internal coding framework for HDDs that uses coding across drive blocks to reduce average block seek times. In particular, instead of the HDD controller seeking individual blocks, the drive performs coded-seeking: It seeks the closest subset of coded blocks, where a coded block contains partial information from multiple uncoded blocks. Coded-seeking is a tool that relaxes the scheduling of a full traveling salesman problem (TSP) on an HDD into a k-TSP. 
This may provide opportunities for new scheduling algorithms and to reduce average read times.", "title": "" }, { "docid": "86052e2fc8f89b91f274a607531f536e", "text": "Existing approaches to analyzing the asymptotics of graph Laplacians typically assume a well-behaved kernel function with smoothness assumptions. We remove the smoothness assumption and generalize the analysis of graph Laplacians to include previously unstudied graphs including kNN graphs. We also introduce a kernel-free framework to analyze graph constructions with shrinking neighborhoods in general and apply it to analyze locally linear embedding (LLE). We also describe how, for a given limit operator, desirable properties such as a convergent spectrum and sparseness can be achieved by choosing the appropriate graph construction.", "title": "" }, { "docid": "a024f33090621555f2d5e3aadeac0265", "text": "Recent efforts to understand the mechanisms underlying human cooperation have focused on the notion of trust, with research illustrating that both initial impressions and previous interactions impact the amount of trust people place in a partner. Less is known, however, about how these two types of information interact in iterated exchanges. The present study examined how implicit initial trustworthiness information interacts with experienced trustworthiness in a repeated Trust Game. Consistent with our hypotheses, these two factors reliably influence behavior both independently and synergistically, in terms of how much money players were willing to entrust to their partner and also in their post-game subjective ratings of trustworthiness. To further understand this interaction, we used Reinforcement Learning models to test several distinct processing hypotheses. These results suggest that trustworthiness is a belief about probability of reciprocation based initially on implicit judgments, and then dynamically updated based on experiences. This study provides a novel quantitative framework to conceptualize the notion of trustworthiness.", "title": "" }, { "docid": "096b2ffac795053e046c25f1e8697fcf", "text": "Background\nThe benefit of computer-assisted planning in orthognathic surgery (OGS) has been extensively documented over the last decade. This study aimed to evaluate the accuracy of three-dimensional (3D) virtual planning in surgery-first OGS.\n\n\nMethods\nFifteen patients with skeletal class III malocclusion who underwent bimaxillary OGS with surgery-first approach were included. A composite skull model was reconstructed using data from cone-beam computed tomography and stereolithography from a scanned dental cast. Surgical procedures were simulated using Simplant O&O software, and the virtual plan was transferred to the operation room using 3D-printed splints. Differences of the 3D measurements between the virtual plan and postoperative results were evaluated, and the accuracy was reported using root mean square deviation (RMSD) and the Bland-Altman method.\n\n\nResults\nThe virtual planning was successfully transferred to surgery. The overall mean linear difference was 0.88 mm (0.79 mm for the maxilla and 1 mm for the mandible), and the overall mean angular difference was 1.16°. 
The RMSD ranged from 0.86 to 1.46 mm and 1.27° to 1.45°, within the acceptable clinical criteria.\n\n\nConclusion\nIn this study, virtual surgical planning and 3D-printed surgical splints facilitated the diagnosis and treatment planning, and offered an accurate outcome in surgery-first OGS.", "title": "" }, { "docid": "4df52d891c63975a1b9d4cd6c74571db", "text": "DDoS attacks have been a persistent threat to network availability for many years. Most of the existing mitigation techniques attempt to protect against DDoS by filtering out attack traffic. However, as critical network resources are usually static, adversaries are able to bypass filtering by sending stealthy low traffic from large number of bots that mimic benign traffic behavior. Sophisticated stealthy attacks on critical links can cause a devastating effect such as partitioning domains and networks. In this paper, we propose to defend against DDoS attacks by proactively changing the footprint of critical resources in an unpredictable fashion to invalidate an adversary's knowledge and plan of attack against critical network resources. Our present approach employs virtual networks (VNs) to dynamically reallocate network resources using VN placement and offers constant VN migration to new resources. Our approach has two components: (1) a correct-by-construction VN migration planning that significantly increases the uncertainty about critical links of multiple VNs while preserving the VN placement properties, and (2) an efficient VN migration mechanism that identifies the appropriate configuration sequence to enable node migration while maintaining the network integrity (e.g., avoiding session disconnection). We formulate and implement this framework using SMT logic. We also demonstrate the effectiveness of our implemented framework on both PlanetLab and Mininet-based experimentations.", "title": "" }, { "docid": "686abc74c0a34c90755d20c0ffc63eb2", "text": "Bayesian estimation for 2 groups provides complete distributions of credible values for the effect size, group means and their difference, standard deviations and their difference, and the normality of the data. The method handles outliers. The decision rule can accept the null value (unlike traditional t tests) when certainty in the estimate is high (unlike Bayesian model comparison using Bayes factors). The method also yields precise estimates of statistical power for various research goals. The software and programs are free and run on Macintosh, Windows, and Linux platforms.", "title": "" }, { "docid": "c5e401fe1b2a65677b93ae3e8aa60e18", "text": "In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. 
And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.", "title": "" } ]
scidocsrr
14beda2a2c57c76fabd3aa8e14d47193
Loving-kindness meditation for posttraumatic stress disorder: a pilot study.
[ { "docid": "ed0736d1f8c35ec8b0c2f5bb9adfb7f9", "text": "Neff's (2003a, 2003b) notion of self-compassion emphasizes kindness towards one's self, a feeling of connectedness with others, and mindful awareness of distressing experiences. Because exposure to trauma and subsequent posttraumatic stress symptoms (PSS) may be associated with self-criticism and avoidance of internal experiences, the authors examined the relationship between self-compassion and PSS. Out of a sample of 210 university students, 100 endorsed experiencing a Criterion A trauma. Avoidance symptoms significantly correlated with self-compassion, but reexperiencing and hyperarousal did not. Individuals high in self-compassion may engage in less avoidance strategies following trauma exposure, allowing for a natural exposure process.", "title": "" } ]
[ { "docid": "96e56dcf3d38c8282b5fc5c8ae747a66", "text": "The solid-state transformer (SST) was conceived as a replacement for the conventional power transformer, with both lower volume and weight. The smart transformer (ST) is an SST that provides ancillary services to the distribution and transmission grids to optimize their performance. Hence, the focus shifts from hardware advantages to functionalities. One of the most desired functionalities is the dc connectivity to enable a hybrid distribution system. For this reason, the ST architecture shall be composed of at least two power stages. The standard design procedure for this kind of system is to design each power stage for the maximum load. However, this design approach might limit additional services, like the reactive power compensation on the medium voltage (MV) side, and it does not consider the load regulation capability of the ST on the low voltage (LV) side. If the SST is tailored to the services that it shall provide, different stages will have different designs, so that the ST is no longer a mere application of the SST but an entirely new subject.", "title": "" }, { "docid": "e5f084c72109f869c54f402237f84907", "text": "As former Fermatist, the author tried many times to prove Fermat’s Last Theorem in an elementary way. Just few insights of the proposed schemes partially passed the peer-reviewing and they motivated the subsequent fruitful collaboration with Prof. Mario De Paz. Among the author’s failures, there is an unpublished proof emblematic of the FLT’s charming power for the suggestive circumstances it was formulated. As sometimes happens with similar erroneous attempts, containing out-of-context hints, it provides a germinal approach to power sums yet to be refined.", "title": "" }, { "docid": "74290ff01b32423087ce0025625dc445", "text": "niques is now the world champion computer program in the game of Contract Bridge. As reported in The New York Times and The Washington Post, this program—a new version of Great Game Products’ BRIDGE BARON program—won the Baron Barclay World Bridge Computer Challenge, an international competition hosted in July 1997 by the American Contract Bridge League. It is well known that the game tree search techniques used in computer programs for games such as Chess and Checkers work differently from how humans think about such games. In contrast, our new version of the BRIDGE BARON emulates the way in which a human might plan declarer play in Bridge by using an adaptation of hierarchical task network planning. This article gives an overview of the planning techniques that we have incorporated into the BRIDGE BARON and discusses what the program’s victory signifies for research on AI planning and game playing.", "title": "" }, { "docid": "b2911f3df2793066dde1af35f5a09d62", "text": "Cloud computing is drawing attention from both practitioners and researchers, and its adoption among organizations is on the rise. The focus has mainly been on minimizing fixed IT costs and using the IT resource flexibility offered by the cloud. However, the promise of cloud computing is much greater. As a disruptive technology, it enables innovative new services and business models that decrease time to market, create operational efficiencies and engage customers and citizens in new ways. However, we are still in the early days of cloud computing, and, for organizations to exploit the full potential, we need knowledge of the potential applications and pitfalls of cloud computing. 
Maturity models provide effective methods for organizations to assess, evaluate, and benchmark their capabilities as bases for developing roadmaps for improving weaknesses. Adopting the business-IT maturity model by Pearlson & Saunders (2007) as analytical framework, we synthesize the existing literature, identify levels of cloud computing benefits, and establish propositions for practice in terms of how to realize these benefits.", "title": "" }, { "docid": "32a597647795a7333b82827b55c209c9", "text": "This study investigates the relationship between the extent to which employees have opportunities to voice dissatisfaction and voluntary turnover in 111 short-term, general care hospitals. Results show that, whether or not a union is present, high numbers of mechanisms for employee voice are associated with high retention rates. Implications for theory and research as well as management practice are discussed.", "title": "" }, { "docid": "5793b2b2edbcb1443be7de07406f0fd2", "text": "Question answering is a complex and valuable task in natural language processing and artificial intelligence. Several deep learning models having already been proposed to solve it. In this work, we propose a deep learning model with an attention mechanism that is based on a previous work and a decoder that incorporates a wide summary of the context and question. That summary includes a condensed representation of the question, a context paragraph representation previous created by the model, as well as positional question summaries created by the attention mechanism. We demonstrate that a strong attention layer allows a deep learning model to do well even on long questions and context paragraphs in addition to contributing significantly to model performance.", "title": "" }, { "docid": "af952f9368761c201c5dfe4832686e87", "text": "The field of service design is expanding rapidly in practice, and a body of formal research is beginning to appear to which the present article makes an important contribution. As innovations in services develop, there is an increasing need not only for research into emerging practices and developments but also into the methods that enable, support and promote such unfolding changes. This article tackles this need directly by referring to a large design research project, and performing a related practicebased inquiry into the co-design and development of methods for fostering service design in organizations wishing to improve their service offerings to customers. In particular, with reference to a funded four-year research project, one aspect is elaborated on that uses cards as a method to focus on the importance and potential of touch-points in service innovation. Touch-points are one of five aspects in the project that comprise a wider, integrated model and means for implementing innovations in service design. Touch-points are the points of contact between a service provider and customers. A customer might utilise many different touch-points as part of a use scenario (often called a customer journey). For example, a bank’s touch points include its physical buildings, web-site, physical print-outs, self-service machines, bank-cards, customer assistants, call-centres, telephone assistance etc. Each time a person relates to, or interacts with, a touch-point, they have a service-encounter. This gives an experience and adds something to the person’s relationship with the service and the service provider. 
The sum of all experiences from touch-point interactions colours their opinion of the service (and the service provider). Touch-points are one of the central aspects of service design. A commonly used definition of service design is “Design for experiences that happen over time and across different touchpoints” (ServiceDesign.org). As this definition shows, touchpoints are often cited as one of the major elements of service", "title": "" }, { "docid": "80e6a7287c6da44387ceb3938dedb509", "text": "By taking advantage of the elevation domain, three-dimensional (3-D) multiple input and multiple output (MIMO) with massive antenna elements is considered as a promising and practical technique for the fifth Generation mobile communication system. So far, 3-D MIMO is mostly studied by simulation and a few field trials have been launched recently. It still remains unknown how much does the 3-D MIMO meet our expectations in versatile scenarios. In this paper, we answer this based on measurements with $56\\times 32$ antenna elements at 3.5 GHz with 100-MHz bandwidth in three typical deployment scenarios, including outdoor to indoor (O2I), urban microcell (UMi), and urban macrocell (UMa). Each scenario contains two different site locations and 2–5 test routes under the same configuration. Based on the measured data, both elevation and azimuth angles are extracted and their stochastic behaviors are investigated. Then, we reconstruct two-dimensional and 3-D MIMO channels based on the measured data, and compare the capacity and eigenvalues distribution. It is observed that 3-D MIMO channel which fully utilizes the elevation domain does improve capacity and also enhance the contributing eigenvalue number. However, this gain varies from scenario to scenario in reality, O2I is the most beneficial scenario, then followed by UMi and UMa scenarios. More results of multiuser capacity varying with the scenario, antenna number and user number can provide the experimental insights for the efficient utilization of 3-D MIMO in future.", "title": "" }, { "docid": "a839016be99c3cb93d30fa48403086d8", "text": "At synapses of the mammalian central nervous system, release of neurotransmitter occurs at rates transiently as high as 100 Hz, putting extreme demands on nerve terminals with only tens of functional vesicles at their disposal. Thus, the presynaptic vesicle cycle is particularly critical to maintain neurotransmission. To understand vesicle cycling at the most fundamental level, we studied single vesicles undergoing exo/endocytosis and tracked the fate of newly retrieved vesicles. This was accomplished by minimally stimulating boutons in the presence of the membrane-fluorescent styryl dye FM1-43, then selecting for terminals that contained only one dye-filled vesicle. We then observed the kinetics of dye release during single action potential stimulation. We found that most vesicles lost only a portion of their total dye during a single fusion event, but were able to fuse again soon thereafter. We interpret this as direct evidence of \"kiss-and-run\" followed by rapid reuse. Other interpretations such as \"partial loading\" and \"endosomal splitting\" were largely excluded on the basis of multiple lines of evidence. Our data placed an upper bound of <1.4 s on the lifetime of the kiss-and-run fusion event, based on the assumption that aqueous departitioning is rate limiting. The repeated use of individual vesicles held over a range of stimulus frequencies up to 30 Hz and was associated with neurotransmitter release. 
A small percentage of fusion events did release a whole vesicle's worth of dye in one action potential, consistent with a classical picture of exocytosis as fusion followed by complete collapse or at least very slow retrieval.", "title": "" }, { "docid": "5056c2a6f132c25e4b0ff1a79c72f508", "text": "The proliferation of Bluetooth Low-Energy (BLE) chipsets on mobile devices has lead to a wide variety of user-installable tags and beacons designed for location-aware applications. In this paper, we present the Acoustic Location Processing System (ALPS), a platform that augments BLE transmitters with ultrasound in a manner that improves ranging accuracy and can help users configure indoor localization systems with minimal effort. A user places three or more beacons in an environment and then walks through a calibration sequence with their mobile device where they touch key points in the environment like the floor and the corners of the room. This process automatically computes the room geometry as well as the precise beacon locations without needing auxiliary measurements. Once configured, the system can track a user's location referenced to a map.\n The platform consists of time-synchronized ultrasonic transmitters that utilize the bandwidth just above the human hearing limit, where mobile devices are still sensitive and can detect ranging signals. To aid in the mapping process, the beacons perform inter-beacon ranging during setup. Each beacon includes a BLE radio that can identify and trigger the ultrasonic signals. By using differences in propagation characteristics between ultrasound and radio, the system can classify if beacons are within Line-Of-Sight (LOS) to the mobile phone. In cases where beacons are blocked, we show how the phone's inertial measurement sensors can be used to supplement localization data. We experimentally evaluate that our system can estimate three-dimensional beacon location with a Euclidean distance error of 16.1cm, and can generate maps with room measurements with a two-dimensional Euclidean distance error of 19.8cm. When tested in six different environments, we saw that the system can identify Non-Line-Of-Sight (NLOS) signals with over 80% accuracy and track a user's location to within less than 100cm.", "title": "" }, { "docid": "facc1845ddde1957b2c1b74a62d74261", "text": "The large availability of user provided contents on online social media facilitates people aggregation around shared beliefs, interests, worldviews and narratives. In spite of the enthusiastic rhetoric about the so called collective intelligence unsubstantiated rumors and conspiracy theories-e.g., chemtrails, reptilians or the Illuminati-are pervasive in online social networks (OSN). In this work we study, on a sample of 1.2 million of individuals, how information related to very distinct narratives-i.e. main stream scientific and conspiracy news-are consumed and shape communities on Facebook. Our results show that polarized communities emerge around distinct types of contents and usual consumers of conspiracy news result to be more focused and self-contained on their specific contents. To test potential biases induced by the continued exposure to unsubstantiated rumors on users' content selection, we conclude our analysis measuring how users respond to 4,709 troll information-i.e. parodistic and sarcastic imitation of conspiracy theories. 
We find that 77.92% of likes and 80.86% of comments are from users usually interacting with conspiracy stories.", "title": "" }, { "docid": "8e3bf062119c6de9fa5670ce4b00764b", "text": "Heating red phosphorus in sealed ampoules in the presence of a Sn/SnI4 catalyst mixture has provided bulk black phosphorus at much lower pressures than those required for allotropic conversion by anvil cells. Herein we report the growth of ultra-long 1D red phosphorus nanowires (>1 mm) selectively onto a wafer substrate from red phosphorus powder and a thin film of red phosphorus in the present of a Sn/SnI4 catalyst. Raman spectra and X-ray diffraction characterization suggested the formation of crystalline red phosphorus nanowires. FET devices constructed with the red phosphorus nanowires displayed a typical I-V curve similar to that of black phosphorus and a similar mobility reaching 300 cm(2)  V(-1)  s with an Ion /Ioff ratio approaching 10(2) . A significant response to infrared light was observed from the FET device.", "title": "" }, { "docid": "b6249dbd61928a0722e0bcbf18cd9f79", "text": "For many applications such as tele-operational robots and interactions with virtual environments, it is better to have performance with force feedback than without. Haptic devices are force reflecting interfaces. They can also track human hand positions simultaneously. A new 6 DOF (degree-of-freedom) haptic device was designed and calibrated in this study. It mainly contains a double parallel linkage, a rhombus linkage, a rotating mechanical structure and a grasping interface. Benefited from the unique design, it is a hybrid structure device with a large workspace and high output capability. Therefore, it is capable of multi-finger interactions. Moreover, with an adjustable base, operators can change different postures without interrupting haptic tasks. To investigate the performance regarding position tracking accuracy and static output forces, we conducted experiments on a three-dimensional electric sliding platform and a digital force gauge, respectively. Displacement errors and force errors are calculated and analyzed. To identify the capability and potential of the device, four application examples were programmed.", "title": "" }, { "docid": "03dcb05a6aa763b6b0a5cdc58ddb81d8", "text": "In this paper, a phase-shifted dual H-bridge converter, which can solve the drawbacks of existing phase-shifted full-bridge converters such as narrow zero-voltage-switching (ZVS) range, large circulating current, large duty-cycle loss, and serious secondary-voltage overshoot and oscillation, is analyzed and evaluated. The proposed topology is composed of two symmetric half-bridge inverters that are placed in parallel on the primary side and are driven in a phase-shifting manner to regulate the output voltage. At the rectifier stage, a center-tap-type rectifier with two additional low-current-rated diodes is employed. This structure allows the proposed converter to have the advantages of a wide ZVS range, no problems related to duty-cycle loss, no circulating current, and the reduction of secondary-voltage oscillation and overshoot. Moreover, the output filter's size becomes smaller compared to the conventional phase-shift full-bridge converters. This paper describes the operation principle of the proposed converter and the analysis and design consideration in depth. 
A 1-kW 320-385-V input 50-V output laboratory prototype operating at a 100-kHz switching frequency is designed, built, and tested to verify the effectiveness of the presented converter.", "title": "" }, { "docid": "c5ff79665033fd215411069cb860d641", "text": "This paper presents a new geometry-based method to determine if a cable-driven robot operating in a d-degree-of-freedom workspace (2 ≤ d ≤ 6) with n ≥ d cables can generate a given set of wrenches in a given pose, considering acceptable minimum and maximum tensions in the cables. To this end, the fundamental nature of the Available Wrench Set is studied. The latter concept, defined here, is closely related to similar sets introduced in [23, 4]. It is shown that the Available Wrench Set can be represented mathematically by a zonotope, a special class of convex polytopes. Using the properties of zonotopes, two methods to construct the Available Wrench Set are discussed. From the representation of the Available Wrench Set, computationallyefficient and non-iterative tests are presented to verify if this set includes the Task Wrench Set, the set of wrenches needed for a given task. INTRODUCTION AND PROBLEM DEFINITION A cable-driven robot, or simply cable robot, is a parallel robot whose actuated limbs are cables. The length of the cables can be adjusted in a coordinated manner to control the pose (position and orientation) and/or wrench (force and torque) at the moving platform. Pioneer applications of such mechanisms are the NIST Robocrane [1], the Falcon high-speed manipulator [15] and the Skycam [7]. The fact that cables can only exert efforts in one direction impacts the capability of the mechanism to generate wrenches at the platform. Previous work already presented methods to test if a set of wrenches – ranging from one to all possible wrenches – could be generated by a cable robot in a given pose, considering that cables work only in tension. Some of the proposed methods focus on fully constrained cable robots while others apply to unconstrained robots. In all cases, minimum and/or maximum cable tensions is considered. A complete section of this paper is dedicated to the comparison of the proposed approach with previous methods. A general geometric approach that addresses all possible cases without using an iterative algorithm is presented here. It will be shown that the results obtained with this approach are consistent with the ones previously presented in the literature [4, 5, 14, 17, 18, 22, 23, 24, 26]. This paper does not address the workspace of cable robots. The latter challenging problem was addressed in several papers over the recent years [10, 11, 12, 19, 25]. Before looking globally at the workspace, all proposed methods must go through the intermediate step of assessing the capability of a mechanism to generate a given set of wrenches. The approach proposed here is also compared with the intermediate steps of the papers on the workspace determination of cable robots. The task that a robot has to achieve implies that it will have to be able to generate a given set of wrenches in a given pose x. This Task Wrench Set, T , depends on the various applications of the considered robot, which can be for example to move a camera or other sensors [7, 6, 9, 3], manipulate payloads [15, 1] or simulate walking sensations to a user immersed in virtual reality [21], just to name a few. The Available Wrench Set, A, is the set of wrenches that the mechanism can generate. 
This set depends on the architecture of the robot, i.e., where the cables are attached on the platform and where the fixed winches are located. It also depends on the configuration pose as well as on the minimum and maximum acceptable tension in the cables. All the wrenches that are possibly needed to accomplish a task can", "title": "" }, { "docid": "9fdd2b84fc412e03016a12d951e4be01", "text": "We examine the implications of shape on the process of finding dense correspondence and half-occlusions for a stereo pair of images. The desired property of the disparity map is that it should be a piecewise continuous function which is consistent with the images and which has the minimum number of discontinuities. To zeroth order, piecewise continuity becomes piecewise constancy. Using this approximation, we first discuss an approach for dealing with such a fronto-parallel shapeless world, and the problems involved therein. We then introduce horizontal and vertical slant to create a first order approximation to piecewise continuity. In particular, we emphasize the following geometric fact: a horizontally slanted surface (i.e., having depth variation in the direction of the separation of the two cameras) will appear horizontally stretched in one image as compared to the other image. Thus, while corresponding two images, N pixels on a scanline in one image may correspond to a different number of pixels M in the other image. This leads to three important modifications to existing stereo algorithms: (a) due to unequal sampling, existing intensity matching metrics must be modified, (b) unequal numbers of pixels in the two images must be allowed to correspond to each other, and (c) the uniqueness constraint, which is often used for detecting occlusions, must be changed to an interval uniqueness constraint. We also discuss the asymmetry between vertical and horizontal slant, and the central role of non-horizontal edges in the context of vertical slant. Using experiments, we discuss cases where existing algorithms fail, and how the incorporation of these new constraints provides correct results.", "title": "" }, { "docid": "639dad156dee50e41c05ac1c77abc3e2", "text": "Digital radiography offers the potential of improved image quality as well as providing opportunities for advances in medical image management, computer-aided diagnosis and teleradiology. Image quality is intimately linked to the precise and accurate acquisition of information from the x-ray beam transmitted by the patient, i.e. to the performance of the x-ray detector. Detectors for digital radiography must meet the needs of the specific radiological procedure where they will be used. Key parameters are spatial resolution, uniformity of response, contrast sensitivity, dynamic range, acquisition speed and frame rate. The underlying physical considerations defining the performance of x-ray detectors for radiography will be reviewed. Some of the more promising existing and experimental detector technologies which may be suitable for digital radiography will be considered. Devices that can be employed in full-area detectors and also those more appropriate for scanning x-ray systems will be discussed. 
These include various approaches based on phosphor x-ray converters, where light quanta are produced as an intermediate stage, as well as direct x-ray-to-charge conversion materials such as zinc cadmium telluride, amorphous selenium and crystalline silicon.", "title": "" }, { "docid": "753d840a62fc4f4b57f447afae07ba84", "text": "Feature selection has been proven to be effective and efficient in preparing high-dimensional data for data mining and machine learning problems. Since real-world data is usually unlabeled, unsupervised feature selection has received increasing attention in recent years. Without label information, unsupervised feature selection needs alternative criteria to define feature relevance. Recently, data reconstruction error emerged as a new criterion for unsupervised feature selection, which defines feature relevance as the capability of features to approximate original data via a reconstruction function. Most existing algorithms in this family assume predefined, linear reconstruction functions. However, the reconstruction function should be data dependent and may not always be linear especially when the original data is high-dimensional. In this paper, we investigate how to learn the reconstruction function from the data automatically for unsupervised feature selection, and propose a novel reconstruction-based unsupervised feature selection framework REFS, which embeds the reconstruction function learning process into feature selection. Experiments on various types of real-world datasets demonstrate the effectiveness of the proposed framework REFS.", "title": "" }, { "docid": "e8c37cb37bf9f0a34eaa5e18908e751d", "text": "The purpose of this work is to study, analyze, and design a half-bridge series-parallel resonant inverter for induction heating applications. A pulse width modulation (PWM)-based double integral sliding mode voltage controlled buck converter is proposed for control the induction heating power. This type of controller is used in order to obtain very small steady state error, stable and fast dynamic response, and robustness against variations in the line voltage and converter parameters. A small induction heating coil is designed and constructed. A carbon steel (C45) cylindrical billet is used as a load. The induction heating load parameters (RL and LL) are measured at the resonant frequency of 85 kHz. The parameters of the resonant circuit are chosen for operation at resonant. The inverter is operated at unity power factor by phased locked loop (PLL) control irrespective of load variations, with maximum current gain, and practically no voltage spikes in the switching devices at turn-off, therefore no snubber circuit is used for operation at unity power factor. A power MOSFET transistor is used as a switching device for buck converter and the IGBT transistor is used as a switching device for the inverter. A complete designed system is simulated using Matlab/Simulink. All the electronic control circuits are designed and implemented. The practical results are compared with simulation results to verify the proposed induction heating system. A close agreement between simulation and practical results is noticed and a good performance is achieved.", "title": "" }, { "docid": "d4d46f30a1e918f89948110dc9c36464", "text": "Many real-world problems involve the optimization of multiple, possibly conflicting objectives. 
Multi-objective reinforcement learning (MORL) is a generalization of standard reinforcement learning where the scalar reward signal is extended to multiple feedback signals, in essence, one for each objective. MORL is the process of learning policies that optimize multiple criteria simultaneously. In this paper, we present a novel temporal difference learning algorithm that integrates the Pareto dominance relation into a reinforcement learning approach. This algorithm is a multi-policy algorithm that learns a set of Pareto dominating policies in a single run. We name this algorithm Pareto Q-learning and it is applicable in episodic environments with deterministic as well as stochastic transition functions. A crucial aspect of Pareto Q-learning is the updating mechanism that bootstraps sets of Q-vectors. One of our main contributions in this paper is a mechanism that separates the expected immediate reward vector from the set of expected future discounted reward vectors. This decomposition allows us to update the sets and to exploit the learned policies consistently throughout the state space. To balance exploration and exploitation during learning, we also propose three set evaluation mechanisms. These three mechanisms evaluate the sets of vectors to accommodate for standard action selection strategies, such as ε-greedy. More precisely, these mechanisms use multi-objective evaluation principles such as the hypervolume measure, the cardinality indicator and the Pareto dominance relation to select the most promising actions. We experimentally validate the algorithm on multiple environments with two and three objectives and we demonstrate that Pareto Q-learning outperforms current state-of-the-art MORL algorithms with respect to the hypervolume of the obtained policies. We note that (1) Pareto Q-learning is able to learn the entire Pareto front under the usual assumption that each state-action pair is sufficiently sampled, while (2) not being biased by the shape of the Pareto front. Furthermore, (3) the set evaluation mechanisms provide indicative measures for local action selection and (4) the learned policies can be retrieved throughout the state and action space.", "title": "" } ]
scidocsrr
c85f8681d358ea0d1fc530f772a66604
Using mathematical morphology for document skew estimation
[ { "docid": "9d7df3f82d844ff74f438537bd2927b9", "text": "Several approaches have previously been taken for identify ing document image skew. At issue are efficiency, accuracy, and robustness. We work dire ctly with the image, maximizing a function of the number of ON pixels in a scanline. Image rotat i n is simulated by either vertical shear or accumulation of pixel counts along sloped lines . Pixel sum differences on adjacent scanlines reduce isotropic background noise from non-text regions. To find the skew angle, a succession of values of this function are found. Angles are chosen hierarchically, typically with both a coarse sweep and a fine angular bifurcation. To inc rease efficiency, measurements are made on subsampled images that have been pre-filtered to m aximize sensitivity to image skew. Results are given for a large set of images, includi ng multiple and unaligned text columns, graphics and large area halftones. The measured in t insic angular error is inversely proportional to the number of sampling points on a scanline. This method does not indicate when text is upside-down, and i t also requires sampling the function at 90 degrees of rotation to measure text skew in lan dscape mode. However, such text orientation can be determined (as one of four direction s) by noting that roman characters in all languages have many more ascenders than descenders, a nd using morphological operations to identify such pixels. Only a small amount of text is r equired for accurate statistical determination of orientation, and images without text are i dentified as such.", "title": "" }, { "docid": "517de02a0eff7e5bf3e913ca74f09d10", "text": "Any paper document when converted to electronic form through standard digitizing devices, like scanners, is subject to a small tilt or skew. Meanwhile, a de-skewed document allows a more compact representation of its components, particularly text objects, such as words, lines, and paragraphs, where they can be represented by their rectilinear bounding boxes. This simplified representation leads to more efficient, robust, as well as simpler algorithms for document image analysis including optical character recognition (OCR). This paper presents a new method for automatic detection of skew in a document image using mathematical morphology. The proposed algorithm is extremely fast as well as independent of script forms.", "title": "" } ]
[ { "docid": "b93919bbb2dab3a687cccb71ee515793", "text": "The processing and analysis of colour images has become an important area of study and application. The representation of the RGB colour space in 3D-polar coordinates (hue, saturation and brightness) can sometimes simplify this task by revealing characteristics not visible in the rectangular coordinate representation. The literature describes many such spaces (HLS, HSV, etc.), but many of them, having been developed for computer graphics applications, are unsuited to image processing and analysis tasks. We describe the flaws present in these colour spaces, and present three prerequisites for 3D-polar coordinate colour spaces well-suited to image processing and analysis. We then derive 3D-polar coordinate representations which satisfy the prerequisites, namely a space based on the norm which has efficient linear transform functions to and from the RGB space; and an improved HLS (IHLS) space. The most important property of this latter space is a “well-behaved” saturation coordinate which, in contrast to commonly used ones, always has a small numerical value for near-achromatic colours, and is completely independent of the brightness function. Three applications taking advantage of the good properties of the IHLS space are described: the calculation of a saturation-weighted hue mean and of saturation-weighted hue histograms, and feature extraction using mathematical morphology. 1Updated July 16, 2003. 2Jean Serra is with the Centre de Morphologie Mathématique, Ecole des Mines de Paris, 35 rue Saint-Honoré, 77305 Fontainebleau cedex, France.", "title": "" }, { "docid": "8dd3b98c6e28db1de4a473c4d576e3c5", "text": "In this paper, we address the semisupervised distance metric learning problem and its applications in classification and image retrieval. First, we formulate a semisupervised distance metric learning model by considering the metric information of inner classes and interclasses. In this model, an adaptive parameter is designed to balance the inner metrics and intermetrics by using data structure. Second, we convert the model to a minimization problem whose variable is symmetric positive-definite matrix. Third, in implementation, we deduce an intrinsic steepest descent method, which assures that the metric matrix is strictly symmetric positive-definite at each iteration, with the manifold structure of the symmetric positive-definite matrix manifold. Finally, we test the proposed algorithm on conventional data sets, and compare it with other four representative methods. The numerical results validate that the proposed method significantly improves the classification with the same computational efficiency.", "title": "" }, { "docid": "72cfe76ea68d5692731531aea02444d0", "text": "Primary human tumor culture models allow for individualized drug sensitivity testing and are therefore a promising technique to achieve personalized treatment for cancer patients. This would especially be of interest for patients with advanced stage head and neck cancer. They are extensively treated with surgery, usually in combination with high-dose cisplatin chemoradiation. However, adding cisplatin to radiotherapy is associated with an increase in severe acute toxicity, while conferring only a minor overall survival benefit. Hence, there is a strong need for a preclinical model to identify patients that will respond to the intended treatment regimen and to test novel drugs. One of such models is the technique of culturing primary human tumor tissue. 
This review discusses the feasibility and success rate of existing primary head and neck tumor culturing techniques and their corresponding chemo- and radiosensitivity assays. A comprehensive literature search was performed and success factors for culturing in vitro are debated, together with the actual value of these models as preclinical prediction assay for individual patients. With this review, we aim to fill a gap in the understanding of primary culture models from head and neck tumors, with potential importance for other tumor types as well.", "title": "" }, { "docid": "6f1550434a03ff0cf47c73ae9592a2f6", "text": "This paper presents focused synthetic aperture radar (SAR) processing of airborne radar sounding data acquired with the High-Capability Radar Sounder system at 60 MHz. The motivation is to improve basal reflection analysis for water detection and to improve layer detection and tracking. The processing and reflection analyses are applied to data from Kamb Ice Stream, West Antarctica. The SAR processor correlates the radar data with reference echoes from subsurface point targets. The references are 1-D responses limited by the pulse nadir footprint or 2-D responses that include echo tails. Unfocused SAR and incoherent integration are included for comparison. Echoes are accurately preserved from along-track slopes up to about 0.5deg for unfocused SAR, 3deg for 1-D correlations, and 10deg for 2-D correlations. The noise/clutter levels increase from unfocused SAR to 1-D and 2-D correlations, but additional gain compensates at the basal interface. The basal echo signal-to-noise ratio improvement is typically about 5 dB, and up to 10 dB for 2-D correlations in rough regions. The increased noise degrades the clarity of internal layers in the 2-D correlations, but detection of layers with slopes greater than 3deg is improved. Reflection coefficients are computed for basal water detection, and the results are compared for the different processing methods. There is a significant increase in the detected water from unfocused SAR to 1-D correlations, indicating that substantial basal water exists on moderately sloped interfaces. Very little additional water is detected from the 2-D correlations. The results from incoherent integration are close to the focused SAR results, but the noise/clutter levels are much greater.", "title": "" }, { "docid": "13cfc33bd8611b3baaa9be37ea9d627e", "text": "Some of the more difficult to define aspects of the therapeutic process (empathy, compassion, presence) remain some of the most important. Teaching them presents a challenge for therapist trainees and educators alike. In this study, we examine our beginning practicum students' experience of learning mindfulness meditation as a way to help them develop therapeutic presence. Through thematic analysis of their journal entries a variety of themes emerged, including the effects of meditation practice, the ability to be present, balancing being and doing modes in therapy, and the development of acceptance and compassion for themselves and for their clients. Our findings suggest that mindfulness meditation may be a useful addition to clinical training.", "title": "" }, { "docid": "3dd732828151a63d090a2633e3e48fac", "text": "This article shows the potential for convex optimization methods to be much more widely used in signal processing. In particular, automatic code generation makes it easier to create convex optimization solvers that are made much faster by being designed for a specific problem family. 
The disciplined convex programming framework that has been shown useful in transforming problems to a standard form may be extended to create solvers themselves. Much work remains to be done in exploring the capabilities and limitations of automatic code generation. As computing power increases, and as automatic code generation improves, the authors expect convex optimization solvers to be found more and more often in real-time signal processing applications.", "title": "" }, { "docid": "ff826e50f789d4e47f30ec22396c365d", "text": "In present Scenario of the world, Internet has almost reached to every aspect of our lives. Due to this, most of the information sharing and communication is carried out using web. With such rapid development of Internet technology, a big issue arises of unauthorized access to confidential data, which leads to utmost need of information security while transmission. Cryptography and Steganography are two of the popular techniques used for secure transmission. Steganography is more reliable over cryptography as it embeds secret data within some cover material. Unlike cryptography, Steganography is not for keeping message hidden from intruders but it does not allow anyone to know that hidden information even exist in communicated material, as the transmitted material looks like any normal message which seem to be of no use for intruders. Although, Steganography covers many types of covers to hide data like text, image, audio, video and protocols but recent developments focuses on Image Steganography due to its large data hiding capacity and difficult identification, also due to their greater scope and bulk sharing within social networks. A large number of techniques are available to hide secret data within digital images such as LSB, ISB, and MLSB etc. In this paper, a detailed review will be presented on Image Steganography and also different data hiding and security techniques using digital images with their scope and features.", "title": "" }, { "docid": "339efad8a055a90b43abebd9a4884baa", "text": "The paper presents an investigation into the role of virtual reality and web technologies in the field of distance education. Within this frame, special emphasis is given on the building of web-based virtual learning environments so as to successfully fulfill their educational objectives. In particular, basic pedagogical methods are studied, focusing mainly on the efficient preparation, approach and presentation of learning content, and specific designing rules are presented considering the hypermedia, virtual and educational nature of this kind of applications. The paper also aims to highlight the educational benefits arising from the use of virtual reality technology in medicine and study the emerging area of web-based medical simulations. Finally, an innovative virtual reality environment for distance education in medicine is demonstrated. The proposed environment reproduces conditions of the real learning process and enhances learning through a real-time interactive simulator. Keywords—Distance education, medicine, virtual reality, web.", "title": "" }, { "docid": "095f4ea337421d6e1310acf73977fdaa", "text": "We consider the problem of autonomous robotic laundry folding, and propose a solution to the perception and manipulation challenges inherent to the task. 
At the core of our approach is a quasi-static cloth model which allows us to neglect the complex dynamics of cloth under significant parts of the state space, allowing us to reason instead in terms of simple geometry. We present an algorithm which, given a 2D cloth polygon and a desired sequence of folds, outputs a motion plan for executing the corresponding manipulations, deemed g-folds, on a minimal number of robot grippers. We define parametrized fold sequences for four clothing categories: towels, pants, short-sleeved shirts, and long-sleeved shirts, each represented as polygons. We then devise a model-based optimization approach for visually inferring the class and pose of a spread-out or folded clothing article from a single image, such that the resulting polygon provides a parse suitable for these folding primitives. We test the manipulation and perception tasks individually, and combine them to implement an autonomous folding system on the Willow Garage PR2. This enables the PR2 to identify a clothing article spread out on a table, execute the computed folding sequence, and visually track its progress over successive folds.", "title": "" }, { "docid": "b4e9696cc1804bb5bcf2006ef2705b11", "text": "The conductivity of a thermal-barrier coating composed of atmospheric plasma sprayed 8 mass percent yttria partially stabilized zirconia has been measured. This coating was sprayed on a substrate of 410 stainless steel. An absolute, steady-state measurement method was used to measure thermal conductivity from 400 to 800 K. The thermal conductivity of the coating is 0.62 W/(m⋅K). This measurement has shown to be temperature independent.", "title": "" }, { "docid": "e0ba4e4b7af3cba6bed51f2f697ebe5e", "text": "In this paper, we argue that instead of solely focusing on developing efficient architectures to accelerate well-known low-precision CNNs, we should also seek to modify the network to suit the FPGA. We develop a fully automative toolflow which focuses on modifying the network through filter pruning, such that it efficiently utilizes the FPGA hardware whilst satisfying a predefined accuracy threshold. Although fewer weights are re-moved in comparison to traditional pruning techniques designed for software implementations, the overall model complexity and feature map storage is greatly reduced. We implement the AlexNet and TinyYolo networks on the large-scale ImageNet and PascalVOC datasets, to demonstrate up to roughly 2× speedup in frames per second and 2× reduction in resource requirements over the original network, with equal or improved accuracy.", "title": "" }, { "docid": "4f0274c2303560867fb1f4fe922db86f", "text": "Cerebral activation was measured with positron emission tomography in ten human volunteers. The primary auditory cortex showed increased activity in response to noise bursts, whereas acoustically matched speech syllables activated secondary auditory cortices bilaterally. Instructions to make judgments about different attributes of the same speech signal resulted in activation of specific lateralized neural systems. Discrimination of phonetic structure led to increased activity in part of Broca's area of the left hemisphere, suggesting a role for articulatory recoding in phonetic perception. 
Processing changes in pitch produced activation of the right prefrontal cortex, consistent with the importance of right-hemisphere mechanisms in pitch perception.", "title": "" }, { "docid": "c0d646e248f240681e36113bf0ea41a3", "text": "Existing methods for multi-domain image-to-image translation (or generation) attempt to directly map an input image (or a random vector) to an image in one of the output domains. However, most existing methods have limited scalability and robustness, since they require building independent models for each pair of domains in question. This leads to two significant shortcomings: (1) the need to train exponential number of pairwise models, and (2) the inability to leverage data from other domains when training a particular pairwise mapping. Inspired by recent work on module networks [2], this paper proposes ModularGAN for multi-domain image generation and image-to-image translation. ModularGAN consists of several reusable and composable modules that carry on different functions (e.g., encoding, decoding, transformations). These modules can be trained simultaneously, leveraging data from all domains, and then combined to construct specific GAN networks at test time, according to the specific image translation task. This leads to ModularGAN’s superior flexibility of generating (or translating to) an image in any desired domain. Experimental results demonstrate that our model not only presents compelling perceptual results but also outperforms state-of-the-art methods on multi-domain facial attribute transfer.", "title": "" }, { "docid": "99c1b5ed924012118e72475dee609b3d", "text": "Lack of trust in online transactions has been cited, by past scholars, as the main reason for the abhorrence of online shopping. In this paper we proposed a model and provided empirical evidence on the impact of the website characteristics on trust in online transactions in Indian context. In the first phase, we identified and empirically verified the relative importance of the website factors that develop online trust in India. In the next phase, we have tested the mediator effect of trust in the relationship between the website factors and purchase intention (and perceived risk). The present study for the first time provided empirical evidence on the mediating role of trust in online shopping among Indian customers.", "title": "" }, { "docid": "a10b5e26b695b704f2329ff7995d099e", "text": "I draw the reader’s attention to machine teaching, the problem of finding an optimal training set given a machine learning algorithm and a target model. In addition to generating fascinating mathematical questions for computer scientists to ponder, machine teaching holds the promise of enhancing education and personnel training. The Socratic dialogue style aims to stimulate critical thinking.", "title": "" }, { "docid": "8c80b8b0e00fa6163d945f7b1b8f63e5", "text": "In this paper, we propose an architecture model called Design Rule Space (DRSpace). We model the architecture of a software system as multiple overlapping DRSpaces, reflecting the fact that any complex software system must contain multiple aspects, features, patterns, etc. We show that this model provides new ways to analyze software quality. In particular, we introduce an Architecture Root detection algorithm that captures DRSpaces containing large numbers of a project’s bug-prone files, which are called Architecture Roots (ArchRoots). 
After investigating ArchRoots calculated from 15 open source projects, the following observations become clear: from 35% to 91% of a project’s most bug-prone files can be captured by just 5 ArchRoots, meaning that bug-prone files are likely to be architecturally connected. Furthermore, these ArchRoots tend to live in the system for significant periods of time, serving as the major source of bug-proneness and high maintainability costs. Moreover, each ArchRoot reveals multiple architectural flaws that propagate bugs among files and this will incur high maintenance costs over time. The implication of our study is that the quality, in terms of bug-proneness, of a large, complex software project cannot be fundamentally improved without first fixing its architectural flaws.", "title": "" }, { "docid": "82985f584f51a5e103b29265878335e5", "text": "Orthodontic management for patients with single or bilateral congenitally missing permanent lateral incisors is a challenge to effective treatment planning. Over the last several decades, dentistry has focused on several treatment modalities for replacement of missing teeth. The two major alternative treatment options are orthodontic space closure or space opening for prosthetic replacements. For patients with high aesthetic expectations implants are one of the treatment of choices, especially when it comes to replacement of missing maxillary lateral incisors and mandibular incisors. Edentulous areas where the available bone is compromised to use conventional implants with 2,5 mm or more in diameter, narrow diameter implants with less than 2,5 mm diameter can be successfully used. This case report deals with managing a compromised situation in the region of maxillary lateral incisor using a narrow diameter implant.", "title": "" }, { "docid": "e856bca86bb757d11b30f3a3916fa06c", "text": "A X-band reconfigurable active phased array antenna system is presented. The phased array system consists of interconnected tile modules of which number can be flexibly changed depending on system requirements. The PCB integrated tile module assembles 4×4 patch antennas and flip-chipped phased array 0.13-μm SiGe BiCMOS ICs with 5-bit IF phase shifters. The concept of scalable phased array is verified by narrowing beamwidth in beamforming pattern and improving SNR in data transmission of 64-QAM OFDM signal as increasing the number of antenna elements.", "title": "" }, { "docid": "66acaa4909502a8d7213366e0667c3c2", "text": "Facial rejuvenation, particularly lip augmentation, has gained widespread popularity. An appreciation of perioral anatomy as well as the structural characteristics that define the aging face is critical to achieve optimal patient outcomes. Although techniques and technology evolve continuously, hyaluronic acid (HA) dermal fillers continue to dominate aesthetic practice. A combination approach including neurotoxin and volume restoration demonstrates superior results in select settings.", "title": "" }, { "docid": "669de02f4c87c2a67e776410f70bf801", "text": "Repeating an item in a list benefits recall performance, and this benefit increases when the repetitions are spaced apart (Madigan, 1969; Melton, 1970). Retrieved context theory incorporates 2 mechanisms that account for these effects: contextual variability and study-phase retrieval. 
Specifically, if an item presented at position i is repeated at position j, this leads to retrieval of its context from its initial presentation at i (study-phase retrieval), and this retrieved context will be used to update the current state of context (contextual variability). Here we consider predictions of a computational model that embodies retrieved context theory, the context maintenance and retrieval model (CMR; Polyn, Norman, & Kahana, 2009). CMR makes the novel prediction that subjects are more likely to successively recall items that follow a shared repeated item (e.g., i + 1, j + 1) because both items are associated with the context of the repeated item presented at i and j. CMR also predicts that the probability of recalling at least 1 of 2 studied items should increase with the items' spacing (Lohnas, Polyn, & Kahana, 2011). We tested these predictions in a new experiment, and CMR's predictions were upheld. These findings suggest that retrieved context theory offers an integrated explanation for repetition and spacing effects in free recall tasks.", "title": "" } ]
scidocsrr
eb97c4e814cfff02c7fc273eab5218f0
3D region segmentation using topological persistence
[ { "docid": "6ed624fa056d1f92cc8e58401ab3036e", "text": "In this paper, we present an approach to segment 3D point cloud data using ideas from persistent homology theory. The proposed algorithms first generate a simplicial complex representation of the point cloud dataset. Next, we compute the zeroth homology group of the complex which corresponds to the number of connected components. Finally, we extract the clusters of each connected component in the dataset. We show that this technique has several advantages over state of the art methods such as the ability to provide a stable segmentation of point cloud data under noisy or poor sampling conditions and its independence of a fixed distance metric.", "title": "" } ]
[ { "docid": "548ca7ecd778bc64e4a3812acd73dcfb", "text": "Inference algorithms of latent Dirichlet allocation (LDA), either for small or big data, can be broadly categorized into expectation-maximization (EM), variational Bayes (VB) and collapsed Gibbs sampling (GS). Looking for a unified understanding of these different inference algorithms is currently an important open problem. In this paper, we revisit these three algorithms from the entropy perspective, and show that EM can achieve the best predictive perplexity (a standard performance metric for LDA accuracy) by minimizing directly the cross entropy between the observed word distribution and LDA's predictive distribution. Moreover, EM can change the entropy of LDA's predictive distribution through tuning priors of LDA, such as the Dirichlet hyperparameters and the number of topics, to minimize the cross entropy with the observed word distribution. Finally, we propose the adaptive EM (AEM) algorithm that converges faster and more accurate than the current state-of-the-art SparseLDA [20] and AliasLDA [12] from small to big data and LDA models. The core idea is that the number of active topics, measured by the residuals between E-steps at successive iterations, decreases significantly, leading to the amortized σ(1) time complexity in terms of the number of topics. The open source code of AEM is available at GitHub.", "title": "" }, { "docid": "40128351f90abde13925799756dc1511", "text": "A new field of forensic accounting has emerged as current practices have been changed in electronic business environment and rapidly increasing fraudulent activities. Despite taking many forms, the fraud is usually theft of funds and information or misuse of someone's information assets. As financial frauds prevail in digital environment, accountants are the most helpful people to investigate them. However, forensic accountants in digital environment, usually called fraud investigators or fraud examiners, must be specially trained to investigate and report digital evidences in the courtroom. In this paper, the authors researched the case of financial fraud forensic analysis of the Microsoft Excel file, as it is very often used in financial reporting. We outlined some of the well-known difficulties involved in tracing the fraudster activities throughout extracted Excel file metadata, and applied a different approach from that well-described in classic postmortem computer system forensic analysis or in data mining techniques application. In the forensic examination steps we used open source code, Deft 7.1 (Digital evidence & forensic toolkit) and verified results by the other forensic tools, Meld a visual diff and merge tool to compare files and directories and KDiff tool, too. We proposed an integrated forensic accounting, functional model as a combined accounting, auditing and digital forensic investigative process. Before this approach can be properly validated some future work needs to be done, too.", "title": "" }, { "docid": "e2302f7cd00b4c832a6a708dc6775739", "text": "This article provides theoretically and practically grounded assistance to companies that are today engaged primarily in non‐digital industries in the development and implementation of business models that use the Internet of Things. To that end, we investigate the role of the Internet in business models in general in the first section. 
We conclude that the significance of the Internet in business model innovation has increased steadily since the 1990s, that each new Internet wave has given rise to new digital business model patterns, and that the biggest breakthroughs to date have been made in digital industries. In the second section, we show that digital business model patterns have now become relevant in physical industries as well. The separation between physical and digital industries is now consigned to the past. The key to this transformation is the Internet of Things which makes possible hybrid solutions that merge physical products and digital services. From this, we derive very general business model logic for the Internet of Things and some specific components and patterns for business models. Finally we sketch out the central challenges faced in implementing such hybrid business models and point to possible solutions. The Influence of the Internet on Business Models to Date", "title": "" }, { "docid": "6b4a4e5271f5a33d3f30053fc6c1a4ff", "text": "Based on environmental, legal, social, and economic factors, reverse logistics and closed-loop supply chain issues have attracted attention among both academia and practitioners. This attention is evident by the vast number of publications in scientific journals which have been published in recent years. Hence, a comprehensive literature review of recent and state-of-the-art papers is vital to draw a framework of the past, and to shed light on future directions. The aim of this paper is to review recently published papers in reverse logistic and closed-loop supply chain in scientific journals. A total of 382 papers published between January 2007 and March 2013 are selected and reviewed. The papers are then analyzed and categorized to construct a useful foundation of past research. Finally, gaps in the literature are identified to clarify and to suggest future research opportunities. 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "0b18f7966a57e266487023d3a2f3549d", "text": "A clear andpowerfulformalism for describing languages, both natural and artificial, follows f iom a method for expressing grammars in logic due to Colmerauer and Kowalski. This formalism, which is a natural extension o f context-free grammars, we call \"definite clause grammars\" (DCGs). A DCG provides not only a description of a language, but also an effective means for analysing strings o f that language, since the DCG, as it stands, is an executable program o f the programming language Prolog. Using a standard Prolog compiler, the DCG can be compiled into efficient code, making it feasible to implement practical language analysers directly as DCGs. This paper compares DCGs with the successful and widely used augmented transition network (ATN) formalism, and indicates how ATNs can be translated into DCGs. It is argued that DCGs can be at least as efficient as ATNs, whilst the DCG formalism is clearer, more concise and in practice more powerful", "title": "" }, { "docid": "ae167d6e1ff2b1ee3bd23e3e02800fab", "text": "The aim of this paper is to improve the classification performance based on the multiclass imbalanced datasets. In this paper, we introduce a new resampling approach based on Clustering with sampling for Multiclass Imbalanced classification using Ensemble (C-MIEN). C-MIEN uses the clustering approach to create a new training set for each cluster. The new training sets consist of the new label of instances with similar characteristics. 
This step is applied to reduce the number of classes then the complexity problem can be easily solved by C-MIEN. After that, we apply two resampling techniques (oversampling and undersampling) to rebalance the class distribution. Finally, the class distribution of each training set is balanced and ensemble approaches are used to combine the models obtained with the proposed method through majority vote. Moreover, we carefully design the experiments and analyze the behavior of C-MIEN with different parameters (imbalance ratio and number of classifiers). The experimental results show that C-MIEN achieved higher performance than state-of-the-art methods.", "title": "" }, { "docid": "3bb905351ce1ea2150f37059ed256a90", "text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "title": "" }, { "docid": "b4c395b97f0482f3c1224ed6c8623ac2", "text": "The Scientific Computation Language (SCL) was designed mainly for developing computational models in education and research. This paper presents the justification for such a language, its relevant features, and a case study of a computational model implemented with the SCL.\n Development of the SCL language is part of the OOPsim project, which has had partial NSF support (CPATH). One of the goals of this project is to develop tools and approaches for designing and implementing computational models, emphasizing multi-disciplinary teams in the development process.\n A computational model is a computer implementation of the solution to a (scientific) problem for which a mathematical representation has been formulated. Developing a computational model consists of applying Computer Science concepts, principles and methods.\n The language syntax is defined at a higher level of abstraction than C, and includes language statements for improving program readability, debugging, maintenance, and correctness. The language design was influenced by Ada, Pascal, Eiffel, Java, C, and C++.\n The keywords have been added to maintain full compatibility with C. The SCL language translator is an executable program that is implemented as a one-pass language processor that generates C source code. The generated code can be integrated conveniently with any C and/or C++ library, on Linux and Windows (and MacOS). 
The semantics of SCL is informally defined to be the same C semantics.", "title": "" }, { "docid": "2f4a4c223c13c4a779ddb546b3e3518c", "text": "Machine learning systems trained on user-provided data are susceptible to data poisoning attacks, whereby malicious users inject false training data with the aim of corrupting the learned model. While recent work has proposed a number of attacks and defenses, little is understood about the worst-case loss of a defense in the face of a determined attacker. We address this by constructing approximate upper bounds on the loss across a broad family of attacks, for defenders that first perform outlier removal followed by empirical risk minimization. Our approximation relies on two assumptions: (1) that the dataset is large enough for statistical concentration between train and test error to hold, and (2) that outliers within the clean (nonpoisoned) data do not have a strong effect on the model. Our bound comes paired with a candidate attack that often nearly matches the upper bound, giving us a powerful tool for quickly assessing defenses on a given dataset. Empirically, we find that even under a simple defense, the MNIST-1-7 and Dogfish datasets are resilient to attack, while in contrast the IMDB sentiment dataset can be driven from 12% to 23% test error by adding only 3% poisoned data.", "title": "" }, { "docid": "81a45cb4ca02c38839a81ad567eb1491", "text": "Big data is often mined using clustering algorithms. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is a popular spatial clustering algorithm. However, it is computationally expensive and thus for clustering big data, parallel processing is required. The two prevalent paradigms for parallel processing are High-Performance Computing (HPC) based on Message Passing Interface (MPI) or Open Multi-Processing (OpenMP) and the newer big data frameworks such as Apache Spark or Hadoop. This report surveys for these two different paradigms publicly available implementations that aim at parallelizing DBSCAN and compares their performance. As a result, it is found that the big data implementations are not yet mature and in particular for skewed data, the implementation’s decomposition of the input data into parallel tasks has a huge influence on the performance in terms of running time.", "title": "" }, { "docid": "8aadc690d86ad4c015a4a82a32336336", "text": "The complexities of various search algorithms are considered in terms of time, space, and cost of the solution paths. • Brute-force search . Breadth-first search (BFS) . Depth-first search (DFS) . Depth-first Iterative-deepening (DFID) . Bi-directional search • Heuristic search: best-first search . A∗ . IDA∗ The issue of storing information in DISK instead of main memory. Solving 15-puzzle. TCG: DFID, 20121120, Tsan-sheng Hsu c © 2", "title": "" }, { "docid": "e740e5ff2989ce414836c422c45570a9", "text": "Many organizations desired to operate their businesses, works and services in a mobile (i.e. just in time and anywhere), dynamic, and knowledge-oriented fashion. Activities like e-learning, environmental learning, remote inspection, health-care, home security and safety mechanisms etc. requires a special infrastructure that might provide continuous, secured, reliable and mobile data with proper information/ knowledge management system in context to their confined environment and its users. 
An indefinite number of sensor networks for numerous healthcare applications has been designed and implemented but they all lacking extensibility, fault-tolerance, mobility, reliability and openness. Thus, an open, flexible and rearrangeable infrastructure is proposed for healthcare monitoring applications. Where physical sensors are virtualized as virtual sensors on cloud computing by this infrastructure and virtual sensors are provisioned automatically to end users whenever they required. In this paper we reviewed some approaches to hasten the service creations in field of healthcare and other applications with Cloud-Sensor architecture. This architecture provides services to end users without being worried about its implementation details. The architecture allows the service requesters to use the virtual sensors by themselves or they may create other new services by extending virtual sensors.", "title": "" }, { "docid": "459de602bf6e46ad4b752f2e51c81ffa", "text": "Self-adaptation is an essential feature of natural evolution. However, in the context of function optimization, self-adaptation features of evolutionary search algorithms have been explored mainly with evolution strategy (ES) and evolutionary programming (EP). In this paper, we demonstrate the self-adaptive feature of real-parameter genetic algorithms (GAs) using a simulated binary crossover (SBX) operator and without any mutation operator. The connection between the working of self-adaptive ESs and real-parameter GAs with the SBX operator is also discussed. Thereafter, the self-adaptive behavior of real-parameter GAs is demonstrated on a number of test problems commonly used in the ES literature. The remarkable similarity in the working principle of real-parameter GAs and self-adaptive ESs shown in this study suggests the need for emphasizing further studies on self-adaptive GAs.", "title": "" }, { "docid": "587f7821fc7ecfe5b0bbbd3b08b9afe2", "text": "The most commonly used method for cuffless blood pressure (BP) measurement is using pulse transit time (PTT), which is based on Moens-Korteweg (M-K) equation underlying the assumption that arterial geometries such as the arterial diameter keep unchanged. However, the arterial diameter is dynamic which varies over the cardiac cycle, and it is regulated through the contraction or relaxation of the vascular smooth muscle innervated primarily by the sympathetic nervous system. This may be one of the main reasons that impair the BP estimation accuracy. In this paper, we propose a novel indicator, the photoplethysmogram (PPG) intensity ratio (PIR), to evaluate the arterial diameter change. The deep breathing (DB) maneuver and Valsalva maneuver (VM) were performed on five healthy subjects for assessing parasympathetic and sympathetic nervous activities, respectively. Heart rate (HR), PTT, PIR and BP were measured from the simultaneously recorded electrocardiogram (ECG), PPG, and continuous BP. It was found that PIR increased significantly from inspiration to expiration during DB, whilst BP dipped correspondingly. Nevertheless, PIR changed positively with BP during VM. In addition, the spectral analysis revealed that the dominant frequency component of PIR, HR and SBP, shifted significantly from high frequency (HF) to low frequency (LF), but not obvious in that of PTT. These results demonstrated that PIR can be potentially used to evaluate the smooth muscle tone which modulates arterial BP in the LF range. 
The PTT-based BP measurement that take into account the PIR could therefore improve its estimation accuracy.", "title": "" }, { "docid": "ba4600c9c8e4c1bfcec9fa8fcde0f05c", "text": "While things (i.e., technologies) play a crucial role in creating and shaping meaningful, positive experiences, their true value lies only in the resulting experiences. It is about what we can do and experience with a thing, about the stories unfolding through using a technology, not about its styling, material, or impressive list of features. This paper explores the notion of \"experiences\" further: from the link between experiences, well-being, and people's developing post-materialistic stance to the challenges of the experience market and the experience-driven design of technology.", "title": "" }, { "docid": "1ed9f257129a45388fcf976b87e37364", "text": "Mobile cloud computing is an extension of cloud computing that allow the users to access the cloud service via their mobile devices. Although mobile cloud computing is convenient and easy to use, the security challenges are increasing significantly. One of the major issues is unauthorized access. Identity Management enables to tackle this issue by protecting the identity of users and controlling access to resources. Although there are several IDM frameworks in place, they are vulnerable to attacks like timing attacks in OAuth, malicious code attack in OpenID and huge amount of information leakage when user’s identity is compromised in Single Sign-On. Our proposed framework implicitly authenticates a user based on user’s typing behavior. The authentication information is encrypted into homomorphic signature before being sent to IDM server and tokens are used to authorize users to access the cloud resources. Advantages of our proposed framework are: user’s identity protection and prevention from unauthorized access.", "title": "" }, { "docid": "03b2876a4b62a6e10e8523cccc32452a", "text": "Millions of people regularly report the details of their real-world experiences on social media. This provides an opportunity to observe the outcomes of common and critical situations. Identifying and quantifying these outcomes may provide better decision-support and goal-achievement for individuals, and help policy-makers and scientists better understand important societal phenomena. We address several open questions about using social media data for open-domain outcome identification: Are the words people are more likely to use after some experience relevant to this experience? How well do these words cover the breadth of outcomes likely to occur for an experience? What kinds of outcomes are discovered? Studying 3-months of Twitter data capturing people who experienced 39 distinct situations across a variety of domains, we find that these outcomes are generally found to be relevant (55-100% on average) and that causally related concepts are more likely to be discovered than conceptual or semantically related concepts.", "title": "" }, { "docid": "ff08d2e0d53f2d9a7d49f0fdd820ec7a", "text": "Milk contains numerous nutrients. The content of n-3 fatty acids, the n-6/n-3 ratio, and short- and medium-chain fatty acids may promote positive health effects. In Western societies, cow’s milk fat is perceived as a risk factor for health because it is a source of a high fraction of saturated fatty acids. Recently, there has been increasing interest in donkey’s milk. 
In this work, the fat and energetic value and acidic composition of donkey’s milk, with reference to human nutrition, and their variations during lactation, were investigated. We also discuss the implications of the acidic profile of donkey’s milk on human nutrition. Individual milk samples from lactating jennies were collected 15, 30, 45, 60, 90, 120, 150, 180 and 210days after foaling, for the analysis of fat, proteins and lactose, which was achieved using an infrared milk analyser, and fatty acids composition by gas chromatography. The donkey’s milk was characterised by low fat and energetic (1719.2kJ·kg-1) values, a high polyunsaturated fatty acids (PUFA) content of mainly α-linolenic acid (ALA) and linoleic acid (LA), a low n-6 to n-3 FA ratio or LA/ALA ratio, and advantageous values of atherogenic and thrombogenic indices. Among the minor PUFA, docosahesaenoic (DHA), eicosapentanoic (EPA), and arachidonic (AA) acids were present in very small amounts (<1%). In addition, the AA/EPA ratio was low (0.18). The fat and energetic values decreased (P < 0.01) during lactation. The fatty acid patterns were affected by the lactation stage and showed a decrease (P < 0.01) in saturated fatty acids content and an increase (P < 0.01) in the unsaturated fatty acids content. The n-6 to n-3 ratio and the LA/ALA ratio were approximately 2:1, with values <1 during the last period of lactation, suggesting the more optimal use of milk during this period. The high level of unsaturated/saturated fatty acids and PUFA-n3 content and the low n-6/n-3 ratio suggest the use of donkey’s milk as a functional food for human nutrition and its potential utilisation for infant nutrition as well as adult diets, particular for the elderly.", "title": "" }, { "docid": "8cbd4a4adf82c385a6c821fde08d16e9", "text": "The internet of things (IOT) is the new revolution of internet after PCS and ServersClients communication now sensors, smart object, wearable devices, and smart phones are able to communicate. Everything surrounding us can talk to each other. life will be easier and smarter with smart environment, smart homes,smart cities and intelligent transport and healthcare.Billions of devices will be communicating wirelessly is a real huge challenge to our security and privacy.IOT requires efficient and effective security solutions which satisfies IOT requirements, The low power, small memory and limited computational capabilities . This paper addresses various standards, protocols and technologies of IOT and different security attacks which may compromise IOT security and privacy.", "title": "" }, { "docid": "de4d14afaf6a24fcd831e2a293c30fc3", "text": "Artistic style transfer can be thought as a process to generate different versions of abstraction of the original image. However, most of the artistic style transfer operators are not optimized for human faces thus mainly suffers from two undesirable features when applying them to selfies. First, the edges of human faces may unpleasantly deviate from the ones in the original image. Second, the skin color is far from faithful to the original one which is usually problematic in producing quality selfies. In this paper, we take a different approach and formulate this abstraction process as a gradient domain learning problem. We aim to learn a type of abstraction which not only achieves the specified artistic style but also circumvents the two aforementioned drawbacks thus highly applicable to selfie photography. 
We also show that our method can be directly generalized to videos with high inter-frame consistency. Our method is also robust to non-selfie images, and the generalization to various kinds of real-life scenes is discussed. We will make our code publicly available.", "title": "" } ]
scidocsrr
1344b287c0ab3d80c035ac740d55dd32
Broadband microstrip-line-fed circularly-polarized circular slot antenna
[ { "docid": "fb3e9503a9f4575f5ecdbfaaa80638d0", "text": "This paper presents a new wideband circularly polarized square slot antenna (CPSSA) with a coplanar waveguide (CPW) feed. The proposed antenna features two inverted-L grounded strips around two opposite corners of the slot and a widened tuning stub protruded into the slot from the signal strip of the CPW. Broadside circular-polarization (CP) radiation can be easily obtained using a simple design procedure. For the optimized antenna prototype, the measured bandwidth with an axial ratio (AR) of less than 3 dB is larger than 25% and the measured VSWR les 2 impedance bandwidth is as large as 52%.", "title": "" } ]
[ { "docid": "281eb03143a40df5b0267ac45bbd4f3e", "text": "The biology of fracture healing is a complex biological process that follows specific regenerative patterns and involves changes in the expression of several thousand genes. Although there is still much to be learned to fully comprehend the pathways of bone regeneration, the over-all pathways of both the anatomical and biochemical events have been thoroughly investigated. These efforts have provided a general understanding of how fracture healing occurs. Following the initial trauma, bone heals by either direct intramembranous or indirect fracture healing, which consists of both intramembranous and endochondral bone formation. The most common pathway is indirect healing, since direct bone healing requires an anatomical reduction and rigidly stable conditions, commonly only obtained by open reduction and internal fixation. However, when such conditions are achieved, the direct healing cascade allows the bone structure to immediately regenerate anatomical lamellar bone and the Haversian systems without any remodelling steps necessary. In all other non-stable conditions, bone healing follows a specific biological pathway. It involves an acute inflammatory response including the production and release of several important molecules, and the recruitment of mesenchymal stem cells in order to generate a primary cartilaginous callus. This primary callus later undergoes revascularisation and calcification, and is finally remodelled to fully restore a normal bone structure. In this article we summarise the basic biology of fracture healing.", "title": "" }, { "docid": "fcd320ce68efa45dace6b798aa64dacd", "text": "We focus on two leading state-of-the-art approaches to grammatical error correction – machine learning classification and machine translation. Based on the comparative study of the two learning frameworks and through error analysis of the output of the state-of-the-art systems, we identify key strengths and weaknesses of each of these approaches and demonstrate their complementarity. In particular, the machine translation method learns from parallel data without requiring further linguistic input and is better at correcting complex mistakes. The classification approach possesses other desirable characteristics, such as the ability to easily generalize beyond what was seen in training, the ability to train without human-annotated data, and the flexibility to adjust knowledge sources for individual error types. Based on this analysis, we develop an algorithmic approach that combines the strengths of both methods. We present several systems based on resources used in previous work with a relative improvement of over 20% (and 7.4 F score points) over the previous state-of-the-art.", "title": "" }, { "docid": "ec48c3ba506409be7219320fe8e263ca", "text": "Cyber scanning refers to the task of probing enterprise networks or Internet wide services, searching for vulnerabilities or ways to infiltrate IT assets. This misdemeanor is often the primarily methodology that is adopted by attackers prior to launching a targeted cyber attack. Hence, it is of paramount importance to research and adopt methods for the detection and attribution of cyber scanning. Nevertheless, with the surge of complex offered services from one side and the proliferation of hackers' refined, advanced, and sophisticated techniques from the other side, the task of containing cyber scanning poses serious issues and challenges. 
Furthermore recently, there has been a flourishing of a cyber phenomenon dubbed as cyber scanning campaigns - scanning techniques that are highly distributed, possess composite stealth capabilities and high coordination - rendering almost all current detection techniques unfeasible. This paper presents a comprehensive survey of the entire cyber scanning topic. It categorizes cyber scanning by elaborating on its nature, strategies and approaches. It also provides the reader with a classification and an exhaustive review of its techniques. Moreover, it offers a taxonomy of the current literature by focusing on distributed cyber scanning detection methods. To tackle cyber scanning campaigns, this paper uniquely reports on the analysis of two recent cyber scanning incidents. Finally, several concluding remarks are discussed.", "title": "" }, { "docid": "2f08b35bb6f4f9d44d1225e2d26b5395", "text": "An efficient disparity estimation and occlusion detection algorithm for multiocular systems is presented. A dynamic programming algorithm, using a multiview matching cost as well as pure geometrical constraints, is used to estimate disparity and to identify the occluded areas in the extreme left and right views. A significant advantage of the proposed approach is that the exact number of views in which each point appears (is not occluded) can be determined. The disparity and occlusion information obtained may then be used to create virtual images from intermediate viewpoints. Furthermore, techniques are developed for the coding of occlusion and disparity information, which is needed at the receiver for the reproduction of a multiview sequence using the two encoded extreme views. Experimental results illustrate the performance of the proposed techniques.", "title": "" }, { "docid": "5e5b2f8a3cc512ee2db165013a5a4782", "text": "The purpose of this project was to develop a bidimensional measure of mindfulness to assess its two key components: present-moment awareness and acceptance. The development and psychometric validation of the Philadelphia Mindfulness Scale is described, and data are reported from expert raters, two nonclinical samples (n = 204 and 559), and three clinical samples including mixed psychiatric outpatients (n = 52), eating disorder inpatients (n = 30), and student counseling center outpatients (n = 78). Exploratory and confirmatory factor analyses support a two-factor solution, corresponding to the two constituent components of the construct. Good internal consistency was demonstrated, and relationships with other constructs were largely as expected. As predicted, significant differences were found between the nonclinical and clinical samples in levels of awareness and acceptance. The awareness and acceptance subscales were not correlated, suggesting that these two constructs can be examined independently. Potential theoretical and applied uses of the measure are discussed.", "title": "" }, { "docid": "3f48327ca2125df3a6da0c1e54131013", "text": "Background: We investigated the value of magnetic resonance imaging (MRI) in the evaluation of sex-reassignment surgery in male-to-female transsexual patients. Methods: Ten male-to-female transsexual patients who underwent sex-reassignment surgery with inversion of combined penile and scrotal skin flaps for vaginoplasty were examined after surgery with MRI. Turbo spin-echo T2-weighted and spin-echo T1-weighted images were obtained in sagittal, coronal, and axial planes with a 1.5-T superconductive magnet. 
Images were acquired with and without an inflatable silicon vaginal tutor. The following parameters were evaluated: neovaginal depth, neovaginal inclination in the sagittal plane, presence of remnants of the corpus spongiosum and corpora cavernosa, and thickness of the rectovaginal septum. Results: The average neovaginal depth was 7.9 cm (range = 5–10 cm). The neovagina had a correct oblique inclination in the sagittal plane in four patients, no inclination in five, and an incorrect inclination in one. In seven patients, MRI showed remnants of the corpora cavernosa and/or of the corpus spongiosum; in three patients, no remnants were detected. The average thickness of the rectovaginal septum was 4 mm (range = 3–6 mm). Conclusion: MRI allows a detailed assessment of the pelvic anatomy after genital reconfiguration and provides information that can help the surgeon to adopt the most correct surgical approach.", "title": "" }, { "docid": "78a8eb1c05d8af52ca32ba29b3fcf89b", "text": "Pediatric firearm-related deaths and injuries are a national public health crisis. In this Special Review Article, we characterize the epidemiology of firearm-related injuries in the United States and discuss public health programs, the role of pediatricians, and legislative efforts to address this health crisis. Firearm-related injuries are leading causes of unintentional injury deaths in children and adolescents. Children are more likely to be victims of unintentional injuries, the majority of which occur in the home, and adolescents are more likely to suffer from intentional injuries due to either assault or suicide attempts. Guns are present in 18% to 64% of US households, with significant variability by geographic region. Almost 40% of parents erroneously believe their children are unaware of the storage location of household guns, and 22% of parents wrongly believe that their children have never handled household guns. Public health interventions to increase firearm safety have demonstrated varying results, but the most effective programs have provided free gun safety devices to families. Pediatricians should continue working to reduce gun violence by asking patients and their families about firearm access, encouraging safe storage, and supporting firearm-related injury prevention research. Pediatricians should also play a role in educating trainees about gun violence. From a legislative perspective, universal background checks have been shown to decrease firearm homicides across all ages, and child safety laws have been shown to decrease unintentional firearm deaths and suicide deaths in youth. A collective, data-driven public health approach is crucial to halt the epidemic of pediatric firearm-related injury.", "title": "" }, { "docid": "84195c27330dad460b00494ead1654c8", "text": "We present a unified framework for the computational implementation of syntactic, semantic, pragmatic and even \"stylistic\" constraints on anaphora. We build on our BUILDRS implementation of Discourse Representation (DR) Theory and Lexical Functional Grammar (LFG) discussed in Wada & Asher (1986). 
We develop and argue for a semantically based processing model for anaphora resolution that exploits a number of desirable features: (1) the partial semantics provided by the discourse representation structures (DRSs) of DR theory, (2) the use of syntactic and lexical features to filter out unacceptable potential anaphoric antecedents from the set of logically possible antecedents determined by the logical structure of the DRS, (3) the use of pragmatic or discourse constraints, noted by those working on focus, to impose a salience ordering on the set of grammatically acceptable potential antecedents. Only where there is a marked difference in the degree of salience among the possible antecedents does the salience ranking allow us to make predictions on preferred readings. In cases where the difference is extreme, we predict the discourse to be infelicitous if, because of other constraints, one of the markedly less salient antecedents must be linked with the pronoun. We also briefly consider the applications of our processing model to other definite noun phrases besides anaphoric pronouns.", "title": "" }, { "docid": "77d0845463db0f4e61864b37ec1259b7", "text": "A new form of the variational autoencoder (VAE) is proposed, based on the symmetric Kullback-Leibler divergence. It is demonstrated that learning of the resulting symmetric VAE (sVAE) has close connections to previously developed adversarial-learning methods. This relationship helps unify the previously distinct techniques of VAE and adversarial learning, and provides insights that allow us to ameliorate shortcomings with some previously developed adversarial methods. In addition to an analysis that motivates and explains the sVAE, an extensive set of experiments validate the utility of the approach.", "title": "" }, { "docid": "bf04d5a87fbac1157261fac7652b9177", "text": "We consider the partitioning of a society into coalitions in purely hedonic settings; i.e., where each player's payoff is completely determined by the identity of other members of her coalition. We first discuss how hedonic and non-hedonic settings differ and some sufficient conditions for the existence of core stable coalition partitions in hedonic settings. We then focus on a weaker stability condition: individual stability, where no player can benefit from moving to another coalition while not hurting the members of that new coalition. We show that if coalitions can be ordered according to some characteristic over which players have single-peaked preferences, or where players have symmetric and additively separable preferences, then there exists an individually stable coalition partition. Examples show that without these conditions, individually stable coalition partitions may not exist. We also discuss some other stability concepts, and the incompatibility of stability with other normative properties.", "title": "" }, { "docid": "97c81cfa85ff61b999ae8e565297a16e", "text": "This paper describes the complete implementation of a blind image denoising algorithm that takes any digital image as input. In a first step the algorithm estimates a Signal and Frequency Dependent (SFD) noise model. In a second step, the image is denoised by a multiscale adaptation of the Non-local Bayes denoising method. We focus here on a careful analysis of the denoising step and present a detailed discussion of the influence of its parameters. Extensive commented tests of the blind denoising algorithm are presented, on real JPEG images and on scans of old photographs.
Source Code The source code (ANSI C), its documentation, and the online demo are accessible at the IPOL web page of this article1.", "title": "" }, { "docid": "d43f56f13fee5b45cb31233e61aa20d0", "text": "An automated brain tumor segmentation method was developed and validated against manual segmentation with three-dimensional magnetic resonance images in 20 patients with meningiomas and low-grade gliomas. The automated method (operator time, 5-10 minutes) allowed rapid identification of brain and tumor tissue with an accuracy and reproducibility comparable to those of manual segmentation (operator time, 3-5 hours), making automated segmentation practical for low-grade gliomas and meningiomas.", "title": "" }, { "docid": "cf62cb1e0b3cac894a277762808c68e0", "text": "-Most educational institutions’ administrators are concerned about student irregular attendance. Truancies can affect student overall academic performance. The conventional method of taking attendance by calling names or signing on paper is very time consuming and insecure, hence inefficient. Therefore, computer based student attendance management system is required to assist the faculty and the lecturer for this time-provide much convenient method to take attendance, but some prerequisites has to be done before start using the program. Although the use of RFID systems in educational institutions is not new, it is intended to show how the use of it came to solve daily problems in our university. The system has been built using the web-based applications such as ASP.NET and IIS server to cater the recording and reporting of the students’ attendances The system can be easily accessed by the lecturers via the web and most importantly, the reports can be generated in real-time processing, thus, providing valuable information about the students’.", "title": "" }, { "docid": "cd16afd19a0ac72cd3453a7b59aad42b", "text": "BACKGROUND\nIncreased flexibility is often desirable immediately prior to sports performance. Static stretching (SS) has historically been the main method for increasing joint range-of-motion (ROM) acutely. However, SS is associated with acute reductions in performance. Foam rolling (FR) is a form of self-myofascial release (SMR) that also increases joint ROM acutely but does not seem to reduce force production. However, FR has never previously been studied in resistance-trained athletes, in adolescents, or in individuals accustomed to SMR.\n\n\nOBJECTIVE\nTo compare the effects of SS and FR and a combination of both (FR+SS) of the plantarflexors on passive ankle dorsiflexion ROM in resistance-trained, adolescent athletes with at least six months of FR experience.\n\n\nMETHODS\nEleven resistance-trained, adolescent athletes with at least six months of both resistance-training and FR experience were tested on three separate occasions in a randomized cross-over design. The subjects were assessed for passive ankle dorsiflexion ROM after a period of passive rest pre-intervention, immediately post-intervention and after 10, 15, and 20 minutes of passive rest. Following the pre-intervention test, the subjects randomly performed either SS, FR or FR+SS. SS and FR each comprised 3 sets of 30 seconds of the intervention with 10 seconds of inter-set rest. FR+SS comprised the protocol from the FR condition followed by the protocol from the SS condition in sequence.\n\n\nRESULTS\nA significant effect of time was found for SS, FR and FR+SS. 
Post hoc testing revealed increases in ROM between baseline and post-intervention by 6.2% for SS (p < 0.05) and 9.1% for FR+SS (p < 0.05) but not for FR alone. Post hoc testing did not reveal any other significant differences between baseline and any other time point for any condition. A significant effect of condition was observed immediately post-intervention. Post hoc testing revealed that FR+SS was superior to FR (p < 0.05) for increasing ROM.\n\n\nCONCLUSIONS\nFR, SS and FR+SS all lead to acute increases in flexibility and FR+SS appears to have an additive effect in comparison with FR alone. All three interventions (FR, SS and FR+SS) have time courses that lasted less than 10 minutes.\n\n\nLEVEL OF EVIDENCE\n2c.", "title": "" }, { "docid": "700a6c2741affdbdc2a5dd692130ebb0", "text": "Automated tools for understanding application behavior and its changes during the application lifecycle are essential for many performance analysis and debugging tasks. Application performance issues have an immediate impact on customer experience and satisfaction. A sudden slowdown of enterprise-wide application can effect a large population of customers, lead to delayed projects, and ultimately can result in company financial loss. Significantly shortened time between new software releases further exacerbates the problem of thoroughly evaluating the performance of an updated application. Our thesis is that online performance modeling should be a part of routine application monitoring. Early, informative warnings on significant changes in application performance should help service providers to timely identify and prevent performance problems and their negative impact on the service. We propose a novel framework for automated anomaly detection and application change analysis. It is based on integration of two complementary techniques: (i) a regression-based transaction model that reflects a resource consumption model of the application, and (ii) an application performance signature that provides a compact model of runtime behavior of the application. The proposed integrated framework provides a simple and powerful solution for anomaly detection and analysis of essential performance changes in application behavior. An additional benefit of the proposed approach is its simplicity: It is not intrusive and is based on monitoring data that is typically available in enterprise production environments. The introduced solution further enables the automation of capacity planning and resource provisioning tasks of multitier applications in rapidly evolving IT environments.", "title": "" }, { "docid": "77f60100af0c9556e5345ee1b04d8171", "text": "SDNET2018 is an annotated image dataset for training, validation, and benchmarking of artificial intelligence based crack detection algorithms for concrete. SDNET2018 contains over 56,000 images of cracked and non-cracked concrete bridge decks, walls, and pavements. The dataset includes cracks as narrow as 0.06 mm and as wide as 25 mm. The dataset also includes images with a variety of obstructions, including shadows, surface roughness, scaling, edges, holes, and background debris. SDNET2018 will be useful for the continued development of concrete crack detection algorithms based on deep convolutional neural networks (DCNNs), which are a subject of continued research in the field of structural health monitoring. The authors present benchmark results for crack detection using SDNET2018 and a crack detection algorithm based on the AlexNet DCNN architecture. 
SDNET2018 is freely available at https://doi.org/10.15142/T3TD19.", "title": "" }, { "docid": "b6ceacf3ad3773acddc3452933b57a0f", "text": "The growing interest in robots that interact safely with humans and surroundings have prompted the need for soft structural embodiments including soft actuators. This paper explores a class of soft actuators inspired in design and construction by Pneumatic Artificial Muscles (PAMs) or McKibben Actuators. These bio-inspired actuators consist of fluid-filled elastomeric enclosures that are reinforced with fibers along a specified orientation and are in general referred to as Fiber-Reinforced Elastomeric Enclosures (FREEs). Several recent efforts have mapped the fiber configurations to instantaneous deformation, forces, and moments generated by these actuators upon pressurization with fluid. However most of the actuators, when deployed undergo large deformations and large overall motions thus necessitating the study of their large-deformation kinematics. This paper analyzes the large deformation kinematics of FREEs. A concept called configuration memory effect is proposed to explain the smart nature of these actuators. This behavior is tested with experiments and finite element modeling for a small sample of actuators. The paper also describes different possibilities and design implications of the large deformation behavior of FREEs in successful creation of soft robots.", "title": "" }, { "docid": "eccbc87e4b5ce2fe28308fd9f2a7baf3", "text": "3", "title": "" }, { "docid": "18498166845b27890110c3ca0cd43d86", "text": "Raine Mäntysalo The purpose of this article is to make an overview of postWWII urban planning theories from the point of view of participation. How have the ideas of public accountability, deliberative democracy and involvement of special interests developed from one theory to another? The urban planning theories examined are rational-comprehensive planning theory, advocacy planning theory, incrementalist planning theory and the two branches of communicative planning theory: planning as consensus-seeking and planning as management of conflicts.", "title": "" } ]
scidocsrr
a82b5f0f33766489ce3850beaf3612e8
Meta Networks
[ { "docid": "592eddc5ada1faf317571e8050d4d82e", "text": "Connectionist models usually have a single weight on each connection. Some interesting new properties emerge if each connection has two weights: A slowly changing, plastic weight which stores long-term knowledge and a fast-changing, elastic weight which stores temporary knowledge and spontaneously decays towards zero. If a network learns a set of associations and then these associations are \"blurred\" by subsequent learning, all the original associations can be \"deblurred\" by rehearsing on just a few of them. The rehearsal allows the fast weights to take on values that temporarily cancel out the changes in the slow weights caused by the subsequent learning.", "title": "" }, { "docid": "66e5c7802dc1f3427dc608696a925f6d", "text": "Until recently, research on artificial neural networks was largely restricted to systems with only two types of variable: Neural activities that represent the current or recent input and weights that learn to capture regularities among inputs, outputs and payoffs. There is no good reason for this restriction. Synapses have dynamics at many different time-scales and this suggests that artificial neural networks might benefit from variables that change slower than activities but much faster than the standard weights. These “fast weights” can be used to store temporary memories of the recent past and they provide a neurally plausible way of implementing the type of attention to the past that has recently proved very helpful in sequence-to-sequence models. By using fast weights we can avoid the need to store copies of neural activity patterns.", "title": "" }, { "docid": "a4bfad793a7dde2c8b7e0238b1ffc536", "text": "Algorithm design is a laborious process and often requires many iterations of ideation and validation. In this paper, we explore automating algorithm design and present a method to learn an optimization algorithm, which we believe to be the first method that can automatically discover a better algorithm. We approach this problem from a reinforcement learning perspective and represent any particular optimization algorithm as a policy. We learn an optimization algorithm using guided policy search and demonstrate that the resulting algorithm outperforms existing hand-engineered algorithms in terms of convergence speed and/or the final objective value.", "title": "" } ]
[ { "docid": "f5b500c143fd584423ee8f0467071793", "text": "Drug-Drug Interactions (DDIs) are major causes of morbidity and treatment inefficacy. The prediction of DDIs for avoiding the adverse effects is an important issue. There are many drug-drug interaction pairs, it is impossible to do in vitro or in vivo experiments for all the possible pairs. The limitation of DDIs research is the high costs. Many drug interactions are due to alterations in drug metabolism by enzymes. The most common among these enzymes are cytochrome P450 enzymes (CYP450). Drugs can be substrate, inhibitor or inducer of CYP450 which will affect metabolite of other drugs. This paper proposes enzyme action crossing attribute creation for DDIs prediction. Machine learning techniques, k-Nearest Neighbor (k-NN), Neural Networks (NNs), and Support Vector Machine (SVM) were used to find DDIs for simvastatin based on enzyme action crossing. SVM preformed the best providing the predictions at the accuracy of 70.40 % and of 81.85 % with balance and unbalance class label datasets respectively. Enzyme action crossing method provided the new attribute that can be used to predict drug-drug interactions.", "title": "" }, { "docid": "7730b770c0be4a86a926cbae902c1416", "text": "In this paper, we propose an end-to-end trainable Convolutional Neural Network (CNN) architecture called the M-net, for segmenting deep (human) brain structures from Magnetic Resonance Images (MRI). A novel scheme is used to learn to combine and represent 3D context information of a given slice in a 2D slice. Consequently, the M-net utilizes only 2D convolution though it operates on 3D data, which makes M-net memory efficient. The segmentation method is evaluated on two publicly available datasets and is compared against publicly available model based segmentation algorithms as well as other classification based algorithms such as Random Forrest and 2D CNN based approaches. Experiment results show that the M-net outperforms all these methods in terms of dice coefficient and is at least 3 times faster than other methods in segmenting a new volume which is attractive for clinical use.", "title": "" }, { "docid": "c26b4db8f52e4270f24c16b0e65c8b59", "text": "An open stub feed planar patch antenna is proposed for UHF RFID tag mountable on metallic objects. Compared to conventional short stub feed patch antenna used for UHF RFID tag, the open stub feed patch antenna has planar structure which can decrease the manufacturing cost of the tags. Moreover, the open stub feed makes the impedance of the patch antenna be tuned in a large scale for conjugate impedance matching. Modeling and simulation results are presented which are in good agreement with the measurement data. Finally, differences between the open stub feed patch antenna and the short stub feed patch antenna for UHF RFID tag are discussed.", "title": "" }, { "docid": "9f6fb1de80f4500384097978c3712c68", "text": "Reflection is a language feature which allows to analyze and transform the behavior of classes at the runtime. Reflection is used for software debugging and testing. Malware authors can leverage reflection to subvert the malware detection by static analyzers. Reflection initializes the class, invokes any method of class, or accesses any field of class. But, instead of utilizing usual programming language syntax, reflection passes classes/methods etc. as parameters to reflective APIs. As a consequence, these parameters can be constructed dynamically or can be encrypted by malware. 
These cannot be detected by state-of-the-art static tools. We propose EspyDroid, a system that combines dynamic analysis with code instrumentation for a more precise and automated detection of malware employing reflection. We evaluate EspyDroid on 28 benchmark apps employing major reflection categories. Our technique show improved results over FlowDroid via detection of additional undetected flows. These flows have potential to leak sensitive and private information of the users, through various sinks.", "title": "" }, { "docid": "71b6f880ae22e8032950379cd57b5003", "text": "Our goal is to generate reading lists for students that help them optimally learn technical material. Existing retrieval algorithms return items directly relevant to a query but do not return results to help users read about the concepts supporting their query. This is because the dependency structure of concepts that must be understood before reading material pertaining to a given query is never considered. Here we formulate an information-theoretic view of concept dependency and present methods to construct a “concept graph” automatically from a text corpus. We perform the first human evaluation of concept dependency edges (to be published as open data), and the results verify the feasibility of automatic approaches for inferring concepts and their dependency relations. This result can support search capabilities that may be tuned to help users learn a subject rather than retrieve documents based on a single query.", "title": "" }, { "docid": "f296b374b635de4f4c6fc9c6f415bf3e", "text": "People increasingly use the Internet for obtaining information regarding diseases, diagnoses and available treatments. Currently, many online health portals already provide non-personalized health information in the form of articles. However, it can be challenging to find information relevant to one's condition, interpret this in context, and understand the medical terms and relationships. Recommender Systems (RS) already help these systems perform precise information filtering. In this short paper, we look one step ahead and show the progress made towards RS helping users find personalized, complex medical interventions or support them with preventive healthcare measures. We identify key challenges that need to be addressed for RS to offer the kind of decision support needed in high-risk domains like healthcare.", "title": "" }, { "docid": "19361b2d5e096f26e650b25b745e5483", "text": "Multispectral pedestrian detection has attracted increasing attention from the research community due to its crucial competence for many around-the-clock applications (e.g., video surveillance and autonomous driving), especially under insufficient illumination conditions. We create a human baseline over the KAIST dataset and reveal that there is still a large gap between current top detectors and human performance. To narrow this gap, we propose a network fusion architecture, which consists of a multispectral proposal network to generate pedestrian proposals, and a subsequent multispectral classification network to distinguish pedestrian instances from hard negatives. The unified network is learned by jointly optimizing pedestrian detection and semantic segmentation tasks. The final detections are obtained by integrating the outputs from different modalities as well as the two stages. The approach significantly outperforms state-of-the-art methods on the KAIST dataset while remain fast. 
Additionally, we contribute a sanitized version of training annotations for the KAIST dataset, and examine the effects caused by different kinds of annotation errors. Future research of this problem will benefit from the sanitized version which eliminates the interference of annotation errors.", "title": "" }, { "docid": "04953f3a55a77b9a35e7cea663c6387e", "text": "-This paper presents a calibration procedure for a fish-eye lens (a high-distortion lens) mounted on a CCD TV camera. The method is designed to account for the differences in images acquired via a distortion-free lens camera setup and the images obtained by a fish-eye lens camera. The calibration procedure essentially defines a mapping between points in the world coordinate system and their corresponding point locations in the image plane. This step is important for applications in computer vision which involve quantitative measurements. The objective of this mapping is to estimate the internal parameters of the camera, including the effective focal length, one-pixel width on the image plane, image distortion center, and distortion coefficients. The number of parameters to be calibrated is reduced by using a calibration pattern with equally spaced dots and assuming a pin-hole model camera behavior for the image center, thus assuming negligible distortion at the image distortion center. Our method employs a non-finear transformation between points in the world coordinate system and their corresponding location on the image plane. A Lagrangian minimization method is used to determine the coefficients of the transformation. The validity and effectiveness of our calibration and distortion correction procedure are confirmed by application of this procedure on real images. Copyright © 1996 Pattern Recognition Society. Published by Elsevier Science Ltd. Camera calibration Lens distortion Intrinsic camera parameters Fish-eye lens Optimization", "title": "" }, { "docid": "e294a94b03a2bd958def360a7bce2a46", "text": "The seismic loss estimation is greatly influenced by the identification of the failure mechanism and distribution of the structures. In case of infilled structures, the final failure mechanism greatly differs to that expected during the design and the analysis stages. This is mainly due to the resultant composite behaviour of the frame and the infill panel, which makes the failure assessment and consequently the loss estimation a challenge. In this study, a numerical investigation has been conducted on the influence of masonry infilled panels on physical structural damages and the associated economic losses, under seismic excitation. The selected index buildings have been simulated following real case typical mid-rise masonry infilled steel frame structures. A realistic simulation of construction details, such as variation of infill material properties, type of connections and built quality have been implemented in the models. The fragility functions have been derived for each model using the outcomes obtained from incremental dynamic analysis (IDA). Moreover, by considering different cases of building distribution, the losses have been estimated following an intensity-based assessment approach. 
The results indicate that the presence of infill panel have a noticeable influence on the vulnerability of the structure and should not be ignored in loss estimations.", "title": "" }, { "docid": "19fe7a55a8ad6f206efc27ef7ff16324", "text": "Vehicular adhoc networks (VANETs) are relegated as a subgroup of Mobile adhoc networks (MANETs), with the incorporation of its principles. In VANET the moving nodes are vehicles which are self-administrated, not bounded and are free to move and organize themselves in the network. VANET possess the potential of improving safety on roads by broadcasting information associated with the road conditions. This results in generation of the redundant information been disseminated by vehicles. Thus bandwidth issue becomes a major concern. In this paper, Location based data aggregation technique is been proposed for aggregating congestion related data from the road areas through which vehicles travelled. It also takes into account scheduling mechanism at the road side units (RSUs) for treating individual vehicles arriving in its range on the basis of first-cum-first order. The basic idea behind this work is to effectually disseminate the aggregation information related to congestion to RSUs as well as to the vehicles in the network. The Simulation results show that the proposed technique performs well with the network load evaluation parameters.", "title": "" }, { "docid": "c0ef15616ba357cb522b828e03a5298c", "text": "This paper introduces the compact genetic algorithm (cGA) which represents the population as a probability distribution over the set of solutions and is operationally equivalent to the order-one behavior of the simple GA with uniform crossover. It processes each gene independently and requires less memory than the simple GA. The development of the compact GA is guided by a proper understanding of the role of the GA’s parameters and operators. The paper clearly illustrates the mapping of the simple GA’s parameters into those of an equivalent compact GA. Computer simulations compare both algorithms in terms of solution quality and speed. Finally, this work raises important questions about the use of information in a genetic algorithm, and its ramifications show us a direction that can lead to the design of more efficient GA’s.", "title": "" }, { "docid": "4f9b168efee2348f0f02f2480f9f449f", "text": "Transcutaneous neuromuscular electrical stimulation applied in clinical settings is currently characterized by a wide heterogeneity of stimulation protocols and modalities. Practitioners usually refer to anatomic charts (often provided with the user manuals of commercially available stimulators) for electrode positioning, which may lead to inconsistent outcomes, poor tolerance by the patients, and adverse reactions. Recent evidence has highlighted the crucial importance of stimulating over the muscle motor points to improve the effectiveness of neuromuscular electrical stimulation. Nevertheless, the correct electrophysiological definition of muscle motor point and its practical significance are not always fully comprehended by therapists and researchers in the field. The commentary describes a straightforward and quick electrophysiological procedure for muscle motor point identification. It consists in muscle surface mapping by using a stimulation pen-electrode and it is aimed at identifying the skin area above the muscle where the motor threshold is the lowest for a given electrical input, that is the skin area most responsive to electrical stimulation. 
After the motor point mapping procedure, a proper placement of the stimulation electrode(s) allows neuromuscular electrical stimulation to maximize the evoked tension, while minimizing the dose of the injected current and the level of discomfort. If routinely applied, we expect this procedure to improve both stimulation effectiveness and patient adherence to the treatment. The aims of this clinical commentary are to present an optimized procedure for the application of neuromuscular electrical stimulation and to highlight the clinical implications related to its use.", "title": "" }, { "docid": "3d62d442398bfa8c1ffb9dcf4e05c5ce", "text": "With the explosion of Web 2.0 application such as blogs, social and professional networks, and various other types of social media, the rich online information and various new sources of knowledge flood users and hence pose a great challenge in terms of information overload. It is critical to use intelligent agent software systems to assist users in finding the right information from an abundance of Web data. Recommender systems can help users deal with information overload problem efficiently by suggesting items (e.g., information and products) that match users’ personal interests. The recommender technology has been successfully employed in many applications such as recommending films, music, books, etc. The purpose of this report is to give an overview of existing technologies for building personalized recommender systems in social networking environment, to propose a research direction for addressing user profiling and cold start problems by exploiting user-generated content newly available in Web 2.0.", "title": "" }, { "docid": "c39295b4334a22547b2c4370ef329a7c", "text": "In this paper, we propose a Mobile Edge Internet of Things (MEIoT) architecture by leveraging the fiber-wireless access technology, the cloudlet concept, and the software defined networking framework. The MEIoT architecture brings computing and storage resources close to Internet of Things (IoT) devices in order to speed up IoT data sharing and analytics. Specifically, the IoT devices (belonging to the same user) are associated to a specific proxy Virtual Machine (VM) in the nearby cloudlet. The proxy VM stores and analyzes the IoT data (generated by its IoT devices) in realtime. Moreover, we introduce the semantic and social IoT technology in the context of MEIoT to solve the interoperability and inefficient access control problem in the IoT system. In addition, we propose two dynamic proxy VM migration methods to minimize the end-to-end delay between proxy VMs and their IoT devices and to minimize the total on-grid energy consumption of the cloudlets, respectively. Performance of the proposed methods is validated via extensive simulations. key words: Internet of Things, mobile edge computing, cloudlet, semantics, social network, green energy.", "title": "" }, { "docid": "e38de0af51d80544e4df84d36a40eb7b", "text": "In the cerebral cortex, the activity levels of neuronal populations are continuously fluctuating. When neuronal activity, as measured using functional MRI (fMRI), is temporally coherent across 2 populations, those populations are said to be functionally connected. Functional connectivity has previously been shown to correlate with structural (anatomical) connectivity patterns at an aggregate level. 
In the present study we investigate, with the aid of computational modeling, whether systems-level properties of functional networks—including their spatial statistics and their persistence across time—can be accounted for by properties of the underlying anatomical network. We measured resting state functional connectivity (using fMRI) and structural connectivity (using diffusion spectrum imaging tractography) in the same individuals at high resolution. Structural connectivity then provided the couplings for a model of macroscopic cortical dynamics. In both model and data, we observed (i) that strong functional connections commonly exist between regions with no direct structural connection, rendering the inference of structural connectivity from functional connectivity impractical; (ii) that indirect connections and interregional distance accounted for some of the variance in functional connectivity that was unexplained by direct structural connectivity; and (iii) that resting-state functional connectivity exhibits variability within and across both scanning sessions and model runs. These empirical and modeling results demonstrate that although resting state functional connectivity is variable and is frequently present between regions without direct structural linkage, its strength, persistence, and spatial statistics are nevertheless constrained by the large-scale anatomical structure of the human cerebral cortex.", "title": "" }, { "docid": "17d0da8dd05d5cfb79a5f4de4449fcdd", "text": "PUBLISHING Thousands of scientists start year without journal access p.13 2017 SNEAK PEEK What the new year holds for science p.14 ECOLOGY What is causing the deaths of so many shorebirds? p.16 PHYSICS Quantum computers ready to leap out of the lab The race is on to turn scientific curiosities into working machines. A front runner in the pursuit of quantum computing uses single ions trapped in a vacuum. Q uantum computing has long seemed like one of those technologies that are 20 years away, and always will be. But 2017 could be the year that the field sheds its research-only image. Computing giants Google and Microsoft recently hired a host of leading lights, and have set challenging goals for this year. Their ambition reflects a broader transition taking place at start-ups and academic research labs alike: to move from pure science towards engineering. \" People are really building things, \" says Christopher Monroe, a physicist at the University of Maryland in College Park who co-founded the start-up IonQ in 2015. \" I've never seen anything like that. It's no longer just research. \" Google started working on a form of quantum computing that harnesses super-conductivity in 2014. It hopes this year, or shortly after, to perform a computation that is beyond even the most powerful 'classical' supercomputers — an elusive milestone known as quantum supremacy. Its rival, Microsoft, is betting on an intriguing but unproven concept, topological quantum computing, and hopes to perform a first demonstration of the technology. The quantum-computing start-up scene is also heating up. Monroe plans to begin hiring in earnest this year. 
Physicist Robert Schoelkopf at Yale University in New Haven, Connecticut, who co-founded the start-up Quantum Circuits, and former IBM applied physicist Chad Rigetti, who set up Rigetti in", "title": "" }, { "docid": "1e50abe2821e6dad2e8ede1a163e8cc8", "text": "In vitro dissolution/release tests are an important tool in the drug product development phase as well as in its quality control and the regulatory approval process. Mucosal drug delivery systems are aimed to provide both local and systemic drug action via mucosal surfaces of the body and exhibit significant differences in formulation design, as well as in their physicochemical and release characteristics. Therefore it is not possible to devise a single test system which would be suitable for release testing of such complex dosage forms. This article is aimed to provide a comprehensive review of both compendial and noncompendial methods used for in vitro dissolution/release testing of novel mucosal drug delivery systems aimed for ocular, nasal, oromucosal, vaginal and rectal administration.", "title": "" }, { "docid": "30e798ef3668df14f1625d40c53011a0", "text": "Classification with big data has become one of the latest trends when talking about learning from the available information. The data growth in the last years has rocketed the interest in effectively acquiring knowledge to analyze and predict trends. The variety and veracity that are related to big data introduce a degree of uncertainty that has to be handled in addition to the volume and velocity requirements. This data usually also presents what is known as the problem of classification with imbalanced datasets, a class distribution where the most important concepts to be learned are presented by a negligible number of examples in relation to the number of examples from the other classes. In order to adequately deal with imbalanced big data we propose the Chi-FRBCS-BigDataCS algorithm, a fuzzy rule based classification system that is able to deal with the uncertainly that is introduced in large volumes of data without disregarding the learning in the underrepresented class. The method uses the MapReduce framework to distribute the computational operations of the fuzzy model while it includes cost-sensitive learning techniques in its design to address the imbalance that is present in the data. The good performance of this approach is supported by the experimental analysis that is carried out over twenty-four imbalanced big data cases of study. The results obtained show that the proposal is able to handle these problems obtaining competitive results both in the classification performance of the model and the time needed for the computation. © 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "425d927136ad3fc0f967ea8e64d8f209", "text": "UNLABELLED\nThere is a clear need for brief, but sensitive and specific, cognitive screening instruments as evidenced by the popularity of the Addenbrooke's Cognitive Examination (ACE).\n\n\nOBJECTIVES\nWe aimed to validate an improved revision (the ACE-R) which incorporates five sub-domain scores (orientation/attention, memory, verbal fluency, language and visuo-spatial).\n\n\nMETHODS\nStandard tests for evaluating dementia screening tests were applied. A total of 241 subjects participated in this study (Alzheimer's disease=67, frontotemporal dementia=55, dementia of Lewy Bodies=20; mild cognitive impairment-MCI=36; controls=63).\n\n\nRESULTS\nReliability of the ACE-R was very good (alpha coefficient=0.8). 
Correlation with the Clinical Dementia Scale was significant (r=-0.321, p<0.001). Two cut-offs were defined (88: sensitivity=0.94, specificity=0.89; 82: sensitivity=0.84, specificity=1.0). Likelihood ratios of dementia were generated for scores between 88 and 82: at a cut-off of 82 the likelihood of dementia is 100:1. A comparison of individual age and education matched groups of MCI, AD and controls placed the MCI group performance between controls and AD and revealed MCI patients to be impaired in areas other than memory (attention/orientation, verbal fluency and language).\n\n\nCONCLUSIONS\nThe ACE-R accomplishes standards of a valid dementia screening test, sensitive to early cognitive dysfunction.", "title": "" }, { "docid": "422183692a08138189271d4d7af407c7", "text": "Scene flow describes the motion of 3D objects in real world and potentially could be the basis of a good feature for 3D action recognition. However, its use for action recognition, especially in the context of convolutional neural networks (ConvNets), has not been previously studied. In this paper, we propose the extraction and use of scene flow for action recognition from RGB-D data. Previous works have considered the depth and RGB modalities as separate channels and extract features for later fusion. We take a different approach and consider the modalities as one entity, thus allowing feature extraction for action recognition at the beginning. Two key questions about the use of scene flow for action recognition are addressed: how to organize the scene flow vectors and how to represent the long term dynamics of videos based on scene flow. In order to calculate the scene flow correctly on the available datasets, we propose an effective self-calibration method to align the RGB and depth data spatially without knowledge of the camera parameters. Based on the scene flow vectors, we propose a new representation, namely, Scene Flow to Action Map (SFAM), that describes several long term spatio-temporal dynamics for action recognition. We adopt a channel transform kernel to transform the scene flow vectors to an optimal color space analogous to RGB. This transformation takes better advantage of the trained ConvNets models over ImageNet. Experimental results indicate that this new representation can surpass the performance of state-of-the-art methods on two large public datasets.", "title": "" } ]
scidocsrr
0a3821d73c8805b502ab04ce7ce64f59
Wind turbine power tracking using an improved multimodel quadratic approach.
[ { "docid": "c734c98b1ca8261694386c537870c2f3", "text": "Uncontrolled wind turbine configuration, such as stall-regulation captures, energy relative to the amount of wind speed. This configuration requires constant turbine speed because the generator that is being directly coupled is also connected to a fixed-frequency utility grid. In extremely strong wind conditions, only a fraction of available energy is captured. Plants designed with such a configuration are economically unfeasible to run in these circumstances. Thus, wind turbines operating at variable speed are better alternatives. This paper focuses on a controller design methodology applied to a variable-speed, horizontal axis wind turbine. A simple but rigid wind turbine model was used and linearised to some operating points to meet the desired objectives. By using blade pitch control, the deviation of the actual rotor speed from a reference value is minimised. The performances of PI and PID controllers were compared relative to a step wind disturbance. Results show comparative responses between these two controllers. The paper also concludes that with the present methodology, despite the erratic wind data, the wind turbine still manages to operate most of the time at 88% in the stable region.", "title": "" } ]
[ { "docid": "75fda2fa6c35c915dede699c12f45d84", "text": "This work presents an open-source framework called systemc-clang for analyzing SystemC models that consist of a mixture of register-transfer level, and transaction-level components. The framework statically parses mixed-abstraction SystemC models, and represents them using an intermediate representation. This intermediate representation captures the structural information about the model, and certain behavioural semantics of the processes in the model. This representation can be used for multiple purposes such as static analysis of the model, code transformations, and optimizations. We describe with examples, the key details in implementing systemc-clang, and show an example of constructing a plugin that analyzes the intermediate representation to discover opportunities for parallel execution of SystemC processes. We also experimentally evaluate the capabilities of this framework with a subset of examples from the SystemC distribution including register-transfer, and transaction-level models.", "title": "" }, { "docid": "81f504c4e378d0952231565d3ba4c555", "text": "The alignment problem—establishing links between corresponding phrases in two related sentences—is as important in natural language inference (NLI) as it is in machine translation (MT). But the tools and techniques of MT alignment do not readily transfer to NLI, where one cannot assume semantic equivalence, and for which large volumes of bitext are lacking. We present a new NLI aligner, the MANLI system, designed to address these challenges. It uses a phrase-based alignment representation, exploits external lexical resources, and capitalizes on a new set of supervised training data. We compare the performance of MANLI to existing NLI and MT aligners on an NLI alignment task over the well-known Recognizing Textual Entailment data. We show that MANLI significantly outperforms existing aligners, achieving gains of 6.2% in F1 over a representative NLI aligner and 10.5% over GIZA++.", "title": "" }, { "docid": "d05e4998114dd485a3027f2809277512", "text": "Although neural tensor networks (NTNs) have been successful in many natural language processing tasks, they require a large number of parameters to be estimated, which often results in overfitting and long training times. We address these issues by applying eigendecomposition to each slice matrix of a tensor to reduce the number of parameters. We evaluate our proposed NTN models in two tasks. First, the proposed models are evaluated in a knowledge graph completion task. Second, a recursive NTN (RNTN) extension of the proposed models is evaluated on a logical reasoning task. The experimental results show that our proposed models learn better and faster than the original (R)NTNs.", "title": "" }, { "docid": "9b35733a48462f45639625daac540a2f", "text": "Recommender systems provide strategies that help users search or make decisions within the overwhelming information spaces nowadays. They have played an important role in various areas such as e-commerce and e-learning. In this paper, we propose a hybrid recommendation strategy of content-based and knowledge-based methods that are flexible for any field to apply. By analyzing the past rating records of every user, the system learns the user’s preferences. After acquiring users’ preferences, the semantic search-and-discovery procedure takes place starting from a highly rated item. 
For every found item, the system evaluates the Interest Intensity indicating to what degree the user might like it. Recommender systems train a personalized estimating module using a genetic algorithm for each user, and the personalized estimating model helps improve the precision of the estimated scores. With the recommendation strategies and personalization strategies, users may have better recommendations that are closer to their preferences. In the latter part of this paper, a realworld case, a movie-recommender system adopting proposed recommendation strategies, is implemented.", "title": "" }, { "docid": "8b6116105914e3d912d4594b875e443b", "text": "Patients with neuropathic pain (NP) are challenging to manage and evidence-based clinical recommendations for pharmacologic management are needed. Systematic literature reviews, randomized clinical trials, and existing guidelines were evaluated at a consensus meeting. Medications were considered for recommendation if their efficacy was supported by at least one methodologically-sound, randomized clinical trial (RCT) demonstrating superiority to placebo or a relevant comparison treatment. Recommendations were based on the amount and consistency of evidence, degree of efficacy, safety, and clinical experience of the authors. Available RCTs typically evaluated chronic NP of moderate to severe intensity. Recommended first-line treatments include certain antidepressants (i.e., tricyclic antidepressants and dual reuptake inhibitors of both serotonin and norepinephrine), calcium channel alpha2-delta ligands (i.e., gabapentin and pregabalin), and topical lidocaine. Opioid analgesics and tramadol are recommended as generally second-line treatments that can be considered for first-line use in select clinical circumstances. Other medications that would generally be used as third-line treatments but that could also be used as second-line treatments in some circumstances include certain antiepileptic and antidepressant medications, mexiletine, N-methyl-D-aspartate receptor antagonists, and topical capsaicin. Medication selection should be individualized, considering side effects, potential beneficial or deleterious effects on comorbidities, and whether prompt onset of pain relief is necessary. To date, no medications have demonstrated efficacy in lumbosacral radiculopathy, which is probably the most common type of NP. Long-term studies, head-to-head comparisons between medications, studies involving combinations of medications, and RCTs examining treatment of central NP are lacking and should be a priority for future research.", "title": "" }, { "docid": "e0c83197770752c9fdfe5e51edcd3d46", "text": "In the last decade, it has become obvious that Alzheimer's disease (AD) is closely linked to changes in lipids or lipid metabolism. One of the main pathological hallmarks of AD is amyloid-β (Aβ) deposition. Aβ is derived from sequential proteolytic processing of the amyloid precursor protein (APP). Interestingly, both, the APP and all APP secretases are transmembrane proteins that cleave APP close to and in the lipid bilayer. Moreover, apoE4 has been identified as the most prevalent genetic risk factor for AD. ApoE is the main lipoprotein in the brain, which has an abundant role in the transport of lipids and brain lipid metabolism. Several lipidomic approaches revealed changes in the lipid levels of cerebrospinal fluid or in post mortem AD brains. 
Here, we review the impact of apoE and lipids in AD, focusing on the major brain lipid classes, sphingomyelin, plasmalogens, gangliosides, sulfatides, DHA, and EPA, as well as on lipid signaling molecules, like ceramide and sphingosine-1-phosphate. As nutritional approaches showed limited beneficial effects in clinical studies, the opportunities of combining different supplements in multi-nutritional approaches are discussed and summarized.", "title": "" }, { "docid": "88804c0fb16e507007983108811950dc", "text": "We propose a neural probabilistic structured-prediction method for transition-based natural language processing, which integrates beam search and contrastive learning. The method uses a global optimization model, which can leverage arbitrary features over nonlocal context. Beam search is used for efficient heuristic decoding, and contrastive learning is performed for adjusting the model according to search errors. When evaluated on both chunking and dependency parsing tasks, the proposed method achieves significant accuracy improvements over the locally normalized greedy baseline on the two tasks, respectively.", "title": "" }, { "docid": "fff89d9e97dbb5a13febe48c35d08c94", "text": "The positive effects of social popularity (i.e., information based on other consumers’ behaviors) and deal scarcity (i.e., information provided by product vendors) on consumers’ consumption behaviors are well recognized. However, few studies have investigated their potential joint and interaction effects and how such effects may differ at different timing of a shopping process. This study examines the individual and interaction effects of social popularity and deal scarcity as well as how such effects change as consumers’ shopping goals become more concrete. The results of a laboratory experiment show that in the initial shopping stage when consumers do not have specific shopping goals, social popularity and deal scarcity information weaken each other’s effects; whereas in the later shopping stage when consumers have constructed concrete shopping goals, these two information cues reinforce each other’s effects. Implications on theory and practice are discussed.", "title": "" }, { "docid": "0a78c9305d4b5584e87327ba2236d302", "text": "This paper presents GeoS, a new algorithm for the efficient segmentation of n-dimensional image and video data. The segmentation problem is cast as approximate energy minimization in a conditional random field. A new, parallel filtering operator built upon efficient geodesic distance computation is used to propose a set of spatially smooth, contrast-sensitive segmentation hypotheses. An economical search algorithm finds the solution with minimum energy within a sensible and highly restricted subset of all possible labellings. Advantages include: i) computational efficiency with high segmentation accuracy; ii) the ability to estimate an approximation to the posterior over segmentations; iii) the ability to handle generally complex energy models. Comparison with max-flow indicates up to 60 times greater computational efficiency as well as greater memory efficiency. GeoS is validated quantitatively and qualitatively by thorough comparative experiments on existing and novel ground-truth data. 
Numerous results on interactive and automatic segmentation of photographs, video and volumetric medical image data are presented.", "title": "" }, { "docid": "125513cbb52c4ef868988a3060070d95", "text": "In this paper, we propose a new algorithm using spherical symmetric three dimensional local ternary patterns (SS-3D-LTP) for natural, texture and biomedical image retrieval applications. The existing local binary patterns (LBP), local ternary patterns (LTP), local derivative patterns (LDP), local tetra patterns (LTrP) etc., are encode the relationship between the center pixel and its surrounding neighbors in two dimensional (2D) local region of an image. The proposed method encodes the relationship between the center pixel and its surrounding neighbors with five selected directions in 3D plane which is generated from 2D image using multiresolution Gaussian filter bank. In addition, we propose the color SS-3D-LTP (CSS-3D-LTP) where we consider the RGB spaces as three planes of 3D volume. Three experiments have been carried out for proving the worth of our algorithm for natural, texture and biomedical image retrieval applications. It is further mentioned that the databases used for natural, texture and biomedical image retrieval applications are Corel-10K, Brodatz and open access series of imaging studies (OASIS) magnetic resonance databases respectively. The results after being investigated show a significant improvement in terms of their evaluation measures as compared to the start-of-art spatial as well as transform domain techniques on respective databases. & 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "41eddeb86d561882b85895277cbd38e9", "text": "With the rapid growth of data traffic in data centers, data rates over 50Gb/s/signal (e.g., OIF-CEI-56G-VSR) will eventually be required in wireline chip-to-module or chip-to-chip communications [1-3]. To achieve better power efficiency than that of existing 25Gb/s/signal designs, a high-speed yet energy-efficient front-end is needed in both the transmitter and receiver. A receiver front-end with baud-rate architecture [1] has been successfully operated at 56Gb/s, but additional components such as eye-monitoring comparators, phase detectors, and clock recovery circuitry as well as a power-efficient transmitter are needed to build a complete transceiver.", "title": "" }, { "docid": "f6121f69419a074b657bb4a0324bae4a", "text": "Latent Dirichlet allocation (LDA) is a popular topic modeling technique for exploring hidden topics in text corpora. Increasingly, topic modeling needs to scale to larger topic spaces and use richer forms of prior knowledge, such as word correlations or document labels. However, inference is cumbersome for LDA models with prior knowledge. As a result, LDA models that use prior knowledge only work in small-scale scenarios. In this work, we propose a factor graph framework, Sparse Constrained LDA (SC-LDA), for efficiently incorporating prior knowledge into LDA. We evaluate SC-LDA’s ability to incorporate word correlation knowledge and document label knowledge on three benchmark datasets. Compared to several baseline methods, SC-LDA achieves comparable performance but is significantly faster. 1 Challenge: Leveraging Prior Knowledge in Large-scale Topic Models Topic models, such as Latent Dirichlet Allocation (Blei et al., 2003, LDA), have been successfully used for discovering hidden topics in text collections. 
LDA is an unsupervised model—it requires no annotation—and discovers, without any supervision, the thematic trends in a text collection. However, LDA’s lack of supervision can lead to disappointing results. Often, the hidden topics learned by LDA fail to make sense to end users. Part of the problem is that the objective function of topic models does not always correlate with human judgments of topic quality (Chang et al., 2009). Therefore, it’s often necessary to incorporate prior knowledge into topic models to improve the model’s performance. Recent work has also shown that by interactive human feedback can improve the quality and stability of topics (Hu and Boyd-Graber, 2012; Yang et al., 2015). Information about documents (Ramage et al., 2009) or words (Boyd-Graber et al., 2007) can improve LDA’s topics. In addition to its occasional inscrutability, scalability can also hamper LDA’s adoption. Conventional Gibbs sampling—the most widely used inference for LDA—scales linearly with the number of topics. Moreover, accurate training usually takes many sampling passes over the dataset. Therefore, for large datasets with millions or even billions of tokens, conventional Gibbs sampling takes too long to finish. For standard LDA, recently introduced fast sampling methods (Yao et al., 2009; Li et al., 2014; Yuan et al., 2015) enable industrial applications of topic modeling to search engines and online advertising, where capturing the “long tail” of infrequently used topics requires large topic spaces. For example, while typical LDA models in academic papers have up to 103 topics, industrial applications with 105–106 topics are common (Wang et al., 2014). Moreover, scaling topic models to many topics can also reveal the hierarchical structure of topics (Downey et al., 2015). Thus, there is a need for topic models that can both benefit from rich prior information and that can scale to large datasets. However, existing methods for improving scalability focus on topic models without prior information. To rectify this, we propose a factor graph model that encodes a potential function over the hidden topic variables, encouraging topics consistent with prior knowledge. The factor model representation admits an efficient sampling algorithm that takes advantage of the model’s sparsity. We show that our method achieves comparable performance but runs significantly faster than baseline methods, enabling models to discover models with many topics enriched by prior knowledge. 2 Efficient Algorithm for Incorporating Knowledge into LDA In this section, we introduce the factor model for incorporating prior knowledge and show how to efficiently use Gibbs sampling for inference. 2.1 Background: LDA and SparseLDA A statistical topic model represents words in documents in a collection D as mixtures of T topics, which are multinomials over a vocabulary of size V . In LDA, each document d is associated with a multinomial distribution over topics, θd. The probability of a word type w given topic z is φw|z . The multinomial distributions θd and φz are drawn from Dirichlet distributions: α and β are the hyperparameters for θ and φ. We represent the document collection D as a sequence of words w, and topic assignments as z. We use symmetric priors α and β in the model and experiment, but asymmetric priors are easily encoded in the models (Wallach et al., 2009). Discovering the latent topic assignments z from observed words w requires inferring the the posterior distribution P (z|w). 
Griffiths and Steyvers (2004) propose using collapsed Gibbs sampling. The probability of a topic assignment z = t in document d given an observed word type w and the other topic assignments z− is P(z = t | z−, w) ∝ (n_{d,t} + α)(n_{w,t} + β)/(n_t + V β)", "title": "" }, { "docid": "812c1713c1405c4925c6c6057624465b", "text": "The fuel cell hybrid tramway has gained increasing attention recently, and energy management strategy (EMS) is one of its key technologies. A hybrid tramway power system consisting of proton exchange membrane fuel cell (PEMFC) and battery is designed in the MATLAB/SIMULINK software as a basis for the energy management strategy research. An equivalent consumption minimization strategy (ECMS) for the hybrid tramway is proposed and embedded into the aforementioned hybrid model. In order to evaluate the proposed energy management, a real tramway driving cycle is adopted for simulation on the RT-LAB platform. The simulation results prove the effectiveness of the proposed EMS.", "title": "" }, { "docid": "875e12852dabbcabe24cc59b764a4226", "text": "As more and more marketers incorporate social media as an integral part of the promotional mix, rigorous investigation of the determinants that impact consumers’ engagement in eWOM via social networks is becoming critical. Given the social and communal characteristics of social networking sites (SNSs) such as Facebook, MySpace and Friendster, this study examines how social relationship factors relate to eWOM transmitted via online social websites. Specifically, a conceptual model that identifies tie strength, homophily, trust, normative and informational interpersonal influence as an important antecedent to eWOM behaviour in SNSs was developed and tested. The results confirm that tie strength, trust, normative and informational influence are positively associated with users’ overall eWOM behaviour, whereas a negative relationship was found with regard to homophily. This study suggests that product-focused eWOM in SNSs is a unique phenomenon with important social implications. The implications for researchers, practitioners and policy makers of social media regulation are discussed.", "title": "" }, { "docid": "4a098609770618240fbaebbbc891883d", "text": "We present CHARAGRAM embeddings, a simple approach for learning character-based compositional models to embed textual sequences. A word or sentence is represented using a character n-gram count vector, followed by a single nonlinear transformation to yield a low-dimensional embedding. We use three tasks for evaluation: word similarity, sentence similarity, and part-of-speech tagging. We demonstrate that CHARAGRAM embeddings outperform more complex architectures based on character-level recurrent and convolutional neural networks, achieving new state-of-the-art performance on several similarity tasks.", "title": "" }, { "docid": "c1eefd9a127a0ea9c7e43fdfbdba689e", "text": "We present TRANX, a transition-based neural semantic parser that maps natural language (NL) utterances into formal meaning representations (MRs). TRANX uses a transition system based on the abstract syntax description language for the target MR, which gives it two major advantages: (1) it is highly accurate, using information from the syntax of the target MR to constrain the output space and model the information flow, and (2) it is highly generalizable, and can easily be applied to new types of MR by just writing a new abstract syntax description corresponding to the allowable structures in the MR. 
Experiments on four different semantic parsing and code generation tasks show that our system is generalizable, extensible, and effective, registering strong results compared to existing neural semantic parsers.1", "title": "" }, { "docid": "9b1cf040b59dd25528b58d281e796ad9", "text": "The rapid development of Web2.0 leads to significant information redundancy. Especially for a complex news event, it is difficult to understand its general idea within a single coherent picture. A complex event often contains branches, intertwining narratives and side news which are all called storylines. In this paper, we propose a novel solution to tackle the challenging problem of storylines extraction and reconstruction. Specifically, we first investigate two requisite properties of an ideal storyline. Then a unified algorithm is devised to extract all effective storylines by optimizing these properties at the same time. Finally, we reconstruct all extracted lines and generate the high-quality story map. Experiments on real-world datasets show that our method is quite efficient and highly competitive, which can bring about quicker, clearer and deeper comprehension to readers.", "title": "" }, { "docid": "1c83ce2568af5cc3679b69282b25c35d", "text": "A useful ability for search engines is to be able to rank objects with novelty and diversity: the top k documents retrieved should cover possible intents of a query with some distribution, or should contain a diverse set of subtopics related to the user’s information need, or contain nuggets of information with little redundancy. Evaluation measures have been introduced to measure the effectiveness of systems at this task, but these measures have worst-case NP-hard computation time. The primary consequence of this is that there is no ranking principle akin to the Probability Ranking Principle for document relevance that provides uniform instruction on how to rank documents for novelty and diversity. We use simulation to investigate the practical implications of this for optimization and evaluation of retrieval systems.", "title": "" }, { "docid": "ea12fe9b91253634422471024f9d28f8", "text": "Maximum and minimum computed across channels is used to monitor the Electroencephalogram signals for possible change of the eye state. Upon detection of a possible change, the last two seconds of the signal is passed through Multivariate Empirical Mode Decomposition and relevant features are extracted. The features are then fed into Logistic Regression and Artificial Neural Network classifiers to confirm the eye state change. The proposed algorithm detects the eye state change with 88.2% accuracy in less than two seconds. This provides a valuable improvement in comparison to a recent procedure that takes about 20 minutes to classify new instances with 97.3% accuracy. The introduced algorithm is promising in the real-time eye state classification as increasing the training examples would increase its accuracy. Published by Elsevier Ltd.", "title": "" }, { "docid": "5d886e7ee0006440161112b4ac6903b4", "text": "We report on the design and development of X-RHex, a hexapedal robot with a single actuator per leg, intended for real-world mobile applications. X-RHex is an updated version of the RHex platform, designed to offer substantial improvements in power, run-time, payload size, durability, and terrain negotiation, with a smaller physical volume and a comparable footprint and weight. 
Furthermore, X-RHex is designed to be easier to build and maintain by using a variety of commercial off-the-shelf (COTS) components for a majority of its internals. This document describes the X-RHex architecture and design, with a particular focus on the new ability of this robot to carry modular payloads as a laboratory on legs. X-RHex supports a variety of sensor suites on a small, mobile robotic platform intended for broad, general use in research, defense, and search and rescue applications. Comparisons with previous RHex platforms are presented throughout, with preliminary tests indicating that the locomotive capabilities of X-RHex can meet or exceed the previous platforms. With the additional payload capabilities of X-RHex, we claim it to be the first robot of its size to carry a fully programmable GPU for fast, parallel sensor processing.", "title": "" } ]
scidocsrr
0e64848e074e909fa708e882acdc40ce
Weighted color and texture sample selection for image matting
[ { "docid": "d4aaea0107cbebd7896f4cb57fa39c05", "text": "A novel method is proposed for performing multilabel, interactive image segmentation. Given a small number of pixels with user-defined (or predefined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach one of the prelabeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained. Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs", "title": "" }, { "docid": "8076620d4905b087d10ee7fba14bd2ec", "text": "Image matting aims at extracting foreground elements from an image by mean s of color and opacity (alpha) estimation. While a lot of progress has been made in recent years on improv ing the accuracy of matting techniques, one common problem persisted: the low speed of matte computation. We pre sent the first real-time matting technique for natural images and videos. Our technique is based on the obser vation that, for small neighborhoods, pixels tend to share similar attributes. Therefore, independently treating eac h pixel in the unknown regions of a trimap results in a lot of redundant work. We show how this computation can be significantly and safely reduced by means of a careful selection of pairs of background and foreground s amples. Our technique achieves speedups of up to two orders of magnitude compared to previous ones, while producin g high-quality alpha mattes. The quality of our results has been verified through an independent benchmark. The speed of our technique enables, for the first time, real-time alpha matting of videos, and has the potential to enable a n ew class of exciting applications.", "title": "" } ]
[ { "docid": "0b6ac11cb84a573e55cb75f0bc342d72", "text": "This paper develops and tests algorithms for predicting the end-to-end route of a vehicle based on GPS observations of the vehicle’s past trips. We show that a large portion of a typical driver’s trips are repeated. Our algorithms exploit this fact for prediction by matching the first part of a driver’s current trip with one of the set of previously observed trips. Rather than predicting upcoming road segments, our focus is on making long term predictions of the route. We evaluate our algorithms using a large corpus of real world GPS driving data acquired from observing over 250 drivers for an average of 15.1 days per subject. Our results show how often and how accurately we can predict a driver’s route as a function of the distance already driven.", "title": "" }, { "docid": "d58c81bf22cdad5c1a669dd9b9a77fbd", "text": "The rapid increase in healthcare demand has seen novel developments in health monitoring technologies, such as the body area networks (BAN) paradigm. BAN technology envisions a network of continuously operating sensors, which measure critical physical and physiological parameters e.g., mobility, heart rate, and glucose levels. Wireless connectivity in BAN technology is key to its success as it grants portability and flexibility to the user. While radio frequency (RF) wireless technology has been successfully deployed in most BAN implementations, they consume a lot of battery power, are susceptible to electromagnetic interference and have security issues. Intrabody communication (IBC) is an alternative wireless communication technology which uses the human body as the signal propagation medium. IBC has characteristics that could naturally address the issues with RF for BAN technology. This survey examines the on-going research in this area and highlights IBC core fundamentals, current mathematical models of the human body, IBC transceiver designs, and the remaining research challenges to be addressed. IBC has exciting prospects for making BAN technologies more practical in the future.", "title": "" }, { "docid": "8eb51537b051bbf78d87a0cd48e9d90c", "text": "One of the important techniques of Data mining is Classification. Many real world problems in various fields such as business, science, industry and medicine can be solved by using classification approach. Neural Networks have emerged as an important tool for classification. The advantages of Neural Networks helps for efficient classification of given data. In this study a Heart diseases dataset is analyzed using Neural Network approach. To increase the efficiency of the classification process parallel approach is also adopted in the training phase.", "title": "" }, { "docid": "afe4c8e46449bfa37a04e67595d4537b", "text": "Gamification is the use of game design elements in non-game settings to engage participants and encourage desired behaviors. It has been identified as a promising technique to improve students' engagement which could have a positive impact on learning. This study evaluated the learning effectiveness and engagement appeal of a gamified learning activity targeted at the learning of C-programming language. Furthermore, the study inquired into which gamified learning activities were more appealing to students. The study was conducted using the mixed-method sequential explanatory protocol. The data collected and analysed included logs, questionnaires, and pre- and post-tests. 
The results of the evaluation show positive effects on the engagement of students toward the gamified learning activities and a moderate improvement in learning outcomes. Students reported different motivations for continuing and stopping activities once they completed the mandatory assignment. The preferences for different gamified activities were also conditioned by academic milestones.", "title": "" }, { "docid": "9fc2d92c42400a45cb7bf6c998dc9236", "text": "This paper presents a new probabilistic model of information retrieval. The most important modeling assumption made is that documents and queries are defined by an ordered sequence of single terms. This assumption is not made in well-known existing models of information retrieval, but is essential in the field of statistical natural language processing. Advances already made in statistical natural language processing will be used in this paper to formulate a probabilistic justification for using tf×idf term weighting. The paper shows that the new probabilistic interpretation of tf×idf term weighting might lead to better understanding of statistical ranking mechanisms, for example by explaining how they relate to coordination level ranking. A pilot experiment on the TREC collection shows that the linguistically motivated weighting algorithm outperforms the popular BM25 weighting algorithm.", "title": "" }, { "docid": "40f2565bd4b167954450c050ac3a9fd7", "text": "No-limit Texas hold’em is the most popular form of poker. Despite artificial intelligence (AI) successes in perfect-information games, the private information and massive game tree have made no-limit poker difficult to tackle. We present Libratus, an AI that, in a 120,000-hand competition, defeated four top human specialist professionals in heads-up no-limit Texas hold’em, the leading benchmark and long-standing challenge problem in imperfect-information game solving. Our game-theoretic approach features application-independent techniques: an algorithm for computing a blueprint for the overall strategy, an algorithm that fleshes out the details of the strategy for subgames that are reached during play, and a self-improver algorithm that fixes potential weaknesses that opponents have identified in the blueprint strategy.", "title": "" }, { "docid": "2d6c085f30847fe3745e0a8d7d93ea9c", "text": "Deep gated convolutional networks have been proved to be very effective in single channel speech separation. However current state-of-the-art framework often considers training the gated convolutional networks in time-frequency (TF) domain. Such an approach will result in limited perceptual score, such as signal-to-distortion ratio (SDR) upper bound of separated utterances and also fail to exploit an end-to-end framework. In this paper we present an integrated simple and effective end-to-end approach to monaural speech separation, which consists of deep gated convolutional neural networks (GCNN) that takes the mixed utterance of two speakers and maps it to two separated utterances, where each utterance contains only one speaker’s voice. In addition long shortterm memory (LSTM) is employed for long term temporal modeling. For the objective, we propose to train the network by directly optimizing utterance level SDR in a permutation invariant training (PIT) style. 
Our experiments on the public WSJ0-2mix data corpus demonstrate that this new scheme can produce more discriminative separated utterances and leading to performance improvement on the speaker separation task.", "title": "" }, { "docid": "9b06bfb67641fa009e51e1077b7a2434", "text": "This paper presents the results of an exploratory study carried out to learn about the use and impact of Information and Communication Technologies (ICT) on Small and Medium Sized Enterprises (SMEs) in Oman. The study investigates ICT infrastructure, software used, driver for ICT investment, perceptions about business benefits of ICT and outsourcing trends of SMEs. The study provides an insight on the barriers for the adoption of ICT. Data on these aspects of ICT was collected from 51 SMEs through a survey instrument. The results of the study show that only a small number of SMEs in Oman are aware of the benefits of ICT adoption. The main driving forces for ICT investment are to provide better and faster customer service and to stay ahead of the competition. A majority of surveyed SMEs have reported a positive performance and other benefits by utilizing ICT in their businesses. Majority of SMEs outsource most of their ICT activities. Lack of internal capabilities, high cost of ICT and lack of information about suitable ICT solutions and implementation were some of the major barriers in adopting ICT. These findings are consistent with other studies e.g. (Harindranath et al 2008). There is a need for more focus and concerted efforts on increasing awareness among SMEs on the benefits of ICT adoption. The results of the study recognize the need for more training facilities in ICT for SMEs, measures to provide ICT products and services at an affordable cost, and availability of free professional advice and consulting at reasonable cost to SMEs. Our findings therefore have important implication for policy aimed at ICT adoption and use by SMEs. The findings of this research will provide a foundation for future research and will help policy makers in understanding the current state of affairs of the usage and impact of ICT on SMEs in Oman.", "title": "" }, { "docid": "9faf87e51078bb92f146ba4d31f04c7f", "text": "This paper first describes the problem of goals nonreachable with obstacles nearby when using potential field methods for mobile robot path planning. Then, new repulsive potential functions are presented by taking the relative distance between the robot and the goal into consideration, which ensures that the goal position is the global minimum of the total potential.", "title": "" }, { "docid": "cfe31ce3a6a23d9148709de6032bd90b", "text": "I argue that Non-Photorealistic Rendering (NPR) research will play a key role in the scientific understanding of visual art and illustration. NPR can contribute to scientific understanding of two kinds of problems: how do artists create imagery, and how do observers respond to artistic imagery? I sketch out some of the open problems, how NPR can help, and what some possible theories might look like. Additionally, I discuss the thorny problem of how to evaluate NPR research and theories.", "title": "" }, { "docid": "ae937be677ca7c0714bde707816171ff", "text": "The authors examined how time orientation and morningness-eveningness relate to 2 forms of procrastination: indecision and avoidant forms. 
Participants were 509 adults (M age = 49.78 years, SD = 6.14) who completed measures of time orientation, morningness-eveningness, decisional procrastination (i.e., indecision), and avoidant procrastination. Results showed that morningness was negatively related to avoidant procrastination but not decisional procrastination. Overall, the results indicated different temporal profiles for indecision and avoidant procrastinations. Avoidant procrastination related to low future time orientation and low morningness, whereas indecision related to both (a) high negative and high positive past orientations and (b) low present-hedonistic and low future time orientations. The authors inferred that distinct forms of procrastination seem different on the basis of dimensions of time.", "title": "" }, { "docid": "d8d86da66ebeaae73e9aaa2a30f18bb5", "text": "In this paper, a novel approach to the characterization of structural damage in civil structures is presented. Structural damage often results in subtle changes to structural stiffness and damping properties that are manifested by changes in the location of transfer function characteristic equation roots (poles) upon the complex plane. Using structural response time-history data collected from an instrumented structure, transfer function poles can be estimated using traditional system identification methods. Comparing the location of poles corresponding to the structure in an unknown structural state to those of the undamaged structure, damage can be accurately identified. The IASC-ASCE structural health monitoring benchmark structure is used in this study to illustrate the merits of the transfer function pole migration approach to damage detection in civil structures.", "title": "" }, { "docid": "2f362f4c9b56a44af8e93dad107e3995", "text": "Microstrip filters are widely used in microwave circuit, This paper briefly describes the design principle of microstrip bandstop filter (BSF). A compact wide band high rejection BSF is presented. This filter consists of two parts: defected ground structures filter (DGS) and spurline filter. Due to the inherently compact characteristics of the spurline and DGS, the proposed filter shows a better rejection performance than open stub BSF in the same circuit size. The results of simulation and optimization given by HFSS12 prove the correctness of the design.", "title": "" }, { "docid": "45b1cb6c9393128c9a9dcf9dbeb50778", "text": "Bitcoin, a distributed, cryptographic, digital currency, gained a lot of media attention for being an anonymous e-cash system. But as all transactions in the network are stored publicly in the blockchain, allowing anyone to inspect and analyze them, the system does not provide real anonymity but pseudonymity. There have already been studies showing the possibility to deanonymize bitcoin users based on the transaction graph and publicly available data. Furthermore, users could be tracked by bitcoin exchanges or shops, where they have to provide personal information that can then be linked to their bitcoin addresses. Special bitcoin mixing services claim to obfuscate the origin of transactions and thereby increase the anonymity of its users. In this paper we evaluate three of these services – Bitcoin Fog, BitLaundry, and the Send Shared functionality of Blockchain.info – by analyzing the transaction graph. 
While Bitcoin Fog and Blockchain.info successfully mix our transaction, we are able to find a direct relation between the input and output transactions in the graph of BitLaundry.", "title": "" }, { "docid": "d46c44e5a4bc2e0dd1423394534409d3", "text": "This paper describes a heterogeneous computer cluster called Axel. Axel contains a collection of nodes; each node can include multiple types of accelerators such as FPGAs (Field Programmable Gate Arrays) and GPUs (Graphics Processing Units). A Map-Reduce framework for the Axel cluster is presented which exploits spatial and temporal locality through different types of processing elements and communication channels. The Axel system enables the first demonstration of FPGAs, GPUs and CPUs running collaboratively for N-body simulation. Performance improvement from 4.4 times to 22.7 times has been achieved using our approach, which shows that the Axel system can combine the benefits of the specialization of FPGA, the parallelism of GPU, and the scalability of computer clusters.", "title": "" }, { "docid": "28d7c171b05309d9a4ec4aa9ec4f66e1", "text": "A cost and energy efficient method of wind power generation is to connect the output of the turbine to a doubly-fed induction generator (DFIG), allowing operation at a range of variable speeds. While for electrical engineers the electromagnetic components in such a system, like the electric machine, power electronic converter and magnetic filters are of most interest, a DFIG wind turbine is a complex design involving multiple physical domains strongly interacting with each other. The electrical system, for instance, is influenced by the converter’s cooling system and mechanical components, including the rotor blades, shaft and gearbox. This means that during component selection and design of control schemes, the influence of domains on one another must be considered in order to achieve an optimized overall system performance such that the design is dynamic, efficient and cost-effective. In addition to creating an accurate model of the entire system, it is also important to model the real-world operating and fault conditions. For fast prototyping and performance prediction, computer-based simulation has been widely adopted in the engineering development process. Modeling such complex systems while including switching power electronic converters requires a powerful and robust simulation tool. Furthermore, a rapid solver is critical to allow for developing multiple iterative enhancements based on insight gained through system simulation studies.", "title": "" }, { "docid": "90b59d264de9bc4054f4905c47e22596", "text": "Bronson (1974) reviewed evidence in support of the claim that the development of visually guided behavior in the human infant over the first few months of life represents a shift from subcortical to cortical visual processing. Recently, this view has been brought into question for two reasons; first, evidence revealing apparently sophisticated perceptual abilities in the newborn, and second, increasing evidence for multiple cortica streams of visual processing. 
The present paper presents a reanalysis of the relation between the maturation of cortical pathways and the development of visually guided behavior, focusing in particular on how the maturational state of the primary visual cortex may constrain the functioning of neural pathways subserving oculomotor control.", "title": "" }, { "docid": "e8824408140898ac81fba94530f6e43e", "text": "The Bag-of-Visual-Words model has emerged as an effective approach to represent local video features for human actions classification. However, one of the major challenges in this model is the generation of the visual vocabulary. In the case of human action recognition, losing spatial-temporal relationships is one of the important reasons that provokes the low descriptive power of classic visual words. In this work we propose a three-level approach to construct visual n-grams for human action classification. First, in order to reduce the number of non-descriptive words generated by K-means clustering of the spatio-temporal interest points, we propose to apply a variant of the classsical Leader-Follower clustering algorithm to create an optimal vocabulary from a pre-established number of visual words. Second, with the aim of incorporating spatial and temporal constraints to the Bag-of-Visual-Words model, we exploit the spatio-temporal relationships between interest points to build a graphbased representation of the video. Frequent subgraphs are extracted for each action class and a visual vocabulary of n-grams is constructed from the labels (descriptors) of selected subgraphs. Finally, we build a histogram by using the frequency of each n-gram in the graph representing a video of human action. The proposed approach combines the representational power of graphs with the efficiency of the Bag-of-Visual-Words model. Extensive validation on five challenging human actions datasets demonstrates the effectiveness of the proposed model compared to state-of-the-art methods.", "title": "" }, { "docid": "3902afc560de6f0b028315977bc55976", "text": "Traffic light congestion normally occurs in urban areas where the number of vehicles is too many on the road. This problem drives the need for innovation and provide efficient solutions regardless this problem. Smart system that will monitor the congestion level at the traffic light will be a new option to replace the old system which is not practical anymore. Implementing internet of thinking (IoT) technology will provide the full advantage for monitoring and creating a congestion model based on sensor readings. Multiple sensor placements for each lane will give a huge advantage in detecting vehicle and increasing the accuracy in collecting data. To gather data from each sensor, the LoRaWAN technology is utilized where it features low power wide area network, low cost of implementation and the communication is secure bi-directional for the internet of thinking. The radio frequency used between end nodes to gateways range is estimated around 15-kilometer radius. A series of test is carried out to estimate the range of signal and it gives a positive result. The level of congestion for each lane will be displayed on Grafana dashboard and the algorithm can be calculated. 
This provides huge advantages to the implementation of this project, especially since the scope of the project focuses on urban areas where the level of congestion is high.", "title": "" }, { "docid": "3b6cef052cd7a7acc765b44292af51cc", "text": "Minimizing travel time is critical for the successful operation of emergency vehicles. Preemption can significantly help emergency vehicles reach the intended destination faster. The majority of current studies focus on minimizing and/or eliminating delays for EVs and do not consider the negative impacts of preemption on urban traffic. One primary negative impact, addressed in this paper, is the extended delay imposed on non-EV traffic by preemption. We propose an Adaptive Preemption of Traffic (APT) system for Emergency Vehicles in an Intelligent Transportation System. We utilize knowledge of current traffic conditions in the transportation system to adaptively preempt traffic at signals along the path of EVs so as to minimize, if not eliminate, stopped delays for EVs while simultaneously minimizing the delays for non-emergency vehicles in the system. Through extensive simulation results, we show substantial reductions in delays for both EVs and non-EV traffic.", "title": "" } ]
scidocsrr
5236af127d16f754b00e4793a3df0781
Cost-aware travel tour recommendation
[ { "docid": "13b887760a87bc1db53b16eb4fba2a01", "text": "Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.", "title": "" }, { "docid": "8c043576bd1a73b783890cdba3a5e544", "text": "We present a novel approach to collaborative prediction, using low-norm instead of low-rank factorizations. The approach is inspired by, and has strong connections to, large-margin linear discrimination. We show how to learn low-norm factorizations by solving a semi-definite program, and discuss generalization error bounds for them.", "title": "" }, { "docid": "21756eeb425854184ba2ea722a935928", "text": "Collaborative filtering aims at learning predictive models of user preferences, interests or behavior from community data, that is, a database of available user preferences. In this article, we describe a new family of model-based algorithms designed for this task. These algorithms rely on a statistical modelling technique that introduces latent class variables in a mixture model setting to discover user communities and prototypical interest profiles. We investigate several variations to deal with discrete and continuous response variables as well as with different objective functions. The main advantages of this technique over standard memory-based methods are higher accuracy, constant time prediction, and an explicit and compact model representation. The latter can also be used to mine for user communitites. The experimental evaluation shows that substantial improvements in accucracy over existing methods and published results can be obtained.", "title": "" } ]
[ { "docid": "cbde86d9b73371332a924392ae1f10d0", "text": "The difficulty to solve multiple objective combinatorial optimization problems with traditional techniques has urged researchers to look for alternative, better performing approaches for them. Recently, several algorithms have been proposed which are based on the Ant Colony Optimization metaheuristic. In this contribution, the existing algorithms of this kind are reviewed and experimentally tested in several instances of the bi-objective traveling salesman problem, comparing their performance with that of two well-known multi-objective genetic algorithms.", "title": "" }, { "docid": "e1f2647131e9194bc4edfd9c629900a8", "text": "Thomson coil actuators (also known as repulsion coil actuators) are well suited for vacuum circuit breakers when fast operation is desired such as for hybrid AC and DC circuit breaker applications. This paper presents investigations on how the actuator drive circuit configurations as well as their discharging pulse patterns affect the magnetic force and therefore the acceleration, as well as the mechanical robustness of these actuators. Comprehensive multi-physics finite-element simulations of the Thomson coil actuated fast mechanical switch are carried out to study the operation transients and how to maximize the actuation speed. Different drive circuits are compared: three single switch circuits are evaluated; the pulse pattern of a typical pulse forming network circuit is studied, concerning both actuation speed and maximum stress; a two stage drive circuit is also investigated. A 630 A, 15 kV / 1 ms prototype employing a vacuum interrupter with 6 mm maximum open gap was developed and tested. The total moving mass accelerated by the actuator is about 1.2 kg. The measured results match well with simulated results in the FEA study.", "title": "" }, { "docid": "33aeefad356ea15487894e2c9b9717f4", "text": "The Netflix Prize (NP) competition gave much attention to collaborative filtering (CF) approaches. Matrix factorization (MF) based CF approaches assign low dimensional feature vectors to users and items. We link CF and content-based filtering (CBF) by finding a linear transformation that transforms user or item descriptions so that they are as close as possible to the feature vectors generated by MF for CF. We propose methods for explicit feedback that are able to handle 140,000 features when feature vectors are very sparse. With movie metadata collected for the NP movies we show that the prediction performance of the methods is comparable to that of CF, and can be used to predict user preferences on new movies. We also investigate the value of movie metadata compared to movie ratings in regards of predictive power. We compare our solely CBF approach with a simple baseline rating-based predictor. We show that even 10 ratings of a new movie are more valuable than its metadata for predicting user ratings.", "title": "" }, { "docid": "a5f7a243e68212e211d9d89da06ceae1", "text": "A new technique to achieve a circularly polarized probe-fed single-layer microstrip-patch antenna with a wideband axial ratio is proposed. The antenna is a modified form of the conventional E-shaped patch, used to broaden the impedance bandwidth of a basic patch antenna. By letting the two parallel slots of the E patch be unequal, asymmetry is introduced. This leads to two orthogonal currents on the patch and, hence, circularly polarized fields are excited. 
The proposed technique exhibits the advantage of the simplicity of the E-shaped patch design, which requires only the slot lengths, widths, and position parameters to be determined. Investigations of the effect of various dimensions of the antenna have been carried out via parametric analysis. Based on these investigations, a design procedure for a circularly polarized E-shaped patch was developed. A prototype has been designed, following the suggested procedure for the IEEE 802.11big WLAN band. The performance of the fabricated antenna was measured and compared with simulation results. Various examples with different substrate thicknesses and material types are presented and compared with the recently proposed circularly polarized U-slot patch antennas.", "title": "" }, { "docid": "002d6e5a13bc605746b4c8a6b9ecd498", "text": "The properties of the so-called time dependent dielectric breakdown (TDDB) of silicon dioxide-based gate dielectric for microelectronics technology have been investigated and reviewed. Experimental data covering a wide range of oxide thickness, stress voltage, temperature, and for the two bias polarities were gathered using structures with a wide range of gate oxide areas, and over very long stress times. Thickness dependence of oxide breakdown was shown to be in excellent agreement with statistical models founded in the percolation theory which explain the drastic reduction of the time-to-breakdown with decreasing oxide thickness. The voltage dependence of time-to-breakdown was found to follow a power-law behavior rather than an exponential law as commonly assumed. Our investigation on the inter-relationship between voltage and temperature dependencies of oxide breakdown reveals that a strong temperature activation with non-Arrhenius behavior is consistent with the power-law voltage dependence. The power-law voltage dependence in combination with strong temperature activation provides the most important reliability relief in compensation for the strong decrease of time-to-breakdown resulting from the reduction of the oxide thickness. Using the maximum energy of injected electrons at the anode interface as breakdown variable, we have resolved the polarity gap of timeand charge-to-breakdown (TBD and QBD), confirming that the fluency and the electron energy at anode interface are the fundamental quantities controlling oxide breakdown. Combining this large database with a recently proposed cell-based analytical version of the percolation model, we extract the defect generation efficiency responsible for breakdown. Following a review of different breakdown mechanisms and models, we discuss how the release of hydrogen through the coupling between vibrational and electronic degrees of freedom can explain the power-law dependence of defect generation efficiency. On the basis of these results, a unified and global picture of oxide breakdown is constructed and the resulting model is applied to project reliability limits. In this regard, it is concluded that SiO2-based dielectrics can provide reliable gate dielectric, even to a thickness of 1 nm, and that CMOS scaling may well be viable for the 50 nm technology node. 2005 Elsevier Ltd. All rights reserved. 0026-2714/$ see front matter 2005 Elsevier Ltd. All rights reserv doi:10.1016/j.microrel.2005.04.004 * Corresponding author. Tel.: +1 802 769 1217; fax: +1 802 769 1220. E-mail address: eywu@us.ibm.com (E.Y. 
Wu).", "title": "" }, { "docid": "2d7fb00b932f3d65f88307fd219a537c", "text": "Web sites have been extensively used to provide information to consumers. While practitioners and researchers have proposed different criteria for effective Web site design based on common sense, intuition, and rules-ofthumb, effective Web site design focusing on the quality of the information it provides has rarely been studied. In this research, we propose a framework and develop an instrument to measure the information quality of individual or personal Web sites. The theoretical foundation of this research is the information quality framework. The proposed framework and instrument were tested in an individual or personal Web site context.", "title": "" }, { "docid": "099b00a2b60ece15a12710016614c562", "text": "Network Function Virtualization (NFV) is a promising technology that promises to significantly reduce the operational costs of network services by deploying virtualized network functions (VNFs) to commodity servers in place of dedicated hardware middleboxes. The VNFs are typically running on virtual machine instances in a cloud infrastructure, where the virtualization technology enables dynamic provisioning of VNF instances, to process the fluctuating traffic that needs to go through the network functions in a network service. In this paper, we target dynamic provisioning of enterprise network services - expressed as one or multiple service chains - in cloud datacenters, and design efficient online algorithms without requiring any information on future traffic rates. The key is to decide the number of instances of each VNF type to provision at each time, taking into consideration the server resource capacities and traffic rates between adjacent VNFs in a service chain. In the case of a single service chain, we discover an elegant structure of the problem and design an efficient randomized algorithm achieving a e/(e-1) competitive ratio. For multiple concurrent service chains, an online heuristic algorithm is proposed, which is O(1)-competitive. We demonstrate the effectiveness of our algorithms using solid theoretical analysis and trace-driven simulations.", "title": "" }, { "docid": "08e21e7a4e944f06c4a4502dcdb3d854", "text": "Numquam ponenda est pluralitas sine necessitate 'Plurality should never be proposed unless needed' William of Occam Classification lies at the heart of both human and machine intelligence. Deciding what letter, word, or image has been presented to our senses, recognizing faces or voices, sorting mail, assigning grades to homeworks, are all examples of assigning a class or category to an input. The potential challenges of this task are highlighted by the fabulist Jorge Luis Borges (1964), who imagined classifying animals into: (a) those that belong to the Emperor, (b) embalmed ones, (c) those that are trained, (d) suckling pigs, (e) mermaids, (f) fabulous ones, (g) stray dogs, (h) those that are included in this classification, (i) those that tremble as if they were mad, (j) innumerable ones, (k) those drawn with a very fine camel's hair brush, (l) others, (m) those that have just broken a flower vase, (n) those that resemble flies from a distance. While many language processing tasks can be productively viewed as tasks of classification, the classes are luckily far more practical than those of Borges. 
In this chapter we present two general algorithms for classification, demonstrated on one important set of classification problems: text categorization, the task of classifying text categorization an entire text by assigning it a label drawn from some set of labels. We focus on one common text categorization task, sentiment analysis, the ex-sentiment analysis traction of sentiment, the positive or negative orientation that a writer expresses toward some object. A review of a movie, book, or product on the web expresses the author's sentiment toward the product, while an editorial or political text expresses sentiment toward a candidate or political action. Automatically extracting consumer sentiment is important for marketing of any sort of product, while measuring public sentiment is important for politics and also for market prediction. The simplest version of sentiment analysis is a binary classification task, and the words of the review provide excellent cues. Consider, for example, the following phrases extracted from positive and negative reviews of movies and restaurants,. Words like great, richly, awesome, and pathetic, and awful and ridiculously are very informative cues: + ...zany characters and richly applied satire, and some great plot twists − It was pathetic. The worst part about it was the boxing scenes... + ...awesome caramel sauce and sweet toasty almonds. I love this place! − ...awful pizza and ridiculously overpriced... …", "title": "" }, { "docid": "ad00866e5bae76020e02c6cc76360ec8", "text": "The CASAS architecture facilitates the development and implementation of future smart home technologies by offering an easy-to-install lightweight design that provides smart home capabilities out of the box with no customization or training.", "title": "" }, { "docid": "fe98350e6fa6d91a2e63dc19646a0307", "text": "One of the most widely studied systems of argumentation is the one described by Dung in a paper from 1995. Unfortunately, this framework does not allow for joint attacks on arguments, which we argue must be required of any truly abstract argumentation framework. A few frameworks can be said to allow for such interactions among arguments, but for various reasons we believe that these are inadequate for modelling argumentation systems with joint attacks. In this paper we propose a generalization of the framework of Dung, which allows for sets of arguments to attack other arguments. We extend the semantics associated with the original framework to this generalization, and prove that all results in the paper by Dung have an equivalent in this more abstract framework.", "title": "" }, { "docid": "0570bf6abea7b8c4dcad1fb05b9672c6", "text": "The purpose of this chapter is to describe some similarities, as well as differences, between theoretical proposals emanating from the tradition of phenomenology and the currently popular approach to language and cognition known as cognitive linguistics (hence CL). This is a rather demanding and potentially controversial topic. For one thing, neither CL nor phenomenology constitute monolithic theories, and are actually rife with internal controversies. This forces me to make certain “schematizations”, since it is impossible to deal with the complexity of these debates in the space here allotted.", "title": "" }, { "docid": "2642188d1f62f49450b9034f9180baa5", "text": "A graphical abstract (GA) provides a concise visual summary of a scientific contribution. 
GAs are increasingly required by journals to help make scientific publications more accessible to readers. We characterize the design space of GAs through a qualitative analysis of 54 GAs from a range of disciplines, and descriptions of GA design principles from scientific publishers. We present a set of design dimensions, visual structures, and design templates that describe how GAs communicate via pictorial and symbolic elements. By reflecting on how GAs employ visual metaphors, representational genres, and text relative to prior characterizations of how diagrams communicate, our work sheds light on how and why GAs may be distinct. We outline steps for future work at the intersection of HCI, AI, and scientific communication aimed at the creation of GAs.", "title": "" }, { "docid": "3a3470d13c9c63af1a62ee7bc57a96ef", "text": "Cloud computing is a distributed computing model that still faces problems. New ideas emerge to take advantage of its features and among the research challenges found in the cloud, we can highlight Identity and Access Management. The main problems of the application of access control in the cloud are the necessary flexibility and scalability to support a large number of users and resources in a dynamic and heterogeneous environment, with collaboration and information sharing needs. This paper proposes the use of risk-based dynamic access control for cloud computing. The proposal is presented as an access control model based on an extension of the XACML standard with three new components: the Risk Engine, the Risk Quantification Web Services and the Risk Policies. The risk policies present a method to describe risk metrics and their quantification, using local or remote functions. The risk policies allow users and cloud service providers to define how to handle risk-based access control for their resources, using different quantification and aggregation methods. The model reaches the access decision based on a combination of XACML decisions and risk analysis. A prototype of the model is implemented, showing it has enough expressivity to describe the models of related work. In the experimental results, the prototype takes between 2 and 6 milliseconds to reach access decisions using a risk policy. A discussion on the security aspects of the model is also presented.", "title": "" }, { "docid": "c4337c7a5b53a07e41f94976418ac293", "text": "Deep neural network has shown remarkable performance in solving computer vision and some graph evolved tasks, such as node classification and link prediction. However, the vulnerability of deep model has also been revealed by carefully designed adversarial examples generated by various adversarial attack methods. With the wider application of deep model in complex network analysis, in this paper we define and formulate the link prediction adversarial attack problem and put forward a novel iterative gradient attack (IGA) based on the gradient information in trained graph auto-encoder (GAE). To our best knowledge, it is the first time link prediction adversarial attack problem is defined and attack method is brought up. Not surprisingly, GAE was easily fooled by adversarial network with only a few links perturbed on the clean network. By conducting comprehensive experiments on different real-world data sets, we can conclude that most deep model based and other state-of-art link prediction algorithms cannot escape the adversarial attack just like GAE. 
We can benefit the attack as an efficient privacy protection tool from link prediction unknown violation, on the other hand, link prediction attack can be a robustness evaluation metric for current link prediction algorithm in attack defensibility.", "title": "" }, { "docid": "74770d8f7e0ac066badb9760a6a2b925", "text": "Memristor-based synaptic network has been widely investigated and applied to neuromorphic computing systems for the fast computation and low design cost. As memristors continue to mature and achieve higher density, bit failures within crossbar arrays can become a critical issue. These can degrade the computation accuracy significantly. In this work, we propose a defect rescuing design to restore the computation accuracy. In our proposed design, significant weights in a specified network are first identified and retraining and remapping algorithms are described. For a two layer neural network with 92.64% classification accuracy on MNIST digit recognition, our evaluation based on real device testing shows that our design can recover almost its full performance when 20% random defects are present.", "title": "" }, { "docid": "9f40a57159a06ecd9d658b4d07a326b5", "text": "_____________________________________________________________________________ The aim of the present study was to investigate a cytotoxic oxidative cell stress related and the antioxidant profile of kaempferol, quercetin, and isoquercitrin. The flavonol compounds were able to act as scavengers of superoxide anion (but not hydrogen peroxide), hypochlorous acid, chloramine and nitric oxide. Although flavonoids are widely described as antioxidants and this activity is generally related to beneficial effects on human health, here we show important cytotoxic actions of three well known flavonoids. They were able to promote hemolysis which one was exacerbated on the presence of hypochlorous acid but not by AAPH radical. Therefore, WWW.SCIELO.BR/EQ VOLUME 36, NÚMERO 2, 2011", "title": "" }, { "docid": "a16b9bbb9675a14952527fb4de583d00", "text": "Adaptations in resistance training are focused on the development and maintenance of the neuromuscular unit needed for force production [97, 136]. The effects of training, when using this system, affect many other physiological systems of the body (e.g., the connective tissue, cardiovascular, and endocrine systems) [16, 18, 37, 77, 83]. Training programs are highly specific to the types of adaptation that occur. Activation of specific patterns of motor units in training dictate what tissue and how other physiological systems will be affected by the exercise training. The time course of the development of the neuromuscular system appears to be dominated in the early phase by neural factors with associated changes in the types of contractile proteins. In the later adaptation phase, muscle protein increases, and the contractile unit begins to contribute the most to the changes in performance capabilities. A host of other factors can affect the adaptations, such as functional capabilities of the individual, age, nutritional status, and behavioral factors (e.g., sleep and health habits). Optimal adaptation appears to be related to the use of specific resistance training programs to meet individual training objectives.", "title": "" }, { "docid": "bab06ca527f4a56eff82ef486ac7d728", "text": "The meaning of a sentence is a function of the relations that hold between its words. 
We instantiate this relational view of semantics in a series of neural models based on variants of relation networks (RNs) which represent a set of objects (for us, words forming a sentence) in terms of representations of pairs of objects. We propose two extensions to the basic RN model for natural language. First, building on the intuition that not all word pairs are equally informative about the meaning of a sentence, we use constraints based on both supervised and unsupervised dependency syntax to control which relations influence the representation. Second, since higher-order relations are poorly captured by a sum of pairwise relations, we use a recurrent extension of RNs to propagate information so as to form representations of higher order relations. Experiments on sentence classification, sentence pair classification, and machine translation reveal that, while basic RNs are only modestly effective for sentence representation, recurrent RNs with latent syntax are a reliably powerful representational device.", "title": "" }, { "docid": "0ce4a0dfe5ea87fb87f5d39b13196e94", "text": "Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector QuantisedVariational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of “posterior collapse” -— where the latents are ignored when they are paired with a powerful autoregressive decoder -— typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.", "title": "" }, { "docid": "e62ad0c67fa924247f05385bda313a38", "text": "Artificial neural networks have been recognized as a powerful tool for pattern classification problems, but a number of researchers have also suggested that straightforward neural-network approaches to pattern recognition are largely inadequate for difficult problems such as handwritten numeral recognition. In this paper, we present three sophisticated neural-network classifiers to solve complex pattern recognition problems: multiple multilayer perceptron (MLP) classifier, hidden Markov model (HMM)/MLP hybrid classifier, and structure-adaptive self-organizing map (SOM) classifier. In order to verify the superiority of the proposed classifiers, experiments were performed with the unconstrained handwritten numeral database of Concordia University, Montreal, Canada. The three methods have produced 97.35%, 96.55%, and 96.05% of the recognition rates, respectively, which are better than those of several previous methods reported in the literature on the same database.", "title": "" } ]
scidocsrr
da2ed32edd2a329f2cbd1aafbc314048
Synthetic Data and Artificial Neural Networks for Natural Scene Text Recognition
[ { "docid": "2e5a3cd852a53b018032804f77088d03", "text": "A general method for text localization and recognition in real-world images is presented. The proposed method is novel, as it (i) departs from a strict feed-forward pipeline and replaces it by a hypothesesverification framework simultaneously processing multiple text line hypotheses, (ii) uses synthetic fonts to train the algorithm eliminating the need for time-consuming acquisition and labeling of real-world training data and (iii) exploits Maximally Stable Extremal Regions (MSERs) which provides robustness to geometric and illumination conditions. The performance of the method is evaluated on two standard datasets. On the Char74k dataset, a recognition rate of 72% is achieved, 18% higher than the state-of-the-art. The paper is first to report both text detection and recognition results on the standard and rather challenging ICDAR 2003 dataset. The text localization works for number of alphabets and the method is easily adapted to recognition of other scripts, e.g. cyrillics.", "title": "" }, { "docid": "7197dbee035c62044a93d4e60762e3ea", "text": "The problem of detecting and recognizing text in natural scenes has proved to be more challenging than its counterpart in documents, with most of the previous work focusing on a single part of the problem. In this work, we propose new solutions to the character and word recognition problems and then show how to combine these solutions in an end-to-end text-recognition system. We do so by leveraging the recently introduced Maxout networks along with hybrid HMM models that have proven useful for voice recognition. Using these elements, we build a tunable and highly accurate recognition system that beats state-of-theart results on all the sub-problems for both the ICDAR 2003 and SVT benchmark datasets.1", "title": "" }, { "docid": "a43e646ee162a23806c3b8f0a9d69b23", "text": "This paper describes the results of the ICDAR 2005 competition for locating text in camera captured scenes. For this we used the same data as the ICDAR 2003 competition, which has been kept private until now. This allows a direct comparison with the 2003 entries. The main result is that the leading 2005 entry has improved significantly on the leading 2003 entry, with an increase in average f-score from 0.5 to 0.62, where the f-score is the same adapted information retrieval measure used for the 2003 competition. The paper also discusses the Web-based deployment and evaluation of text locating systems, and one of the leading entries has now been deployed in this way. This mode of usage could lead to more complete and more immediate knowledge of the strengths and weaknesses of each newly developed system.", "title": "" }, { "docid": "26fc8289a213c51b43777fc909eaeb7e", "text": "This paper tackles the problem of recognizing characters in images of natural scenes. In particular, we focus on recognizing characters in situations that would traditionally not be handled well by OCR techniques. We present an annotated database of images containing English and Kannada characters. The database comprises of images of street scenes taken in Bangalore, India using a standard camera. The problem is addressed in an object cateogorization framework based on a bag-of-visual-words representation. We assess the performance of various features based on nearest neighbour and SVM classification. 
It is demonstrated that the performance of the proposed method, using as few as 15 training images, can be far superior to that of commercial OCR systems. Furthermore, the method can benefit from synthetically generated training data obviating the need for expensive data collection and annotation.", "title": "" } ]
[ { "docid": "59c83aa2f97662c168316f1a4525fd4d", "text": "Light field microscopy is a new technique for high-speed volumetric imaging of weakly scattering or fluorescent specimens. It employs an array of microlenses to trade off spatial resolution against angular resolution, thereby allowing a 4-D light field to be captured using a single photographic exposure without the need for scanning. The recorded light field can then be used to computationally reconstruct a full volume. In this paper, we present an optical model for light field microscopy based on wave optics, instead of previously reported ray optics models. We also present a 3-D deconvolution method for light field microscopy that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported. To accomplish this, we take advantage of the dense spatio-angular sampling provided by a microlens array at axial positions away from the native object plane. This dense sampling permits us to decode aliasing present in the light field to reconstruct high-frequency information. We formulate our method as an inverse problem for reconstructing the 3-D volume, which we solve using a GPU-accelerated iterative algorithm. Theoretical limits on the depth-dependent lateral resolution of the reconstructed volumes are derived. We show that these limits are in good agreement with experimental results on a standard USAF 1951 resolution target. Finally, we present 3-D reconstructions of pollen grains that demonstrate the improvements in fidelity made possible by our method.", "title": "" }, { "docid": "3ec2678c6e0b7b8eb92ab5b2fc1ca504", "text": "The current trend towards smaller and smaller mobile devices may cause considerable difficulties in using them. In this paper, we propose an interface called Anywhere Surface Touch, which allows any flat or curved surface in a real environment to be used as an input area. The interface uses only a single small camera and a contact microphone to recognize several kinds of interaction between the fingers of the user and the surface. The system recognizes which fingers are interacting and in which direction the fingers are moving. Additionally, the fusion of vision and sound allows the system to distinguish the contact conditions between the fingers and the surface. Evaluation experiments showed that users became accustomed to our system quickly, soon being able to perform input operations on various surfaces.", "title": "" }, { "docid": "244745da710e8c401173fe39359c7c49", "text": "BACKGROUND\nIntegrating information from the different senses markedly enhances the detection and identification of external stimuli. Compared with unimodal inputs, semantically and/or spatially congruent multisensory cues speed discrimination and improve reaction times. Discordant inputs have the opposite effect, reducing performance and slowing responses. These behavioural features of crossmodal processing appear to have parallels in the response properties of multisensory cells in the superior colliculi and cerebral cortex of non-human mammals. 
Although spatially concordant multisensory inputs can produce a dramatic, often multiplicative, increase in cellular activity, spatially disparate cues tend to induce a profound response depression.\n\n\nRESULTS\nUsing functional magnetic resonance imaging (fMRI), we investigated whether similar indices of crossmodal integration are detectable in human cerebral cortex, and for the synthesis of complex inputs relating to stimulus identity. Ten human subjects were exposed to varying epochs of semantically congruent and incongruent audio-visual speech and to each modality in isolation. Brain activations to matched and mismatched audio-visual inputs were contrasted with the combined response to both unimodal conditions. This strategy identified an area of heteromodal cortex in the left superior temporal sulcus that exhibited significant supra-additive response enhancement to matched audio-visual inputs and a corresponding sub-additive response to mismatched inputs.\n\n\nCONCLUSIONS\nThe data provide fMRI evidence of crossmodal binding by convergence in the human heteromodal cortex. They further suggest that response enhancement and depression may be a general property of multisensory integration operating at different levels of the neuroaxis and irrespective of the purpose for which sensory inputs are combined.", "title": "" }, { "docid": "2f7dd12e2bc56cddfa4b2dbd7e7a8c1a", "text": "and the Alfred P. Sloan Foundation. Appleyard received support from the National Science Foundation under Grant No. 0438736. Jon Perr and Patrick Sullivan ably assisted with the interviews of Open Source Software leaders. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the above funding sources or any other individuals or organizations. Open Innovation and Strategy", "title": "" }, { "docid": "e1f531740891d47387a2fc2ef4f71c46", "text": "Multi-dimensional arrays, or tensors, are increasingly found in fields such as signal processing and recommender systems. Real-world tensors can be enormous in size and often very sparse. There is a need for efficient, high-performance tools capable of processing the massive sparse tensors of today and the future. This paper introduces SPLATT, a C library with shared-memory parallelism for three-mode tensors. SPLATT contains algorithmic improvements over competing state of the art tools for sparse tensor factorization. SPLATT has a fast, parallel method of multiplying a matricide tensor by a Khatri-Rao product, which is a key kernel in tensor factorization methods. SPLATT uses a novel data structure that exploits the sparsity patterns of tensors. This data structure has a small memory footprint similar to competing methods and allows for the computational improvements featured in our work. We also present a method of finding cache-friendly reordering and utilizing them with a novel form of cache tiling. To our knowledge, this is the first work to investigate reordering and cache tiling in this context. SPLATT averages almost 30x speedup compared to our baseline when using 16 threads and reaches over 80x speedup on NELL-2.", "title": "" }, { "docid": "4d8f38413169a572c0087fd180a97e44", "text": "As continued scaling of silicon FETs grows increasingly challenging, alternative paths for improving digital system energy efficiency are being pursued. 
These paths include replacing the transistor channel with emerging nanomaterials (such as carbon nanotubes), as well as utilizing negative capacitance effects in ferroelectric materials in the FET gate stack, e.g., to improve sub-threshold slope beyond the 60 mV/decade limit. However, which path provides the largest energy efficiency benefits—and whether these multiple paths can be combined to achieve additional energy efficiency benefits—is still unclear. Here, we experimentally demonstrate the first negative capacitance carbon nanotube FETs (CNFETs), combining the benefits of both carbon nanotube channels and negative capacitance effects. We demonstrate negative capacitance CNFETs, achieving sub-60 mV/decade sub-threshold slope with an average sub-threshold slope of 55 mV/decade at room temperature. The average ON-current (<inline-formula> <tex-math notation=\"LaTeX\">${I}_{ \\mathrm{ON}}$ </tex-math></inline-formula>) of these negative capacitance CNFETs improves by <inline-formula> <tex-math notation=\"LaTeX\">$2.1\\times $ </tex-math></inline-formula> versus baseline CNFETs, (i.e., without negative capacitance) for the same OFF-current (<inline-formula> <tex-math notation=\"LaTeX\">${I}_{ \\mathrm{OFF}}$ </tex-math></inline-formula>). This work demonstrates a promising path forward for future generations of energy-efficient electronic systems.", "title": "" }, { "docid": "dd942595f8187493ce08706401350969", "text": "We study the problem of cooperative multi-agent reinforcement learning with a single joint reward signal. This class of learning problems is difficult because of the often large combined action and observation spaces. In the fully centralized and decentralized approaches, we find the problem of spurious rewards and a phenomenon we call the “lazy agent” problem, which arises due to partial observability. We address these problems by training individual agents with a novel value-decomposition network architecture, which learns to decompose the team value function into agent-wise value functions.", "title": "" }, { "docid": "b894e6a16f5082bc3c28894fedc87232", "text": "Goal: The use of an online game for learning in higher education aims to make complex theoretical knowledge more approachable. Permanent repetition will lead to a more in-depth learning. Objective: To gain insight into whether and to what extent, online games have the potential to contribute to student learning in higher education. Experimental Setting: The online game was used for the first time during a lecture on Structural Concrete at Master’s level, involving 121 seventh semester students. Methods: Pretest/posttest experimental control group design with questionnaires and an independent online evaluation. Results: The minimum learning result of playing the game was equal to that achieved with traditional methods. A factor called “joy” was introduced, according to Nielsen (2002), which was amazingly high. Conclusion: The experimental findings support the efficacy of game playing. Students enjoyed this kind of e-Learning.", "title": "" }, { "docid": "9235935bc5fdc927a88cb797d6b90ffa", "text": "The wireless sensor network \"macroscope\" offers the potential to advance science by enabling dense temporal and spatial monitoring of large physical volumes. This paper presents a case study of a wireless sensor network that recorded 44 days in the life of a 70-meter tall redwood tree, at a density of every 5 minutes in time and every 2 meters in space. 
Each node measured air temperature, relative humidity, and photosynthetically active solar radiation. The network captured a detailed picture of the complex spatial variation and temporal dynamics of the microclimate surrounding a coastal redwood tree. This paper describes the deployed network and then employs a multi-dimensional analysis methodology to reveal trends and gradients in this large and previously-unobtainable dataset. An analysis of system performance data is then performed, suggesting lessons for future deployments.", "title": "" }, { "docid": "59608978a30fcf6fc8bc0b92982abe69", "text": "The self-advocacy movement (Dybwad & Bersani, 1996) grew out of resistance to oppressive practices of institutionalization (and worse) for people with cognitive disabilities. Moving beyond the worst abuses, people with cognitive disabilities seek as full participation in society as possible.", "title": "" }, { "docid": "f8956705295a454b99eb81bd41f0e8aa", "text": "Virtual Reality systems have drawn much attention by researchers and companies in the last few years. Virtual Reality is a term that applies to computer-simulated environments that can simulate physical presence in places in the real world, as well as in imaginary worlds. Interactivity and its captivating power, contribute to the feeling of being the part of the action on the virtual safe environment, without any real danger. So, Virtual Reality has been a promising technology applicable in various domains of application such as training simulators, medical and health care, education, scientific visualization, and entertainment industry. Virtual reality can lead to state of the art technologies like Second Life, too. Like many advantageous technologies, beside opportunities of Virtual Reality and Second Life, inevitable challenges appear, too. This paper is a technical brief on Virtual Reality technology and its opportunities and challenges in different areas.", "title": "" }, { "docid": "f5e6df40898a5b84f8e39784f9b56788", "text": "OBJECTIVE\nTo determine the prevalence of anxiety and depression among medical students at Nishtar Medical College, Multan.\n\n\nMETHODS\nA cross-sectional study was carried out at Nishtar Medical College, Multan in 2008. The questionnaire was administered to 815 medical students who had spent more than 6 months in college and had no self reported physical illness. They were present at the time of distribution of the questionnaires and consented. Prevalence of anxiety and depression was assessed using a structured validated questionnaire, the Aga Khan University Anxiety and Depression Scale with a cut-off score of 19. Data Analysis was done using SPSS v. 14.\n\n\nRESULTS\nOut of 815 students, 482 completed the questionnaire with a response rate of 59.14%. The mean age of students was 20.66 +/- 1.8 years. A high prevalence of anxiety and depression (43.89%) was found amongst medical students. Prevalence of anxiety and depression among students of first, second, third, fourth and final years was 45.86%, 52.58%, 47.14%, 28.75% and 45.10% respectively. Female students were found to be more depressed than male students (OR = 2.05, 95% CI = 1.42-2.95, p = 0.0001). There was a significant association between the prevalence of anxiety and depression and the respective year of medical college (p = 0.0276). 
It was seen that age, marital status, locality and total family income did not significantly affect the prevalence of anxiety and depression.\n\n\nCONCLUSIONS\nThe results showed that medical students constitute a vulnerable group that has a high prevalence of psychiatric morbidity comprising of anxiety and depression.", "title": "" }, { "docid": "8b4ddcb98f8a5c5e51f02c23b0aee764", "text": "The problem of identifying approximately duplicate record in database is an essential step for data cleaning & data integration process. A dynamic web page is displayed to show the results as well as other relevant advertisements that seem relevant to the query. The real world entities have two or more representation in databases. When dealing with large amount of data it is important that there be a well defined and tested mechanism to filter out duplicate result. This keeps the result relevant to the queries. Duplicate record exists in the query result of many web databases especially when the duplicates are defined based on only some of the fields in a record. Using exact matching technique Records that are exactly same can be detected. The system that helps user to integrate and compares the query results returned from multiple web databases matches the different sources records that referred to the same real world entity. In this paper, we analyze the literature on duplicate record detection. We cover similarity metrics which are commonly used to detect similar field entries, and present an extensive set of duplicate detection algorithms that can detect approximately duplicate records in a database also the techniques for improving the efficiency and scalability of approximate duplicate detection algorithms are covered. We conclude with coverage of existing tools and with a brief discussion of the big open problems in the area.", "title": "" }, { "docid": "3716c5aa7139aeb5ec6db87da7f0285d", "text": "In a temporal database, time values are associated with data item to indicate their periods of validity. We propose a model for temporal databases within the framework of the classical database theory. Our model is realized as a temporal parameterization of static relations. We do not impose any restrictions upon the schemes of temporal relations. The classical concepts of normal forms and dependencies are easily extended to our model, allowing a suitable design for a database scheme. We present a relational algebra and a tuple calculus for our model and prove their equivalence. Our data model is homogeneous in the sense that the periods of validity of all the attributes in a given tuple of a temporal relation are identical. We discuss how to relax the homogeneity requirement to extend the application domain of our approach.", "title": "" }, { "docid": "0991b582ad9fcc495eb534ebffe3b5f8", "text": "A computationally cheap extension from single-microphone acoustic echo cancellation (AEC) to multi-microphone AEC is presented for the case of a single loudspeaker. It employs the idea of common-acoustical-pole and zero modeling of room transfer functions (RTFs). The RTF models used for multi-microphone AEC share a fixed common denominator polynomial, which is calculated off-line by means of a multi-channel warped linear prediction. By using the common denominator polynomial as a prefilter, only the numerator polynomial has to be estimated recursively for each microphone, hence adapting to changes in the RTFs. 
This approach allows to decrease the number of numerator coefficients by one order of magnitude for each microphone compared with all-zero modeling. In a first configuration, the prefiltering is done on the adaptive filter signal, hence achieving a pole-zero model of the RTF in the AEC. In a second configuration, the (inverse) prefiltering is done on the loudspeaker signal, hence achieving a dereverberation effect, in addition to AEC, on the microphone signals.", "title": "" }, { "docid": "8c54780de6c8d8c3fa71b31015ad044e", "text": "Integrins are cell surface receptors for extracellular matrix proteins and play a key role in cell survival, proliferation, migration and gene expression. Integrin signaling has been shown to be deregulated in several types of cancer, including prostate cancer. This review is focused on integrin signaling pathways known to be deregulated in prostate cancer and known to promote prostate cancer progression.", "title": "" }, { "docid": "296f18277958621763646519a7224193", "text": "This chapter examines health promotion and disease prevention from the perspective of social cognitive theory. This theory posits a multifaceted causal structure in which self-efficacy beliefs operate in concert with cognized goals, outcome expectations, and perceived environmental impediments and facilitators in the regulation of human motivation, action, and well-being. Perceived self-efficacy is a key factor in the causal structure because it operates on motivation and action both directly and through its impact on the other determinants. The areas of overlap of sociocognitive determinants with some of the most widely applied psychosocial models of health are identified. Social cognitive theory addresses the sociostructural determinants of health as well as the personal determinants. A comprehensive approach to health promotion requires changing the practices of social systems that have widespread detrimental effects on health rather than solely changing the habits of individuals. Further progress in this field requires building new structures for health promotion, new systems for risk reduction and greater emphasis on health policy initiatives. People's beliefs in their collective efficacy to accomplish social change, therefore, play a key role in the policy and public health perspective to health promotion and disease prevention. Bandura, A. (1998). Health promotion from the perspective of social cognitive theory. Psychology and Health, 13, 623-649.", "title": "" }, { "docid": "46714f589bdf57d734fc4eff8741d39b", "text": "As an essential operation in data cleaning, the similarity join has attracted considerable attention from the database community. In this article, we study string similarity joins with edit-distance constraints, which find similar string pairs from two large sets of strings whose edit distance is within a given threshold. Existing algorithms are efficient either for short strings or for long strings, and there is no algorithm that can efficiently and adaptively support both short strings and long strings. To address this problem, we propose a new filter, called the segment filter. We partition a string into a set of segments and use the segments as a filter to find similar string pairs. We first create inverted indices for the segments. Then for each string, we select some of its substrings, identify the selected substrings from the inverted indices, and take strings on the inverted lists of the found substrings as candidates of this string. 
Finally, we verify the candidates to generate the final answer. We devise efficient techniques to select substrings and prove that our method can minimize the number of selected substrings. We develop novel pruning techniques to efficiently verify the candidates. We also extend our techniques to support normalized edit distance. Experimental results show that our algorithms are efficient for both short strings and long strings, and outperform state-of-the-art methods on real-world datasets.", "title": "" }, { "docid": "813e41234aad749022a4d655af987ad6", "text": "Three- and four-element eyepiece designs are presented each with a different type of radial gradient-index distribution. Both quadratic and modified quadratic index profiles are shown to provide effective control of the field aberrations. In particular, the three-element design with a quadratic index profile demonstrates that the inhomogeneous power contribution can make significant contributions to the overall system performance, especially the astigmatism correction. Using gradient-index components has allowed for increased eye relief and field of view making these designs comparable with five- and six-element ones.", "title": "" }, { "docid": "febed6b06359fe35437e7fa16ed0cbfa", "text": "Videos recorded on moving cameras are often known to be shaky due to unstable carrier motion and the video stabilization problem involves inferring the intended smooth motion to keep and the unintended shaky motion to remove. However, conventional methods typically require proper, scenario-specific parameter setting, which does not generalize well across different scenarios. Moreover, we observe that a stable video should satisfy two conditions: a smooth trajectory and consistent inter-frame transition. While conventional methods only target at the former condition, we address these two issues at the same time. In this paper, we propose a homography consistency based algorithm to directly extract the optimal smooth trajectory and evenly distribute the inter-frame transition. By optimizing in the homography domain, our method does not need further matrix decomposition and parameter adjustment, automatically adapting to all possible types of motion (eg. translational or rotational) and video properties (eg. frame rates). We test our algorithm on translational videos recorded from a car and rotational videos from a hovering aerial vehicle, both of high and low frame rates. Results show our method widely applicable to different scenarios without any need of additional parameter adjustment.", "title": "" } ]
scidocsrr
b95fc68fc7586b8f0b79c21da59bdca6
Integrated Speech Enhancement Method Based on Weighted Prediction Error and DNN for Dereverberation and Denoising
[ { "docid": "413b21bece889166a385651ba5cd8512", "text": "Monaural speech separation is a fundamental problem in robust speech processing. Recently, deep neural network (DNN)-based speech separation methods, which predict either clean speech or an ideal time-frequency mask, have demonstrated remarkable performance improvement. However, a single DNN with a given window length does not leverage contextual information sufficiently, and the differences between the two optimization objectives are not well understood. In this paper, we propose a deep ensemble method, named multicontext networks, to address monaural speech separation. The first multicontext network averages the outputs of multiple DNNs whose inputs employ different window lengths. The second multicontext network is a stack of multiple DNNs. Each DNN in a module of the stack takes the concatenation of original acoustic features and expansion of the soft output of the lower module as its input, and predicts the ratio mask of the target speaker; the DNNs in the same module employ different contexts. We have conducted extensive experiments with three speech corpora. The results demonstrate the effectiveness of the proposed method. We have also compared the two optimization objectives systematically and found that predicting the ideal time-frequency mask is more efficient in utilizing clean training speech, while predicting clean speech is less sensitive to SNR variations.", "title": "" } ]
[ { "docid": "f565a815207932f6603b19fc57b02d4c", "text": "This study was aimed at extending the use of assistive technology (i.e., photocells, interface and personal computer) to support choice strategies by three girls with Rett syndrome and severe to profound developmental disabilities. A second purpose of the study was to reduce stereotypic behaviors exhibited by the participants involved (i.e., body rocking, hand washing and hand mouthing). Finally, a third goal of the study was to monitor the effects of such program on the participants' indices of happiness. The study was carried out according to a multiple probe design across responses for each participant. Results showed that the three girls increased the adaptive responses and decreased the stereotyped behaviors during intervention phases compared to baseline. Moreover, during intervention phases, the indices of happiness augmented for each girl as well. Clinical, psychological and rehabilitative implications of the findings are discussed.", "title": "" }, { "docid": "d59bd1ac3d670ef980d16cf51041849c", "text": "Mutation analysis evaluates a testing or debugging technique by measuring how well it detects mutants, which are systematically seeded, artificial faults. Mutation analysis is inherently expensive due to the large number of mutants it generates and due to the fact that many of these generated mutants are not effective; they are redundant, equivalent, or simply uninteresting and waste computational resources. A large body of research has focused on improving the scalability of mutation analysis and proposed numerous optimizations to, e.g., select effective mutants or efficiently execute a large number of tests against a large number of mutants. However, comparatively little research has focused on the costs and benefits of mutation testing, in which mutants are presented as testing goals to a developer, in the context of an industrial-scale software development process. This paper draws on an industrial application of mutation testing, involving 30,000+ developers and 1.9 million change sets, written in 4 programming languages. It shows that mutation testing with productive mutants does not add a significant overhead to the software development process and reports on mutation testing benefits perceived by developers. This paper also quantifies the costs of unproductive mutants, and the results suggest that achieving mutation adequacy is neither practical nor desirable. Finally, this paper describes lessons learned from these studies, highlights the current challenges of efficiently and effectively applying mutation testing in an industrial-scale software development process, and outlines research directions.", "title": "" }, { "docid": "b00c6771f355577437dee2cdd63604b8", "text": "A person gets frustrated when he faces slow speed as many devices are connected to the same network. As the number of people accessing wireless internet increases, it’s going to result in clogged airwaves. Li-Fi is transmission of data through illumination by taking the fiber out of fiber optics by sending data through a LED light bulb that varies in intensity faster than the human eye can follow.", "title": "" }, { "docid": "7a8619e3adf03c8b00a3e830c3f1170b", "text": "We present a robot-pose-registration algorithm, which is entirely based on large planar-surface patches extracted from point clouds sampled from a three-dimensional (3-D) sensor. 
This approach offers an alternative to the traditional point-to-point iterative-closest-point (ICP) algorithm, its point-to-plane variant, as well as newer grid-based algorithms, such as the 3-D normal distribution transform (NDT). The simpler case of known plane correspondences is tackled first by deriving expressions for least-squares pose estimation considering plane-parameter uncertainty computed during plane extraction. Closed-form expressions for covariances are also derived. To round-off the solution, we present a new algorithm, which is called minimally uncertain maximal consensus (MUMC), to determine the unknown plane correspondences by maximizing geometric consistency by minimizing the uncertainty volume in configuration space. Experimental results from three 3-D sensors, viz., Swiss-Ranger, University of South Florida Odetics Laser Detection and Ranging, and an actuated SICK S300, are given. The first two have low fields of view (FOV) and moderate ranges, while the third has a much bigger FOV and range. Experimental results show that this approach is not only more robust than point- or grid-based approaches in plane-rich environments, but it is also faster, requires significantly less memory, and offers a less-cluttered planar-patches-based visualization.", "title": "" }, { "docid": "e464e7335a4bc1af76d57b158dfcf435", "text": "An elementary way of using language is to refer to objects. Often, these objects are physically present in the shared environment and reference is done via mention of perceivable properties of the objects. This is a type of language use that is modelled well neither by logical semantics nor by distributional semantics, the former focusing on inferential relations between expressed propositions, the latter on similarity relations between words or phrases. We present an account of word and phrase meaning that is perceptually grounded, trainable, compositional, and ‘dialogueplausible’ in that it computes meanings word-by-word. We show that the approach performs well (with an accuracy of 65% on a 1-out-of-32 reference resolution task) on direct descriptions and target/landmark descriptions, even when trained with less than 800 training examples and automatically transcribed utterances.", "title": "" }, { "docid": "1e82e123cacca01a84a8ea2fef641d98", "text": "We propose a new class of convex penalty functions, called variational Gram functions (VGFs), that can promote pairwise relations, such as orthogonality, among a set of vectors in a vector space. These functions can serve as regularizers in convex optimization problems arising from hierarchical classification, multitask learning, and estimating vectors with disjoint supports, among other applications. We study necessary and sufficient conditions under which a VGF is convex, and give a characterization of its subdifferential. We show how to compute its proximal operator, and discuss efficient optimization algorithms for regularized loss minimization problems where the loss admits a simple variational representation and the regularizer is a VGF. We also establish a general representer theorem for such learning problems. Lastly, numerical experiments on a hierarchical classification problem are presented to demonstrate the effectiveness of VGFs and the associated optimization algorithms.", "title": "" }, { "docid": "a3685518bd7248602b6a3143371e4ffc", "text": "The Singular Value Decomposition (SVD) of a matrix is a linear algebra tool that has been successfully applied to a wide variety of domains. 
The present paper is concerned with the problem of estimating the Jacobian of the SVD components of a matrix with respect to the matrix itself. An exact analytic technique is developed that facilitates the estimation of the Jacobian using calculations based on simple linear algebra. Knowledge of the Jacobian of the SVD is very useful in certain applications involving multivariate regression or the computation of the uncertainty related to estimates obtained through the SVD. The usefulness and generality of the proposed technique is demonstrated by applying it to the estimation of the uncertainty for three different vision problems, namely self-calibration, epipole computation and rigid motion estimation. Key-words: Singular Value Decomposition, Jacobian, Uncertainty, Calibration, Structure from Motion. M. Lourakis was supported by the VIRGO research network (EC Contract No ERBFMRX-CT96-0049) of the TMR Programme.", "title": "" }, { "docid": "f456edd4d56dab8f0a60a3cef87f6cdb", "text": "In this paper, we propose Sequential Grouping Networks (SGN) to tackle the problem of object instance segmentation. SGNs employ a sequence of neural networks, each solving a sub-grouping problem of increasing semantic complexity in order to gradually compose objects out of pixels. In particular, the first network aims to group pixels along each image row and column by predicting horizontal and vertical object breakpoints. These breakpoints are then used to create line segments. By exploiting two-directional information, the second network groups horizontal and vertical lines into connected components. Finally, the third network groups the connected components into object instances. Our experiments show that our SGN significantly outperforms state-of-the-art approaches in both, the Cityscapes dataset as well as PASCAL VOC.", "title": "" }, { "docid": "90b913e3857625f3237ff7a47f675fbb", "text": "A new approach for the design of UWB hairpin-comb filters is presented. The filters can be designed to possess broad upper stopband characteristics by controlling the overall size of their resonators. 
The measured frequency characteristics of implemented UWB filters show potential first spurious passbands centered at about six times the fundamental passband center frequencies.", "title": "" }, { "docid": "2d718fdaecb286ef437b81d2a31383dd", "text": "In this paper, we present a novel non-parametric polygonal approximation algorithm for digital planar curves. The proposed algorithm first selects a set of points (called cut-points) on the contour which are of very ‘high’ curvature. An optimization procedure is then applied to find adaptively the best fitting polygonal approximations for the different segments of the contour as defined by the cut-points. The optimization procedure uses one of the efficiency measures for polygonal approximation algorithms as the objective function. Our algorithm adaptively locates segments of the contour with different levels of details. The proposed algorithm follows the contour more closely where the level of details on the curve is high, while addressing noise by using suppression techniques. This makes the algorithm very robust for noisy, real-life contours having different levels of details. The proposed algorithm performs favorably when compared with other polygonal approximation algorithms using the popular shapes. In addition, the effectiveness of the algorithm is shown by measuring its performance over a large set of handwritten Arabic characters and MPEG7 CE Shape-1 Part B database. Experimental results demonstrate that the proposed algorithm is very stable and robust compared with other algorithms.", "title": "" }, { "docid": "e3d0d40a685d5224084bf350dfb3b59b", "text": "This review analyzes the methods being used and developed in global environmental governance (GEG), an applied field that employs insights and tools from a variety of disciplines both to understand pressing environmental problems and to determine how to address them collectively. We find that methods are often underspecified in GEG research. We undertake a critical review of data collection and analysis in three categories: qualitative, quantitative, and modeling and scenario building. We include examples and references from recent studies to show when and how best to utilize these different methods to conduct problem-driven research. GEG problems are often characterized by institutional and issue complexity, linkages, and multiscalarity that pose challenges for many conventional methodological approaches. As a result, given the large methodological toolbox available to applied researchers, we recommend they adopt a reflective, pluralist, and often collaborative approach when choosing methods appropriate to these challenges.", "title": "" }, { "docid": "6f5a3f7ddb99eee445d342e6235280c3", "text": "Although aesthetic experiences are frequent in modern life, there is as of yet no scientifically comprehensive theory that explains what psychologically constitutes such experiences. These experiences are particularly interesting because of their hedonic properties and the possibility to provide self-rewarding cognitive operations. We shall explain why modern art's large number of individualized styles, innovativeness and conceptuality offer positive aesthetic experiences. 
Moreover, the challenge of art is mainly driven by a need for understanding. Cognitive challenges of both abstract art and other conceptual, complex and multidimensional stimuli require an extension of previous approaches to empirical aesthetics. We present an information-processing stage model of aesthetic processing. According to the model, aesthetic experiences involve five stages: perception, explicit classification, implicit classification, cognitive mastering and evaluation. The model differentiates between aesthetic emotion and aesthetic judgments as two types of output.", "title": "" }, { "docid": "c536e79078d7d5778895e5ac7f02c95e", "text": "Block-based programming languages like Scratch, Alice and Blockly are becoming increasingly common as introductory languages in programming education. There is substantial research showing that these visual programming environments are suitable for teaching programming concepts. But, what do people do when they use Scratch? In this paper we explore the characteristics of Scratch programs. To this end we have scraped the Scratch public repository and retrieved 250,000 projects. We present an analysis of these projects in three different dimensions. Initially, we look at the types of blocks used and the size of the projects. We then investigate complexity, used abstractions and programming concepts. Finally we detect code smells such as large scripts, dead code and duplicated code blocks. Our results show that 1) most Scratch programs are small, however Scratch programs consisting of over 100 sprites exist, 2) programming abstraction concepts like procedures are not commonly used and 3) Scratch programs do suffer from code smells including large scripts and unmatched broadcast signals.", "title": "" }, { "docid": "a43a0f828859cc6f24881d26dacb63e6", "text": "The emergence in the field of fingerprint recognition witness several efficient techniques that propose matching and recognition in less time. The latent fingerprints posed a challenge for such efficient techniques that may deviates results from ideal to worse. The minutiae are considered as a discriminative feature of finger patterns which is assessed in almost every technique for recognition purpose. But in latent patterns such minutiae may be missed or may have contaminated noise. In this paper, we presents such work that demonstrate the solution for latent fingerprints recognition but in ideal time. We also gathered the description about the techniques that have been evaluated on standard NIST Special Dataset (SD)27 of latent fingerprint.", "title": "" }, { "docid": "34508dac189b31c210d461682fed9f67", "text": "Life is more than cat pictures. There are tough days, heartbreak, and hugs. Under what contexts do people share these feelings online, and how do their friends respond? Using millions of de-identified Facebook status updates with poster-annotated feelings (e.g., “feeling thankful” or “feeling worried”), we examine the magnitude and circumstances in which people share positive or negative feelings and characterize the nature of the responses they receive. We find that people share greater proportions of both positive and negative emotions when their friend networks are smaller and denser. Consistent with social sharing theory, hearing about a friend’s troubles on Facebook causes friends to reply with more emotional and supportive comments. Friends’ comments are also more numerous and longer. 
Posts with positive feelings, on the other hand, receive more likes, and their comments have more positive language. Feelings that relate to the poster’s self worth, such as “feeling defeated,” “feeling unloved,” or “feeling accomplished” amplify these effects.", "title": "" }, { "docid": "31d66211511ae35d71c7055a2abf2801", "text": "BACKGROUND\nPrevious evidence points to a causal link between playing action video games and enhanced cognition and perception. However, benefits of playing other video games are under-investigated. We examined whether playing non-action games also improves cognition. Hence, we compared transfer effects of an action and other non-action types that required different cognitive demands.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nWe instructed 5 groups of non-gamer participants to play one game each on a mobile device (iPhone/iPod Touch) for one hour a day/five days a week over four weeks (20 hours). Games included action, spatial memory, match-3, hidden- object, and an agent-based life simulation. Participants performed four behavioral tasks before and after video game training to assess for transfer effects. Tasks included an attentional blink task, a spatial memory and visual search dual task, a visual filter memory task to assess for multiple object tracking and cognitive control, as well as a complex verbal span task. Action game playing eliminated attentional blink and improved cognitive control and multiple-object tracking. Match-3, spatial memory and hidden object games improved visual search performance while the latter two also improved spatial working memory. Complex verbal span improved after match-3 and action game training.\n\n\nCONCLUSION/SIGNIFICANCE\nCognitive improvements were not limited to action game training alone and different games enhanced different aspects of cognition. We conclude that training specific cognitive abilities frequently in a video game improves performance in tasks that share common underlying demands. Overall, these results suggest that many video game-related cognitive improvements may not be due to training of general broad cognitive systems such as executive attentional control, but instead due to frequent utilization of specific cognitive processes during game play. Thus, many video game training related improvements to cognition may be attributed to near-transfer effects.", "title": "" }, { "docid": "3361e6c7a448e69a73e8b3e879815386", "text": "The neck is not only the first anatomical area to show aging but also contributes to the persona of the individual. The understanding the aging process of the neck is essential for neck rejuvenation. Multiple neck rejuvenation techniques have been reported in the literature. In 1974, Skoog [1] described the anatomy of the superficial musculoaponeurotic system (SMAS) and its role in the aging of the neck. Recently, many patients have expressed interest in minimally invasive surgery with a low risk of complications and short recovery period. The use of thread for neck rejuvenation and the concept of the suture suspension neck lift have become widespread as a convenient and effective procedure; nevertheless, complications have also been reported such as recurrence, inadequate correction, and palpability of the sutures. In this study, we analyzed a new type of thread lift: elastic lift that uses elastic thread (Elasticum; Korpo SRL, Genova, Italy). We already use this new technique for the midface lift and can confirm its efficacy and safety in that context. 
The purpose of this study was to evaluate the outcomes and safety of the elastic lift technique for neck region lifting.", "title": "" }, { "docid": "fc9b4cb8c37ffefde9d4a7fa819b9417", "text": "Automatic neural architecture design has shown its potential in discovering powerful neural network architectures. Existing methods, no matter based on reinforcement learning or evolutionary algorithms (EA), conduct architecture search in a discrete space, which is highly inefficient. In this paper, we propose a simple and efficient method to automatic neural architecture design based on continuous optimization. We call this new approach neural architecture optimization (NAO). There are three key components in our proposed approach: (1) An encoder embeds/maps neural network architectures into a continuous space. (2) A predictor takes the continuous representation of a network as input and predicts its accuracy. (3) A decoder maps a continuous representation of a network back to its architecture. The performance predictor and the encoder enable us to perform gradient based optimization in the continuous space to find the embedding of a new architecture with potentially better accuracy. Such a better embedding is then decoded to a network by the decoder. Experiments show that the architecture discovered by our method is very competitive for image classification task on CIFAR-10 and language modeling task on PTB, outperforming or on par with the best results of previous architecture search methods with a significantly reduction of computational resources. Specifically we obtain 2.11% test set error rate for CIFAR-10 image classification task and 56.0 test set perplexity of PTB language modeling task. The best discovered architectures on both tasks are successfully transferred to other tasks such as CIFAR-100 and WikiText-2. Furthermore, combined with the recent proposed weight sharing mechanism, we discover powerful architecture on CIFAR-10 (with error rate 3.53%) and on PTB (with test set perplexity 56.6), with very limited computational resources (less than 10 GPU hours) for both tasks.", "title": "" }, { "docid": "a9f2acbe4bd04abc678316970828ef6d", "text": "— Choosing a university is one of the most important decisions that affects future of young student. This decision requires considering a number of criteria not only numerical but also linguistic. Istanbul is the first alternative for young students' university choice in Turkey. As well as the state universities, the private universities are also so popular in this city. In this paper, a ranking method that manages to choice of university selection is created by using technique for order preference by similarity to ideal solution (TOPSIS) method based on type-2 fuzzy set. This method has been used for ranking private universities in Istanbul.", "title": "" }, { "docid": "78a38e1bdb15fc57d94a1d8ddd330459", "text": "One of the most powerful aspects of biological inquiry using model organisms is the ability to control gene expression. A holy grail is both temporal and spatial control of the expression of specific gene products - that is, the ability to express or withhold the activity of genes or their products in specific cells at specific times. Ideally such a method would also regulate the precise levels of gene activity, and alterations would be reversible. The related goal of controlled or purposefully randomized expression of visible markers is also tremendously powerful. 
While not all of these feats have been accomplished in Caenorhabditis elegans to date, much progress has been made, and recent technologies put these goals within closer reach. Here, I present published examples of successful two-component site-specific recombination in C. elegans. These technologies are based on the principle of controlled intra-molecular excision or inversion of DNA sequences between defined sites, as driven by FLP or Cre recombinases. I discuss several prospects for future applications of this technology.", "title": "" } ]
scidocsrr
81d208da1f8bc86a369e5608a8e6dd6b
Automated Attack Planning
[ { "docid": "822c41ec0b2da978233d59c8fd871936", "text": "We present a novel POMDP planning algorithm called heuristic search value iteration (HSVI). HSVI is an anytime algorithm that returns a policy and a provable bound on its regret with respect to the optimal policy. HSVI gets its power by combining two well-known techniques: attention-focusing search heuristics and piecewise linear convex representations of the value function. HSVI’s soundness and convergence have been proven. On some benchmark problems from the literature, HSVI displays speedups of greater than 100 with respect to other state-of-the-art POMDP value iteration algorithms. We also apply HSVI to a new rover exploration problem 10 times larger than most POMDP problems in the literature.", "title": "" } ]
[ { "docid": "9b2e025c6bb8461ddb076301003df0e4", "text": "People are sharing their opinions, stories and reviews through online video sharing websites every day. Studying sentiment and subjectivity in these opinion videos is experiencing a growing attention from academia and industry. While sentiment analysis has been successful for text, it is an understudied research question for videos and multimedia content. The biggest setbacks for studies in this direction are lack of a proper dataset, methodology, baselines and statistical analysis of how information from different modality sources relate to each other. This paper introduces to the scientific community the first opinion-level annotated corpus of sentiment and subjectivity analysis in online videos called Multimodal Opinionlevel Sentiment Intensity dataset (MOSI). The dataset is rigorously annotated with labels for subjectivity, sentiment intensity, per-frame and per-opinion annotated visual features, and per-milliseconds annotated audio features. Furthermore, we present baselines for future studies in this direction as well as a new multimodal fusion approach that jointly models spoken words and visual gestures.", "title": "" }, { "docid": "f8a89a023629fa9bcb2c3566b6817b0c", "text": "In this paper, we propose a robust on-the-fly estimator initialization algorithm to provide high-quality initial states for monocular visual-inertial systems (VINS). Due to the non-linearity of VINS, a poor initialization can severely impact the performance of either filtering-based or graph-based methods. Our approach starts with a vision-only structure from motion (SfM) to build the up-to-scale structure of camera poses and feature positions. By loosely aligning this structure with pre-integrated IMU measurements, our approach recovers the metric scale, velocity, gravity vector, and gyroscope bias, which are treated as initial values to bootstrap the nonlinear tightly-coupled optimization framework. We highlight that our approach can perform on-the-fly initialization in various scenarios without using any prior information about system states and movement. The performance of the proposed approach is verified through the public UAV dataset and real-time onboard experiment. We make our implementation open source, which is the initialization part integrated in the VINS-Mono1.", "title": "" }, { "docid": "d4f4939967b69eec9af8252759074820", "text": "Kernel methods are ubiquitous tools in machine learning. However, there is often little reason for the common practice of selecting a kernel a priori. Even if a universal approximating kernel is selected, the quality of the finite sample estimator may be greatly affected by the choice of kernel. Furthermore, when directly applying kernel methods, one typically needs to compute a N×N Gram matrix of pairwise kernel evaluations to work with a dataset of N instances. The computation of this Gram matrix precludes the direct application of kernel methods on large datasets, and makes kernel learning especially difficult. In this paper we introduce Bayesian nonparmetric kernel-learning (BaNK), a generic, data-driven framework for scalable learning of kernels. BaNK places a nonparametric prior on the spectral distribution of random frequencies allowing it to both learn kernels and scale to large datasets. We show that this framework can be used for large scale regression and classification tasks. 
Furthermore, we show that BaNK outperforms several other scalable approaches for kernel learning on a variety of real world datasets.", "title": "" }, { "docid": "a1e5885f0bc2feda1454f34efbcbedb2", "text": "tronomy. It is common practice for manufacturers of image acquisition devices to include dedicated image processing software, but these programs are usually not very flexible and/or do not allow more complex image manipulations. Image processing programs also are available by themselves. ImageJ holds a unique position because T he advances of the medical and biological sciences over recent years, and the growing importance of determining the relationships between structure and function, have made imaging an increasingly important discipline. The ubiquitousness of digital technology — from banal digital cameras to highly specific micro-CT scanners — has made images an essential part of a number of reAs the popularity of the ImageJ open-source, Java-based imaging program grows, its capabilities increase, too. It is now being used for imaging applications ranging from skin analysis to neuroscience. by Dr. Michael D. Abràmoff, University of Iowa Hospitals and Clinics; Dr. Paulo J. Magalhães, University of Padua; and Dr. Sunanda J. Ram, Louisiana State University Health Sciences Center Image Processing with ImageJ", "title": "" }, { "docid": "6543f2be14582b0c4d3fbd3185bc7771", "text": "Vital signs (i.e., heartbeat and respiration) are crucial physiological signals that are useful in numerous medical applications. The process of measuring these signals should be simple, reliable, and comfortable for patients. In this paper, a noncontact self-calibrating vital signs monitoring system based on the Doppler radar is presented. The system hardware and software were designed with a four-tiered layer structure. To enable accurate vital signs measurement, baseband signals in the radar sensor were modeled and a framework for signal demodulation was proposed. Specifically, a signal model identification method was formulated into a quadratically constrained l1 minimization problem and solved using the upper bound and linear matrix inequality (LMI) relaxations. The performance of the proposed system was comprehensively evaluated using three experimental sets, and the results indicated that this system can be used to effectively measure human vital signs.", "title": "" }, { "docid": "095f4ea337421d6e1310acf73977fdaa", "text": "We consider the problem of autonomous robotic laundry folding, and propose a solution to the perception and manipulation challenges inherent to the task. At the core of our approach is a quasi-static cloth model which allows us to neglect the complex dynamics of cloth under significant parts of the state space, allowing us to reason instead in terms of simple geometry. We present an algorithm which, given a 2D cloth polygon and a desired sequence of folds, outputs a motion plan for executing the corresponding manipulations, deemed g-folds, on a minimal number of robot grippers. We define parametrized fold sequences for four clothing categories: towels, pants, short-sleeved shirts, and long-sleeved shirts, each represented as polygons. We then devise a model-based optimization approach for visually inferring the class and pose of a spread-out or folded clothing article from a single image, such that the resulting polygon provides a parse suitable for these folding primitives. 
We test the manipulation and perception tasks individually, and combine them to implement an autonomous folding system on the Willow Garage PR2. This enables the PR2 to identify a clothing article spread out on a table, execute the computed folding sequence, and visually track its progress over successive folds.", "title": "" }, { "docid": "6a74c2d26f5125237929031cf1ccf204", "text": "Harnessing crowds can be a powerful mechanism for increasing innovation. However, current approaches to crowd innovation rely on large numbers of contributors generating ideas independently in an unstructured way. We introduce a new approach called distributed analogical idea generation, which aims to make idea generation more effective and less reliant on chance. Drawing from the literature in cognitive science on analogy and schema induction, our approach decomposes the creative process in a structured way amenable to using crowds. In three experiments we show that distributed analogical idea generation leads to better ideas than example-based approaches, and investigate the conditions under which crowds generate good schemas and ideas. Our results have implications for improving creativity and building systems for distributed crowd innovation.", "title": "" }, { "docid": "20f4bcde35458104271e9127d8b7f608", "text": "OBJECTIVES\nTo evaluate the effect of bulk-filling high C-factor posterior cavities on adhesion to cavity-bottom dentin.\n\n\nMETHODS\nA universal flowable composite (G-ænial Universal Flo, GC), a bulk-fill flowable base composite (SDR Posterior Bulk Fill Flowable Base, Dentsply) and a conventional paste-like composite (Z100, 3M ESPE) were bonded (G-ænial Bond, GC) into standardized cavities with different cavity configurations (C-factors), namely C=3.86 (Class-I cavity of 2.5mm deep, bulk-filled), C=5.57 (Class-I cavity of 4mm deep, bulk-filled), C=1.95 (Class-I cavity of 2.5mm deep, filled in three equal layers) and C=0.26 (flat surface). After one-week water storage, the restorations were sectioned in 4 rectangular micro-specimens and subjected to a micro-tensile bond strength (μTBS) test.\n\n\nRESULTS\nHighly significant differences were found between pairs of means of the experimental groups (Kruskal-Wallis, p<0.0001). Using the bulk-fill flowable base composite SDR (Dentsply), no significant differences in μTBS were measured among all cavity configurations (p>0.05). Using the universal flowable composite G-ænial Universal Flo (GC) and the conventional paste-like composite Z100 (3M ESPE), the μTBS to cavity-bottom dentin was not significantly different from that of SDR (Dentsply) when the cavities were layer-filled or the flat surface was build up in layers; it was however significantly lower when the Class-I cavities were filled in bulk, irrespective of cavity depth.\n\n\nSIGNIFICANCE\nThe filling technique and composite type may have a great impact on the adhesion of the composite, in particular in high C-factor cavities. While the bulk-fill flowable base composite provided satisfactory bond strengths regardless of filling technique and cavity depth, adhesion failed when conventional composites were used in bulk.", "title": "" }, { "docid": "531d387a14eefa6a8c45ad64039f29be", "text": "This paper presents an S-Transform based probabilistic neural network (PNN) classifier for recognition of power quality (PQ) disturbances. The proposed method requires less number of features as compared to wavelet based approach for the identification of PQ events. 
The features extracted through the S-Transform are trained by a PNN for automatic classification of the PQ events. Since the proposed methodology can reduce the features of the disturbance signal to a great extent without losing its original property, less memory space and learning PNN time are required for classification. Eleven types of disturbances are considered for the classification problem. The simulation results reveal that the combination of S-Transform and PNN can effectively detect and classify different PQ events. The classification performance of PNN is compared with a feedforward multilayer (FFML) neural network (NN) and learning vector quantization (LVQ) NN. It is found that the classification performance of PNN is better than both FFML and LVQ.", "title": "" }, { "docid": "0da1479719e63aa92d280dc627f3439d", "text": "This paper presents a low cost, precise and reliable inductive absolute position measurement system. It is suitable for rough industrial environments, offers a high inherent resolution (0.1 % to 0.01 % of antenna length), can measure target position over a wide measurement range and can potentially measure multiple target locations. The position resolution is improved by adding two additional finer pitched receive channels. The sensor works on principles similar to contactless resolvers. It consists of a rectangular antenna PCB and a passive LC resonance target. A mathematical model and the equivalent circuit of this kind of sensor is explained in detail. Such sensors suffer from transmitter to receiver coil capacitive crosstalk, which results in a phase sensitive offset. This crosstalk will be analyzed by a mathematical model and will be verified by measurements. Moreover, the mechanical transducer arrangement, the measurement setup and measured results will be presented.", "title": "" }, { "docid": "d87f336cc82cbd29df1f04095d98a7fb", "text": "The academic publishing world is changing significantly, with ever-growing numbers of publications each year and shifting publishing patterns. However, the metrics used to measure academic success, such as the number of publications, citation number, and impact factor, have not changed for decades. Moreover, recent studies indicate that these metrics have become targets and follow Goodhart’s Law, according to which “when a measure becomes a target, it ceases to be a good measure.” In this study, we analyzed over 120 million papers to examine how the academic publishing world has evolved over the last century. Our study shows that the validity of citation-based measures is being compromised and their usefulness is lessening. In particular, the number of publications has ceased to be a good metric as a result of longer author lists, shorter papers, and surging publication numbers. Citation-based metrics, such citation number and h-index, are likewise affected by the flood of papers, self-citations, and lengthy reference lists. Measures such as a journal’s impact factor have also ceased to be good metrics due to the soaring numbers of papers that are published in top journals, particularly from the same pool of authors. Moreover, by analyzing properties of over 2600 research fields, we observed that citation-based metrics are not beneficial for comparing researchers in different fields, or even in the same department. Academic publishing has changed considerably; now we need to reconsider how we measure success. 
", "title": "" }, { "docid": "8f917c8bde6f775c7421e72563abc34c", "text": "Cognitive radio techniques allow secondary users (SU's) to opportunistically access underutilized primary channels that are licensed to primary users. We consider a group of SU's with limited spectrum sensing capabilities working cooperatively to find primary channel spectrum holes. The objective is to design the optimal sensing and access policies that maximize the total secondary throughput on primary channels accrued over time. Although the problem can be formulated as a Partially Observable Markov Decision Process (POMDP), the optimal solutions are intractable. Instead, we find the optimal sensing policy within the class of myopic policies. Compared to other existing approaches, our policy is more realistic because it explicitly assigns SU's to sense specific primary channels by taking into account spatial and temporal variations of primary channels. Contributions: (1) formulation of a centralized spectrum sensing/access architecture that allows exploitation of all available primary spectrum holes; and (2) proposing sub-optimal myopic sensing policies with low-complexity implementations and performance close to the myopic policy. We show that our proposed sensing/access policy is close to the optimal POMDP solution and outperforms other proposed strategies. We also propose a Hidden Markov Model based algorithm to estimate the parameters of primary channel Markov models with a linear complexity.", "title": "" }, { "docid": "a1b20560bbd6124db8fc8b418cd1342c", "text": "Feature selection is often an essential data processing step prior to applying a learning algorithm. The removal of irrelevant and redundant information often improves the performance of machine learning algorithms. There are two common approaches: a wrapper uses the intended learning algorithm itself to evaluate the usefulness of features, while a filter evaluates features according to heuristics based on general characteristics of the data. The wrapper approach is generally considered to produce better feature subsets but runs much more slowly than a filter. This paper describes a new filter approach to feature selection that uses a correlation based heuristic to evaluate the worth of feature subsets. When applied as a data preprocessing step for two common machine learning algorithms, the new method compares favourably with the wrapper but requires much less computation.", "title": "" }, { "docid": "9b3a9613406bd15cf6d14861ee67a144", "text": "Introduction. Electrical stimulation is used in experimental human pain models. The aim was to develop a model that visualizes the distribution of electrical field in the esophagus close to ring and patch electrodes mounted on an esophageal catheter and to explain the obtained sensory responses. Methods. Electrical field distribution in esophageal layers (mucosa, muscle layers, and surrounding tissue) was computed using a finite element model based on a 3D model. Each layer was assigned different electrical properties. An electrical field exceeding 20 V/m was considered to activate the esophageal afferents. Results. The model output showed homogeneous and symmetrical field surrounding ring electrodes compared to a saddle-shaped field around patch electrodes. Increasing interelectrode distance enlarged the electrical field in muscle layer. Conclusion. 
Ring electrodes with 10 mm interelectrode distance seem optimal for future catheter designs. Though the model needs further validation, the results seem useful for electrode designs and understanding of electrical stimulation patterns.", "title": "" }, { "docid": "23670ac6fb88e2f5d3a31badc6dc38f9", "text": "The purpose of this review article is to report on the recent developments and the performance level achieved in the strained-Si/SiGe material system. In the first part, the technology of the growth of a high-quality strained-Si layer on a relaxed, linear or step-graded SiGe buffer layer is reviewed. Characterization results of strained-Si films obtained with secondary ion mass spectroscopy, Rutherford backscattering spectroscopy, atomic force microscopy, spectroscopic ellipsometry and Raman spectroscopy are presented. Techniques for the determination of bandgap parameters from electrical characterization of metal–oxide–semiconductor (MOS) structures on strained-Si film are discussed. In the second part, processing issues of strained-Si films in conventional Si technology with low thermal budget are critically reviewed. Thermal and low-temperature microwave plasma oxidation and nitridation of strained-Si layers are discussed. Some recent results on contact metallization of strained-Si using Ti and Pt are presented. In the last part, device applications of strained Si with special emphasis on heterostructure metal oxide semiconductor field effect transistors and modulation-doped field effect transistors are discussed. Design aspects and simulation results of nand p-MOS devices with a strained-Si channel are presented. Possible future applications of strained-Si/SiGe in high-performance SiGe CMOS technology are indicated.", "title": "" }, { "docid": "03d02a52eb1ed03a61fe05668cfe8166", "text": "The complexity of the world around us is creating a demand for novel interfaces that will simplify and enhance the way we interact with the environment. The recently unveiled Android Wear operating system addresses this demand by providing a modern system for all those companies that are now developing wearable devices, also known as \"wearables\". Wearability of robotic devices will enable novel forms of human intention recognition through haptic signals and novel forms of communication between humans and robots. Specifically, wearable haptics will enable devices to communicate with humans during their interaction with the environment they share. Wearable haptic technology have been introduced in our everyday life by Sony. In 1997 its DualShock controller for PlayStation revolutionized the gaming industry by introducing a simple but effective vibrotactile feedback. More recently, Apple unveiled the Apple Watch, which embeds a linear actuator that can make the watch vibrate. It is used whenever the wearer receives an alert or notification, or to communicate with other Apple Watch owners.", "title": "" }, { "docid": "3db4d7a83afbbadbafe3d1c4fddf51a0", "text": "A Successive approximation analog to digital converter (ADC) for data acquisition using fully CMOS high speed self-biased comparator circuit is discussed in this paper. ASIC finds greater demand when area and speed optimization are major concern and here the entire optimized design is done in CADENCE virtuoso EDA tool in 180nm technology. Towerjazz semiconductor foundry is the base for layout design and GDSII extraction. 
Comparison of different DAC architecture and the precise architecture with minimum DNL and INL are chosen for the design procedure. This paper describes the design of a fully customized 9 bit SAR ADC with input voltage ranging from 0 to 2.5V and sampling frequency 16.67 KHz. Hspice simulators is used for the simulations. Keywords— SAR ADC, Comparator, CADENCE, CMOS, DAC. [1] INTRODUCTION With the development of sensors, portable devices and high speed computing systems, comparable growth is seen in the optimization of Analog to digital converters (ADC) to assist in the technology growth. All the natural signals are analog and the present digital world require the signal in digital format for storing, processing and transmitting and thereby ADC becomes an integral part of almost all electronic devices 8 . This leads to the need for power, area and speed optimized design of ADCs. There are different ADC architectures like Flash ADC, SAR ADC, sigma-delta ADC etc., with each having its own pros and cons. The designer selects the desired architecture according to the requirements 1 . Flash ADC is the fasted ADC structure where the output is obtained in a single cycle but requires a large number of resistors and comparators for the design. For an N bit 2 flash ADC 2 N resistors and 2 N-1 comparators are required consuming large amount of area and power. Modifications are done on flash ADC to form pipelined flash ADC where the number of components can be reduced but the power consumption cannot be further reduced beyond a level. Sigma-delta ADC or integrating type of ADC is used when the resolution required is very high. This is the slowest architecture compared to other architectures. Design of sigma-delta requires analog design of integrator circuit making its design complex. SAR ADC architecture gives the output in N cycles for an N-bit ADC. SAR ADC being one of the pioneer ADC architecture is been commonly used due to its good trade-off between area, power and speed, which is the required criteria for CMOS deep submicron circuits. SAR ADC consists of a Track and Hold (TH) circuit, comparator, DAC and a SAR register and control logic. Figure 1 shows the block diagram of a SAR ADC. This paper is organized into six sections. Section II describes the analog design of TH and comparator. Section III compares the DAC architecture. Section IV explains the SAR logic. Section V gives the simulation results and section VI is the conclusion. Fig 1 Block Diagram of SAR ADC [2] ANALOG DESIGN OF TH AND COMPARATOR A. Track and Hold In general, Sample and hold circuit or Track and Hold contain a switch and a capacitor. In the tracking mode, when the sampling signal (strobe pulse) is high and the switch is connected, it tracks the analog input signal 3 . Then, it holds the value when the sampling signal turns to low in the hold mode. In this case, sample and hold provides a constant voltage at the input of the ADC during conversion 7 . Figure 2 shows a simple Track and hold circuit with a NMOS transistor as switch. The capacitance value is selected as 100pF and aspect ratio of the transistor as 28 based on the design steps. Fig 2 Track and hold circuit B. 
Latched comparator Comparator with high resolution and high speed is the desired design criteria and here dynamic latched comparator topology and self-biased open loop comparator topology are studied and implemented. From the comparison results, the best topology considering speed and better resolution is selected. Figure 3 shows a latched comparator. Static latch consumes static power which is not attractive for low power applications. A major disadvantage of latch is low resolution. Fig 3 Latched comparator C. Self-biased open loop comparator A self-biased open loop comparator is a differential input high gain amplifier with an output stage. A current mirror acts as the load for the differential pair and converts the double ended circuit to a single ended. Since precise gain is not required for comparator circuit, no compensation techniques are required 4 . Figure 4 shows a self-biased open loop comparator. Schematic of the circuit implementation and simulation result shows that self-biased open loop comparator has better speed of operation compared to latched comparator. The simulation results are tabulated below in table 1. Thought there are two capacitors in open loop comparator resulting in more power consumption, speed of operation and resolution is better compared to latched comparator. So open loop comparator circuit is selected for the design advancement. Both the comparator design is done based of a specific output current and slew rate. Fig 4 Self-biased open loop comparator Table 1 Comparator simulation results: latched comparator: conversion time 426.6ns, 11 transistors, 4mv resolution, 80nw power; self-biased open loop comparator: conversion time 712.7ns, 10 transistors, 15mv resolution, 58nw power. [3] DAC ARCHITECTURE A. R-2R DAC The digital data bits are entered through the input lines (d0 to d(N-1)) which is to be converted to an equivalent analog voltage (Vout) using R/2R resistor network 5 . The R/2R network is built by a set of resistors of two values, with values of one sets being twice of the other. Here for simulation purpose 1K and 2K resistors are used, there by resulting R/2R ratio. Accuracy or precision of DAC depends on the values of resistors chosen, higher precision can be obtained with an exact match of the R/2R ratio. B. C-2C DAC The schematic diagram of 3bit C2C ladder is shown in figure 4.3 which is similar to that of the R2R type. The capacitor value selected as 20 fF and 40 fF for C and 2C respectively such that the impedance value of C is twice that of 2C. C. Charge scaling DAC The voltage division principle is same as that of C-2C 6 . The value of unit capacitance is selected as 20fF for the simulation purpose. In order to obtain precision between the capacitance value parallel combinations of unit capacitance is implemented for the binary weighted value. Compared to C-2C the capacitance area is considerably large. DAC type Integral Non-Linearity INL Differential Non-Linearity DNL Offset", "title": "" }, { "docid": "8a33040d6464f7792b3eeee1e0760925", "text": "We live in a data abundance era. Availability of large volume of diverse multimedia data streams (ranging from video, to tweets, to activity, and to PM2.5) can now be used to solve many critical societal problems. 
Causal modeling across multimedia data streams is essential to reap the potential of this data. However, effective frameworks combining formal abstract approaches with practical computational algorithms for causal inference from such data are needed to utilize available data from diverse sensors. We propose a causal modeling framework that builds on data-driven techniques while emphasizing and including the appropriate human knowledge in causal inference. We show that this formal framework can help in designing a causal model with a systematic approach that facilitates framing sharper scientific questions, incorporating expert's knowledge as causal assumptions, and evaluating the plausibility of these assumptions. We show the applicability of the framework in a an important Asthma management application using meteorological and pollution data streams.", "title": "" }, { "docid": "390ebc9975960ff7a817efc8412bd8da", "text": "OBJECTIVE\nPhysical activity is critical for health, yet only about half of the U.S. adult population meets basic aerobic physical activity recommendations and almost a third are inactive. Mindfulness meditation is gaining attention for its potential to facilitate health-promoting behavior and may address some limitations of existing interventions for physical activity. However, little evidence exists on mindfulness meditation and physical activity. This study assessed whether mindfulness meditation is uniquely associated with physical activity in a nationally representative sample.\n\n\nMETHOD\nCross-sectional data from the adult sample (N = 34,525) of the 2012 National Health Interview Survey were analyzed. Logistic regression models tested whether past-year use of mindfulness meditation was associated with (a) inactivity and (b) meeting aerobic physical activity recommendations, after accounting for sociodemographics, another health-promoting behavior, and 2 other types of meditation. Data were weighted to represent the U.S. civilian, noninstitutionalized adult population.\n\n\nRESULTS\nAccounting for covariates, U.S. adults who practiced mindfulness meditation in the past year were less likely to be inactive and more likely to meet physical activity recommendations. Mindfulness meditation showed stronger associations with these indices of physical activity than the 2 other types of meditation.\n\n\nCONCLUSIONS\nThese results suggest that mindfulness meditation specifically, beyond meditation in general, is associated with physical activity in U.S adults. Future research should test whether intervening with mindfulness meditation-either as an adjunctive component or on its own-helps to increase or maintain physical activity. (PsycINFO Database Record", "title": "" } ]
scidocsrr
804095b9fb79beead40386361f793579
ISLES 2015 - A public evaluation benchmark for ischemic stroke lesion segmentation from multispectral MRI
[ { "docid": "15ef258e08dcc0fe0298c089fbf5ae1c", "text": "In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients - manually annotated by up to four raters - and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.", "title": "" } ]
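The benchmark passage above reports that fusing several good segmentation algorithms with a hierarchical majority vote consistently ranked above every individual method. As a rough, non-hierarchical sketch of that idea only, the snippet below fuses co-registered binary masks voxel-wise; the exact fusion hierarchy used in the benchmark is not specified here and the equal-shape binary-mask assumption is mine.

```python
import numpy as np

def majority_vote(masks):
    """Fuse binary segmentation masks by per-voxel majority vote.

    masks : list of equally shaped numpy arrays with values in {0, 1}.
    A voxel is labeled foreground if more than half of the inputs mark it so.
    """
    stack = np.stack(masks, axis=0).astype(np.int32)
    votes = stack.sum(axis=0)                      # number of raters saying "lesion"
    return (votes * 2 > len(masks)).astype(np.uint8)

if __name__ == "__main__":
    a = np.array([[1, 0], [1, 1]])
    b = np.array([[1, 0], [0, 1]])
    c = np.array([[0, 0], [1, 1]])
    print(majority_vote([a, b, c]))  # [[1 0] [1 1]]
```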
[ { "docid": "f30caea55cb1800a569a2649d1f8e388", "text": "Naive Bayes (NB) is a popular machine learning tool for classification, due to its simplicity, high computational efficiency, and good classification accuracy, especially for high dimensional data such as texts. In reality, the pronounced advantage of NB is often challenged by the strong conditional independence assumption between attributes, which may deteriorate the classification performance. Accordingly, numerous efforts have been made to improve NB, by using approaches such as structure extension, attribute selection, attribute weighting, instance weighting, local learning and so on. In this paper, we propose a new Artificial Immune System (AIS) based self-adaptive attribute weighting method for Naive Bayes classification. The proposed method, namely AISWNB, uses immunity theory in artificial immune systems to search optimal attribute weight values, where self-adjusted weight values will alleviate the conditional independence assumption and help calculate the conditional probability in an accurate way. One noticeable advantage of AISWNB is that the unique immune system based evolutionary computation process, including initialization, clone, selection, and mutation, ensures that AISWNB can adjust itself to the data without explicit specification of functional or distributional forms of the underlying model. As a result, AISWNB can obtain good attribute weight values during the learning process. Experiments and comparisons on 36 machine learning benchmark data sets and six image classification data sets demonstrate that AISWNB significantly outperforms its peers in classification accuracy, class probability estimation, and class ranking performance.", "title": "" }, { "docid": "19075b16bbae94d024e4cdeaa7f6427e", "text": "Nutrient timing is a popular nutritional strategy that involves the consumption of combinations of nutrients--primarily protein and carbohydrate--in and around an exercise session. Some have claimed that this approach can produce dramatic improvements in body composition. It has even been postulated that the timing of nutritional consumption may be more important than the absolute daily intake of nutrients. The post-exercise period is widely considered the most critical part of nutrient timing. Theoretically, consuming the proper ratio of nutrients during this time not only initiates the rebuilding of damaged muscle tissue and restoration of energy reserves, but it does so in a supercompensated fashion that enhances both body composition and exercise performance. Several researchers have made reference to an anabolic "window of opportunity" whereby a limited time exists after training to optimize training-related muscular adaptations. However, the importance - and even the existence - of a post-exercise 'window' can vary according to a number of factors. Not only is nutrient timing research open to question in terms of applicability, but recent evidence has directly challenged the classical view of the relevance of post-exercise nutritional intake with respect to anabolism.
Therefore, the purpose of this paper will be twofold: 1) to review the existing literature on the effects of nutrient timing with respect to post-exercise muscular adaptations, and 2) to draw relevant conclusions that allow practical, evidence-based nutritional recommendations to be made for maximizing the anabolic response to exercise.", "title": "" }, { "docid": "e1b39e972eff71eb44b39f37e7a7b2f3", "text": "The maximum mean discrepancy (MMD) is a recently proposed test statistic for the two-sample test. Its quadratic time complexity, however, greatly hampers its availability to large-scale applications. To accelerate the MMD calculation, in this study we propose an efficient method called FastMMD. The core idea of FastMMD is to equivalently transform the MMD with shift-invariant kernels into the amplitude expectation of a linear combination of sinusoid components based on Bochner's theorem and Fourier transform (Rahimi & Recht, 2007). Taking advantage of sampling the Fourier transform, FastMMD decreases the time complexity for MMD calculation from O(N²d) to O(LNd), where N and d are the size and dimension of the sample set, respectively. Here, L is the number of basis functions for approximating kernels that determines the approximation accuracy. For kernels that are spherically invariant, the computation can be further accelerated to O(LN log d) by using the Fastfood technique (Le, Sarlós, & Smola, 2013). The uniform convergence of our method has also been theoretically proved in both unbiased and biased estimates. We also provide a geometric explanation for our method, ensemble of circular discrepancy, which helps us understand the insight of MMD and we hope will lead to more extensive metrics for assessing the two-sample test task. Experimental results substantiate that the accuracy of FastMMD is similar to that of MMD and with faster computation and lower variance than existing MMD approximation methods.", "title": "" }, { "docid": "718e31eabfd386768353f9b75d9714eb", "text": "The mathematical structure of Sudoku puzzles is akin to hard constraint satisfaction problems lying at the basis of many applications, including protein folding and the ground-state problem of glassy spin systems. Via an exact mapping of Sudoku into a deterministic, continuous-time dynamical system, here we show that the difficulty of Sudoku translates into transient chaotic behavior exhibited by this system. We also show that the escape rate κ, an invariant of transient chaos, provides a scalar measure of the puzzle's hardness that correlates well with human difficulty ratings. Accordingly, η = -log₁₀κ can be used to define a "Richter"-type scale for puzzle hardness, with easy puzzles having 0 < η ≤ 1, medium ones 1 < η ≤ 2, hard with 2 < η ≤ 3 and ultra-hard with η > 3. To our best knowledge, there are no known puzzles with η > 4.", "title": "" }, { "docid": "002abd54753db9928d8e6832d3358084", "text": "State-of-the-art semantic role labelling systems require large annotated corpora to achieve full performance. Unfortunately, such corpora are expensive to produce and often do not generalize well across domains. Even in domain, errors are often made where syntactic information does not provide sufficient cues. In this paper, we mitigate both of these problems by employing distributional word representations gathered from unlabelled data.
While straight-forward word representations of predicates and arguments improve performance, we show that further gains are achieved by composing representations that model the interaction between predicate and argument, and capture full argument spans.", "title": "" }, { "docid": "948257544ca485b689d8663aaba63c5d", "text": "This paper presents a new single-pass shadow mapping technique that achieves better quality than the approaches based on perspective warping, such as perspective, light-space, and trapezoidal shadow maps. The proposed technique is appropriate for real-time rendering of large virtual environments that include dynamic objects. By performing operations in camera space, this solution successfully handles the general and the dueling frustum cases and produces high-quality shadows even for extremely large scenes. This paper also presents a fast nonlinear projection technique for shadow map stretching that enables complete utilization of the shadow map by eliminating wastage. The application of stretching results in a significant reduction in unwanted perspective aliasing, commonly found in all shadow mapping techniques. Technique is compared with other shadow mapping techniques, and the benefits of the proposed method are presented. The proposed shadow mapping technique is simple and flexible enough to handle most of the special scenarios. An API for a generic shadow mapping solution is presented. This API simplifies the generation of fast and high-quality shadows.", "title": "" }, { "docid": "c6dd897653486add8699828a2a1f9ffb", "text": "Everyone wants to know one thing about a test suite: will it detect enough bugs? Unfortunately, in most settings that matter, answering this question directly is impractical or impossible. Software engineers and researchers therefore tend to rely on various measures of code coverage (where mutation testing is considered a form of syntactic coverage). A long line of academic research efforts have attempted to determine whether relying on coverage as a substitute for fault detection is a reasonable solution to the problems of test suite evaluation. This essay argues that the profusion of coverage-related literature is in part a sign of an underlying uncertainty as to what exactly it is that measuring coverage should achieve, as well as how we would know if it can, in fact, achieve it. We propose some solutions and mitigations, but the primary focus of this essay is to clarify the state of current confusions regarding this key problem for effective software testing.", "title": "" }, { "docid": "7462f38fa4f99595bdb04a4519f7d9e9", "text": "The use of Unmanned Aerial Vehicles (UAV) has been increasing over the last few years in many sorts of applications due mainly to the decreasing cost of this technology. One can see the use of the UAV in several civilian applications such as surveillance and search and rescue. Automatic detection of pedestrians in aerial images is a challenging task. The computing vision system must deal with many sources of variability in the aerial images captured with the UAV, e.g., low-resolution images of pedestrians, images captured at distinct angles due to the degrees of freedom that a UAV can move, the camera platform possibly experiencing some instability while the UAV flies, among others. In this work, we created and evaluated different implementations of Pattern Recognition Systems (PRS) aiming at the automatic detection of pedestrians in aerial images captured with multirotor UAV. 
The main goal is to assess the feasibility and suitability of distinct PRS implementations running on top of low-cost computing platforms, e.g., single-board computers such as the Raspberry Pi or regular laptops without a GPU. For that, we used four machine learning techniques in the feature extraction and classification steps, namely Haar cascade, LBP cascade, HOG + SVM and Convolutional Neural Networks (CNN). In order to improve the system performance (especially the processing time) and also to decrease the rate of false alarms, we applied the Saliency Map (SM) and Thermal Image Processing (TIP) within the segmentation and detection steps of the PRS. The classification results show the CNN to be the best technique with 99.7% accuracy, followed by HOG + SVM with 92.3%. In situations of partial occlusion, the CNN showed 71.1% sensitivity, which can be considered a good result in comparison with the current state-of-the-art, since part of the original image data is missing. As demonstrated in the experiments, by combining TIP with CNN, the PRS can process more than two frames per second (fps), whereas the PRS that combines TIP with HOG + SVM was able to process 100 fps. It is important to mention that our experiments show that a trade-off analysis must be performed during the design of a pedestrian detection PRS. The faster implementations lead to a decrease in the PRS accuracy. For instance, by using HOG + SVM with TIP, the PRS presented the best performance results, but the obtained accuracy was 35 percentage points lower than the CNN. The obtained results indicate that the best detection technique (i.e., the CNN) requires more computational resources to decrease the PRS computation time. Therefore, this work shows and discusses the pros/cons of each technique and trade-off situations, and hence, one can use such an analysis to improve and tailor the design of a PRS to detect pedestrians in aerial images.", "title": "" }, { "docid": "d79f92819d5485f2631897befd686416", "text": "Information visualization is meant to support the analysis and comprehension of (often large) datasets through techniques intended to show/enhance features, patterns, clusters and trends, not always visible even when using a graphical representation. During the development of information visualization techniques the designer has to take into account the users' tasks to choose the graphical metaphor as well as the interactive methods to be provided. Testing and evaluating the usability of information visualization techniques are still a research question, and methodologies based on real or experimental users often yield significant results. To be comprehensive, however, experiments with users must rely on a set of tasks that covers the situations a real user will face when using the visualization tool. The present work reports and discusses the results of three case studies conducted as Multi-dimensional In-depth Long-term Case studies. The case studies were carried out to investigate MILCs-based usability evaluation methods for visualization tools.", "title": "" }, { "docid": "9ce1401e072fc09749d12f9132aa6b1e", "text": "In many applications based on the use of unmanned aerial vehicles (UAVs), it is possible to establish a cluster of UAVs in which each UAV knows the other vehicle's position. Assuming that the common channel condition between any two nodes of UAVs is line-of-sight (LOS), the time and energy consumption for data transmission on each path that connecting two nodes may be estimated by a node itself. 
In this paper, we use a modified Bellman-Ford algorithm to find the best selection of relay nodes in order to minimize the time and energy consumption for data transmission between any UAV node in the cluster and the UAV acting as the cluster head. This algorithm is applied with a proposed cooperative MAC protocol that is compatible with the IEEE 802.11 standard. The evaluations under data saturation conditions illustrate noticeable benefits in successful packet delivery ratio, average delay, and in particular the cost of time and energy.", "title": "" }, { "docid": "55dee5bdc4ff8225ef3997616af92320", "text": "Clustered regularly interspaced short palindromic repeats (CRISPR) are hypervariable loci widely distributed in prokaryotes that provide acquired immunity against foreign genetic elements. Here, we characterize a novel Streptococcus thermophilus locus, CRISPR3, and experimentally demonstrate its ability to integrate novel spacers in response to bacteriophage. Also, we analyze CRISPR diversity and activity across three distinct CRISPR loci in several S. thermophilus strains. We show that both CRISPR repeats and cas genes are locus specific and functionally coupled. A total of 124 strains were studied, and 109 unique spacer arrangements were observed across the three CRISPR loci. Overall, 3,626 spacers were analyzed, including 2,829 for CRISPR1 (782 unique), 173 for CRISPR2 (16 unique), and 624 for CRISPR3 (154 unique). Sequence analysis of the spacers revealed homology and identity to phage sequences (77%), plasmid sequences (16%), and S. thermophilus chromosomal sequences (7%). Polymorphisms were observed for the CRISPR repeats, CRISPR spacers, cas genes, CRISPR motif, locus architecture, and specific sequence content. Interestingly, CRISPR loci evolved both via polarized addition of novel spacers after exposure to foreign genetic elements and via internal deletion of spacers. We hypothesize that the level of diversity is correlated with relative CRISPR activity and propose that the activity is highest for CRISPR1, followed by CRISPR3, while CRISPR2 may be degenerate. Globally, the dynamic nature of CRISPR loci might prove valuable for typing and comparative analyses of strains and microbial populations. Also, CRISPRs provide critical insights into the relationships between prokaryotes and their environments, notably the coevolution of host and viral genomes.", "title": "" }, { "docid": "4f6f225f978bbf00c20f80538dc12aad", "text": "A smart building is created when it is engineered, delivered and operated smart. The Internet of Things (IoT) is advancing a new breed of smart buildings enables operational systems that deliver more accurate and useful information for improving operations and providing the best experiences for tenants. Big Data Analytics framework analyze building data to uncover new insight capable of driving real value and greater performance. Internet of Things technologies enhance the situational awareness or “smartness” of service providers and consumers alike. There is a need for an integrated IoT Big Data Analytics framework to fill the research gap in the Big Data Analytics domain. This paper also presents a novel approach for mobile phone centric observation applied to indoor localization for smart buildings. 
The applicability of the framework of this paper is demonstrated with the help of a scenario involving the analysis of real-time smart building data for automatically managing the oxygen level, luminosity and smoke/hazardous gases in different parts of the smart building. Lighting control in smart buildings and homes can be automated by having computer controlled lights and blinds along with illumination sensors that are distributed in the building. This paper gives an overview of an approach that algorithmically sets up the control system that can automate any building without custom programming. The resulting system controls blinds to ensure even lighting and also adds artificial illumination to ensure light coverage remains adequate at all times of the day, adjusting for weather and seasons. The key contribution of this paper is the complex integration of Big Data Analytics and IoT for addressing the large volume and velocity challenge of real-time data in the smart building domain.", "title": "" }, { "docid": "46de8aa53a304c3f66247fdccbe9b39f", "text": "The effect of pH and electrochemical potential on copper uptake, xanthate adsorption and the hydrophobicity of sphalerite were studied from flotation practice point of view using electrochemical and micro-flotation techniques. Voltammetric studies conducted using the combination of carbon matrix composite (CMC) electrode and surface conduction (SC) electrode show that the kinetics of activation increases with decreasing activating pH. Controlling potential contact angle measurements conducted on a copper-activated SC electrode in xanthate solution with different pHs show that, xanthate adsorption occurs at acidic and alkaline pHs and renders the mineral surface hydrophobic. At near neutral pH, although xanthate adsorbs on Cu:ZnS, the mineral surface is hydrophilic. Microflotation tests confirm this finding. Cleaning reagent was used to improve the flotation response of sphalerite at near neutral pH.", "title": "" }, { "docid": "ddd4ccf3d68d12036ebb9e5b89cb49b8", "text": "This paper presents a modified FastSLAM approach for the specific application of radar sensors using the Doppler information to increase the localization and map accuracy. The developed approach is based on the FastSLAM 2.0 algorithm. It is shown how the FastSLAM 2.0 approach can be significantly improved by taking the Doppler information into account. Therefore, the modelled, so-called expected Doppler, and the measured Doppler are compared for every detection. Both, simulations and experiments on real world data show the increase in accuracy of the modified FastSLAM approach by incorporating the Doppler measurements of automotive radar sensors. The proposed algorithm is compared to the state-of-the-art FastSLAM 2.0 algorithm and the vehicle odometry, whereas profiles of an Automotive Dynamic Motion Analyzer serve as the reference.", "title": "" }, { "docid": "1e8caa9f0a189bafebd65df092f918bc", "text": "For several decades, the role of hormone-replacement therapy (HRT) has been debated. Early observational data on HRT showed many benefits, including a reduction in coronary heart disease (CHD) and mortality. More recently, randomized trials, including the Women's Health Initiative (WHI), studying mostly women many years after the the onset of menopause, showed no such benefit and, indeed, an increased risk of CHD and breast cancer, which led to an abrupt decrease in the use of HRT. 
Subsequent reanalyzes of data from the WHI with age stratification, newer randomized and observational data and several meta-analyses now consistently show reductions in CHD and mortality when HRT is initiated soon after menopause. HRT also significantly decreases the incidence of various symptoms of menopause and the risk of osteoporotic fractures, and improves quality of life. In younger healthy women (aged 50–60 years), the risk–benefit balance is positive for using HRT, with risks considered rare. As no validated primary prevention strategies are available for younger women (<60 years of age), other than lifestyle management, some consideration might be given to HRT as a prevention strategy as treatment can reduce CHD and all-cause mortality. Although HRT should be primarily oestrogen-based, no particular HRT regimen can be advocated.", "title": "" }, { "docid": "502a948fbf73036a4a1546cdd4a04833", "text": "The literature review is an established research genre in many academic disciplines, including the IS discipline. Although many scholars agree that systematic literature reviews should be rigorous, few instructional texts for compiling a solid literature review, at least with regard to the IS discipline, exist. In response to this shortage, in this tutorial, I provide practical guidance for both students and researchers in the IS community who want to methodologically conduct qualitative literature reviews. The tutorial differs from other instructional texts in two regards. First, in contrast to most textbooks, I cover not only searching and synthesizing the literature but also the challenging tasks of framing the literature review, interpreting research findings, and proposing research paths. Second, I draw on other texts that provide guidelines for writing literature reviews in the IS discipline but use many examples of published literature reviews. I use an integrated example of a literature review, which guides the reader through the overall process of compiling a literature review.", "title": "" }, { "docid": "2d0c16376e71989031b99f3e5d79025c", "text": "In this paper, we present a novel and general network structure towards accelerating the inference process of convolutional neural networks, which is more complicated in network structure yet with less inference complexity. The core idea is to equip each original convolutional layer with another low-cost collaborative layer (LCCL), and the element-wise multiplication of the ReLU outputs of these two parallel layers produces the layer-wise output. The combined layer is potentially more discriminative than the original convolutional layer, and its inference is faster for two reasons: 1) the zero cells of the LCCL feature maps will remain zero after element-wise multiplication, and thus it is safe to skip the calculation of the corresponding high-cost convolution in the original convolutional layer, 2) LCCL is very fast if it is implemented as a 1*1 convolution or only a single filter shared by all channels. Extensive experiments on the CIFAR-10, CIFAR-100 and ILSCRC-2012 benchmarks show that our proposed network structure can accelerate the inference process by 32% on average with negligible performance drop.", "title": "" }, { "docid": "408f58b7dd6cb1e6be9060f112773888", "text": "Semantic hashing has become a powerful paradigm for fast similarity search in many information retrieval systems. While fairly successful, previous techniques generally require two-stage training, and the binary constraints are handled ad-hoc. 
In this paper, we present an end-to-end Neural Architecture for Semantic Hashing (NASH), where the binary hashing codes are treated as Bernoulli latent variables. A neural variational inference framework is proposed for training, where gradients are directly backpropagated through the discrete latent variable to optimize the hash function. We also draw connections between the proposed method and rate-distortion theory, which provides a theoretical foundation for the effectiveness of the proposed framework. Experimental results on three public datasets demonstrate that our method significantly outperforms several state-of-the-art models on both unsupervised and supervised scenarios.", "title": "" }, { "docid": "a53065d1cfb1fe898182d540d65d394b", "text": "This paper presents a novel approach for detecting affine invariant interest points. Our method can deal with significant affine transformations including large scale changes. Such transformations introduce significant changes in the point location as well as in the scale and the shape of the neighbourhood of an interest point. Our approach allows us to solve for these problems simultaneously. It is based on three key ideas: 1) The second moment matrix computed in a point can be used to normalize a region in an affine invariant way (skew and stretch). 2) The scale of the local structure is indicated by local extrema of normalized derivatives over scale. 3) An affine-adapted Harris detector determines the location of interest points. A multi-scale version of this detector is used for initialization. An iterative algorithm then modifies location, scale and neighbourhood of each point and converges to affine invariant points. For matching and recognition, the image is characterized by a set of affine invariant points; the affine transformation associated with each point allows the computation of an affine invariant descriptor which is also invariant to affine illumination changes. A quantitative comparison of our detector with existing ones shows a significant improvement in the presence of large affine deformations. Experimental results for wide baseline matching show an excellent performance in the presence of large perspective transformations including significant scale changes. Results for recognition are very good for a database with more than 5000 images.", "title": "" }, { "docid": "7435d1591725bbcd86fe93c607d5683c", "text": "This study evaluated the role of breast magnetic resonance (MR) imaging in the selective study of breast implant integrity. We retrospectively analysed the signs of breast implant rupture observed at breast MR examinations of 157 implants and determined the sensitivity and specificity of the technique in diagnosing implant rupture by comparing MR data with findings at surgical explantation. The linguine and the salad-oil signs were statistically the most significant signs for diagnosing intracapsular rupture; the presence of siliconomas/seromas outside the capsule and/or in the axillary lymph nodes calls for immediate explantation. In agreement with previous reports, we found a close correlation between imaging signs and findings at explantation. Breast MR imaging can be considered the gold standard in the study of breast implants. The aim of our work was to evaluate the role of breast magnetic resonance (MR) imaging in the selective study of breast implant integrity.
A retrospective evaluation was performed of the signs of rupture documented at the MR examinations carried out on 157 breast implants, in order to establish the sensitivity and specificity in the diagnosis of implant rupture by comparing the MR data with the findings observed in the operating room after removal of the implant. The linguine sign and the salad-oil sign proved to be the statistically most significant signs in the diagnosis of intracapsular implant rupture; the presence of extracapsular siliconomas/seromas and/or siliconomas/seromas in the axillary lymph nodes calls for immediate surgical removal of the ruptured implant. The data obtained demonstrate, in agreement with the literature, a correspondence between imaging signs and surgical findings, confirming the role of MR as the gold standard in the study of breast implants.", "title": "" } ]
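Among the passages above, the FastMMD abstract describes replacing the quadratic-cost kernel MMD with an expectation over random sinusoid components obtained from Bochner's theorem. The sketch below is a rough illustration of that idea for a Gaussian kernel using plain random Fourier features rather than the paper's exact estimator; the bandwidth sigma, the feature count L, and the biased squared-MMD form are all assumptions of this sketch.

```python
import numpy as np

def rff_mmd(x, y, sigma=1.0, num_features=128, seed=0):
    """Approximate the (biased) squared MMD between samples x and y.

    Uses L random Fourier features of a Gaussian kernel, so the cost is
    O(L*N*d) instead of the O(N^2*d) of the exact kernel computation.
    x, y : arrays of shape (n_samples, d).
    """
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    w = rng.normal(scale=1.0 / sigma, size=(d, num_features))   # Bochner frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)        # random phases

    def features(z):
        return np.sqrt(2.0 / num_features) * np.cos(z @ w + b)

    diff = features(x).mean(axis=0) - features(y).mean(axis=0)  # mean embedding gap
    return float(np.dot(diff, diff))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    same = rff_mmd(rng.normal(size=(500, 3)), rng.normal(size=(500, 3)))
    shifted = rff_mmd(rng.normal(size=(500, 3)), rng.normal(loc=1.0, size=(500, 3)))
    print(same, shifted)  # the shifted pair should give a clearly larger value
```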
scidocsrr
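The UAV relay passage in the record above selects relay nodes with a modified Bellman-Ford algorithm so that the time and energy cost of reaching the cluster head is minimized. The passage does not give its cost model, so the edge weights below are placeholders; what follows is only a textbook Bellman-Ford over a generic cost graph, from which the relay chain can be read back through the predecessor array.

```python
def bellman_ford(num_nodes, edges, source):
    """Single-source shortest paths with Bellman-Ford.

    edges : list of (u, v, cost) tuples; cost stands in for the combined
            time/energy expense of transmitting over link u -> v.
    Returns (dist, pred) arrays indexed by node id.
    """
    inf = float("inf")
    dist = [inf] * num_nodes
    pred = [None] * num_nodes
    dist[source] = 0.0
    for _ in range(num_nodes - 1):          # relax all edges |V|-1 times
        updated = False
        for u, v, cost in edges:
            if dist[u] + cost < dist[v]:
                dist[v] = dist[u] + cost
                pred[v] = u
                updated = True
        if not updated:
            break
    return dist, pred

if __name__ == "__main__":
    # Hypothetical 4-UAV cluster; node 0 is the cluster head, node 3 the sender.
    links = [(1, 0, 4.0), (2, 0, 9.0), (2, 1, 3.0), (3, 2, 2.0), (3, 1, 8.0)]
    print(bellman_ford(4, links, source=3))
```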
b865905fd2e1ec70274a97c1f9722c99
On Efficiency and Scalability of Software-Defined Infrastructure for Adaptive Applications
[ { "docid": "5cc26542d0f4602b2b257e19443839b3", "text": "Accurate performance evaluation of cloud computing resources is a necessary prerequisite for ensuring that quality of service parameters remain within agreed limits. In this paper, we employ both analytical and simulation modeling to address the complexity of cloud computing systems. The analytical model is comprised of distinct functional submodels, the results of which are combined in an iterative manner to obtain the solution with the required accuracy. Our models incorporate the important features of cloud centers such as batch arrival of user requests, resource virtualization, and realistic servicing steps, to obtain important performance metrics such as task blocking probability and total waiting time incurred on user requests. Also, our results reveal important insights for capacity planning to control the delay of servicing user requests.", "title": "" } ]
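The positive passage above derives metrics such as task blocking probability from interacting analytical sub-models, but its formulas are not reproduced here. Purely as an illustrative stand-in, and not the paper's actual model, the snippet below computes the blocking probability of a simple M/M/m/m loss system via the Erlang B recurrence, one of the classic building blocks for this kind of capacity-planning analysis; the offered load and server count are made-up numbers.

```python
def erlang_b(offered_load, servers):
    """Blocking probability of an M/M/m/m loss system (Erlang B).

    offered_load : lambda / mu, the traffic intensity in Erlangs.
    servers      : number of servers (e.g., virtual machines) m.
    Uses the standard recurrence B(0) = 1, B(k) = a*B(k-1) / (k + a*B(k-1)).
    """
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

if __name__ == "__main__":
    # Hypothetical cloud center: 50 Erlangs of offered load, 60 servers.
    print(f"blocking probability: {erlang_b(50.0, 60):.4f}")
```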
[ { "docid": "ef6adbe1c2a0863eb6447cebffaaf0fe", "text": "How best to evaluate a saliency model's ability to predict where humans look in images is an open research question. The choice of evaluation metric depends on how saliency is defined and how the ground truth is represented. Metrics differ in how they rank saliency models, and this results from how false positives and false negatives are treated, whether viewing biases are accounted for, whether spatial deviations are factored in, and how the saliency maps are pre-processed. In this paper, we provide an analysis of 8 different evaluation metrics and their properties. With the help of systematic experiments and visualizations of metric computations, we add interpretability to saliency scores and more transparency to the evaluation of saliency models. Building off the differences in metric properties and behaviors, we make recommendations for metric selections under specific assumptions and for specific applications.", "title": "" }, { "docid": "ad860674746dcf04156b3576174a9120", "text": "Predicting the popularity dynamics of Twitter hashtags has a broad spectrum of applications. Existing works have primarily focused on modeling the popularity of individual tweets rather than the underlying hashtags. As a result, they fail to consider several realistic factors contributing to hashtag popularity. In this paper, we propose Large Margin Point Process (LMPP), a probabilistic framework that integrates hashtag-tweet influence and hashtaghashtag competitions, the two factors which play important roles in hashtag propagation. Furthermore, while considering the hashtag competitions, LMPP looks into the variations of popularity rankings of the competing hashtags across time. Extensive experiments on seven real datasets demonstrate that LMPP outperforms existing popularity prediction approaches by a significant margin. Additionally, LMPP can accurately predict the relative rankings of competing hashtags, offering additional advantage over the state-of-the-art baselines.", "title": "" }, { "docid": "40c16b5db17fa31a1bdae7e66a297ea7", "text": "Code smells, i.e., symptoms of poor design and implementation choices applied by programmers during the development of a software project [2], represent an important factor contributing to technical debt [3]. The research community spent a lot of effort studying the extent to which code smells tend to remain in a software project for long periods of time [9], as well as their negative impact on non-functional properties of source code [4, 7]. As a consequence, several tools and techniques have been proposed to help developers in detecting code smells and to suggest refactoring opportunities (e.g., [5, 6, 8]).\n So far, almost all detectors identify code smells using structural properties of source code. However, recent studies have indicated that code smells detected by existing tools are generally ignored (and thus not refactored) by the developers [1]. A possible reason is that developers do not perceive the code smells identified by the tool as actual design problems or, if they do, they are not able to practically work on such code smells. In other words, there is misalignment between what is considered smelly by the tool and what is actually refactorable by developers.\n In a previous paper [6], we introduced a tool named TACO that uses textual analysis to detect code smells. 
The results indicated that textual and structural techniques are complementary: while some code smell instances in a software system can be correctly identified by both TACO and the alternative structural approaches, other instances can be only detected by one of the two [6].\n In this paper, we investigate whether code smells detected using textual information are as difficult to identify and refactor as structural smells or if they follow a different pattern during software evolution. We firstly performed a repository mining study considering 301 releases and 183,514 commits from 20 open source projects (i) to verify whether textually and structurally detected code smells are treated differently, and (ii) to analyze their likelihood of being resolved with regards to different types of code changes, e.g., refactoring operations. Since our quantitative study cannot explain relation and causation between code smell types and maintenance activities, we perform a qualitative study with 19 industrial developers and 5 software quality experts in order to understand (i) how code smells identified using different sources of information are perceived, and (ii) whether textually or structurally detected code smells are easier to refactor. In both studies, we focused on five code smell types, i.e., Blob, Feature Envy, Long Method, Misplaced Class, and Promiscuous Package.\n The results of our studies indicate that textually detected code smells are perceived as harmful as the structural ones, even though they do not exceed any typical software metrics' value (e.g., lines of code in a method). Moreover, design problems in source code affected by textual-based code smells are easier to identify and refactor. As a consequence, developers' activities tend to decrease the intensity of textual code smells, positively impacting their likelihood of being resolved. Vice versa, structural code smells typically increase in intensity over time, indicating that maintenance operations are not aimed at removing or limiting them. Indeed, while developers perceive source code affected by structural-based code smells as harmful, they face more problems in correctly identifying the actual design problems affecting these code components and/or the right refactoring operation to apply to remove them.", "title": "" }, { "docid": "0e1cc3ddf39c9fff13894cf1d924c8cc", "text": "This paper introduces NSGA-Net, an evolutionary approach for neural architecture search (NAS). NSGA-Net is designed with three goals in mind: (1) a NAS procedure for multiple, possibly conflicting, objectives, (2) efficient exploration and exploitation of the space of potential neural network architectures, and (3) output of a diverse set of network architectures spanning a trade-off frontier of the objectives in a single run. NSGA-Net is a population-based search algorithm that explores a space of potential neural network architectures in three steps, namely, a population initialization step that is based on prior-knowledge from hand-crafted architectures, an exploration step comprising crossover and mutation of architectures and finally an exploitation step that applies the entire history of evaluated neural architectures in the form of a Bayesian Network prior. 
Experimental results suggest that combining the objectives of minimizing both an error metric and computational complexity, as measured by FLOPS, allows NSGA-Net to find competitive neural architectures near the Pareto front of both objectives on two different tasks, object classification and object alignment. NSGA-Net obtains networks that achieve 3.72% (at 4.5 million FLOP) error on CIFAR-10 classification and 8.64% (at 26.6 million FLOP) error on the CMU-Car alignment task. Code available at: https://github.com/ianwhale/nsga-net.", "title": "" }, { "docid": "f7a42937973a45ed4fb5d23e3be316a9", "text": "Domain specific information retrieval process has been a prominent and ongoing research in the field of natural language processing. Many researchers have incorporated different techniques to overcome the technical and domain specificity and provide a mature model for various domains of interest. The main bottleneck in these studies is the heavy coupling of domain experts, that makes the entire process to be time consuming and cumbersome. In this study, we have developed three novel models which are compared against a golden standard generated via the on line repositories provided, specifically for the legal domain. The three different models incorporated vector space representations of the legal domain, where document vector generation was done in two different mechanisms and as an ensemble of the above two. This study contains the research being carried out in the process of representing legal case documents into different vector spaces, whilst incorporating semantic word measures and natural language processing techniques. The ensemble model built in this study, shows a significantly higher accuracy level, which indeed proves the need for incorporation of domain specific semantic similarity measures into the information retrieval process. This study also shows, the impact of varying distribution of the word similarity measures, against varying document vector dimensions, which can lead to improvements in the process of legal information retrieval. keywords: Document Embedding, Deep Learning, Information Retrieval", "title": "" }, { "docid": "fd2b1d2a4d44f0535ceb6602869ffe1c", "text": "A conventional FCM algorithm does not fully utilize the spatial information in the image. In this paper, we present a fuzzy c-means (FCM) algorithm that incorporates spatial information into the membership function for clustering. The spatial function is the summation of the membership function in the neighborhood of each pixel under consideration. The advantages of the new method are the following: (1) it yields regions more homogeneous than those of other methods, (2) it reduces the spurious blobs, (3) it removes noisy spots, and (4) it is less sensitive to noise than other techniques. This technique is a powerful method for noisy image segmentation and works for both single and multiple-feature data with spatial information.", "title": "" }, { "docid": "31346876446c21b92f088b852c0201b2", "text": "In this paper, the closed-form design method of an Nway dual-band Wilkinson hybrid power divider is proposed. This symmetric structure including N groups of two sections of transmission lines and two isolated resistors is described which can split a signal into N equiphase equiamplitude parts at two arbitrary frequencies (dual-band) simultaneously, where N can be odd or even. Based on the rigorous evenand odd-mode analysis, the closed-form design equations are derived. 
For verification, various numerical examples are designed, calculated and compared while two practical examples including two ways and three ways dual-band microstrip power dividers are fabricated and measured. It is very interesting that this generalized power divider with analytical design equations can be designed for wideband applications when the frequency-ratio is relatively small. In addition, it is found that the conventional N-way hybrid Wilkinson power divider for single-band applications is a special case (the frequency-ratio equals to 3) of this generalized power divider.", "title": "" }, { "docid": "26e90d8dca906c2e7dd023441ba4438a", "text": "In this paper, we show that the handedness of a planar chiral checkerboard-like metasurface can be dynamically switched by modulating the local sheet impedance of the metasurface structure. We propose a metasurface design to realize the handedness switching and theoretically analyze its electromagnetic characteristic based on Babinet’s principle. Numerical simulations of the proposed metasurface are performed to validate the theoretical analysis. It is demonstrated that the polarity of asymmetric transmission for circularly polarized waves, which is determined by the planar chirality of the metasurface, is inverted by switching the sheet impedance at the interconnection points of the checkerboard-like structure. The physical origin of the asymmetric transmission is also discussed in terms of the surface current and charge distributions on the metasurface.", "title": "" }, { "docid": "49db1291f3f52a09037d6cfd305e8b5f", "text": "This paper examines cognitive beliefs and affect influencing one’s intention to continue using (continuance) information systems (IS). Expectationconfirmation theory is adapted from the consumer behavior literature and integrated with theoretical and empirical findings from prior IS usage research to theorize a model of IS continuance. Five research hypotheses derived from this model are empirically validated using a field survey of online banking users. The results suggest that users’ continuance intention is determined by their satisfaction with IS use and perceived usefulness of continued IS use. User satisfaction, in turn, is influenced by their confirmation of expectation from prior IS use and perceived usefulness. Postacceptance perceived usefulness is influenced by Ron Weber was the accepting senior editor for this paper. users’ confirmation level. This study draws attention to the substantive differences between acceptance and continuance behaviors, theorizes and validates one of the earliest theoretical models of IS continuance, integrates confirmation and user satisfaction constructs within our current understanding of IS use, conceptualizes and creates an initial scale for measuring IS continuance, and offers an initial explanation for the acceptancediscontinuance anomaly.", "title": "" }, { "docid": "a74b091706f4aeb384d2bf3d477da67d", "text": "Amazon's Echo and its conversational agent Alexa open exciting opportunities for understanding how people perceive and interact with virtual agents. Drawing from user reviews of the Echo posted to Amazon.com, this case study explores the degree to which user reviews indicate personification of the device, sociability level of interactions, factors linked with personification, and influences on user satisfaction. 
Results indicate marked variance in how people refer to the device, with over half using the personified name Alexa but most referencing the device with object pronouns. Degree of device personification is linked with sociability of interactions: greater personification co-occurs with more social interactions with the Echo. Reviewers mentioning multiple member households are more likely to personify the device than reviewers mentioning living alone. Even after controlling for technical issues, personification predicts user satisfaction with the Echo.", "title": "" }, { "docid": "5f66a3faa36f273831b13b4345c2bf15", "text": "The structure of blood vessels in the sclera, the white part of the human eye, is unique for every individual, hence it is best suited for human identification. However, this is challenging research because it has a high insult rate (the number of occasions the valid user is rejected). In this survey, firstly, a brief introduction is presented about sclera-based biometric authentication. In addition, a literature survey is presented. We have proposed a simplified method for sclera segmentation, a new method for sclera pattern enhancement based on histogram equalization, and line descriptor based feature extraction and pattern matching with the help of the matching score between the two segment descriptors. We attempt to increase the awareness about this topic, as much of the research is not done in this area.", "title": "" }, { "docid": "c5cb0ae3102fcae584e666a1ba3e73ed", "text": "A new generation of computational cameras is emerging, spawned by the introduction of the Lytro light-field camera to the consumer market and recent accomplishments in the speed at which light can be captured. By exploiting the co-design of camera optics and computational processing, these cameras capture unprecedented details of the plenoptic function: a ray-based model for light that includes the color spectrum as well as spatial, temporal, and directional variation. Although digital light sensors have greatly evolved in the last years, the visual information captured by conventional cameras has remained almost unchanged since the invention of the daguerreotype. All standard CCD and CMOS sensors integrate over the dimensions of the plenoptic function as they convert photons into electrons. In the process, all visual information is irreversibly lost, except for a two-dimensional, spatially varying subset: the common photograph.\n This course reviews the plenoptic function and discusses approaches for optically encoding high-dimensional visual information that is then recovered computationally in post-processing. It begins with an overview of the plenoptic dimensions and shows how much of this visual information is irreversibly lost in conventional image acquisition. Then it discusses the state of the art in joint optical modulation and computation reconstruction for acquisition of high-dynamic-range imagery and spectral information. It unveils the secrets behind imaging techniques that have recently been featured in the news and outlines other aspects of light that are of interest for various applications before concluding with questions, answers, and a short discussion.", "title": "" }, { "docid": "9570975ee04cd1fc689a00b4499c22fc", "text": "Big Data is a phrase used to mean a massive volume of both structured and unstructured data that is so large it is difficult to process using traditional database and software techniques.
In most enterprise scenarios the volume of data is too big or it moves too fast or it exceeds current processing capacity. Cloud computing is a type of Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand. It is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources (e.g., computer networks, servers, storage, applications and services), [1][2] which can be rapidly provisioned and released with minimal management effort. Cloud computing and storage solutions provide users and enterprises with various capabilities to store and process their data in either privately owned, or third-party data centers [3] that may be located far from the user–ranging in distance from across a city to across the world. Cloud computing relies on sharing of resources to achieve coherence and economy of scale, similar to a utility (like the electricity grid) over an electricity network. This paper discusses approaches and environments for carrying out analytics on Clouds for Big Data applications. It revolves around four important areas of analytics and Big Data, namely (i) data management and supporting architectures; (ii) model development and scoring; (iii) visualisation and user interaction; and (iv) business models. Through a detailed survey, we identify possible gaps in technology and provide recommendations for the research community on future directions on Cloud-supported Big Data computing and analytics solutions.", "title": "" }, { "docid": "26992fcd5b560f11eb388d27d51527e9", "text": "The concept of digital twin, a kind of virtual things with the precise states of the corresponding physical systems, is suggested by industrial domains to accurately estimate the status and predict the operation of machines. Digital twin can be used for development of critical systems, such as self-driving cars and auto-production factories. There, however, will be so different digital twins in terms of resolution, complexity, modelling languages and formats. It is required to cooperate heterogeneous digital twins in standardized ways. Since a centralized digital twin system uses too big resources and energies, it is preferable to make large-scale digital twin system geographically and logically distributed over the Internet. In addition, efficient interworking functions between digital twins and the physical systems are required also. In this paper, we propose a novel architecture of large-scale digital twin platform including distributed digital twin cooperation framework, flexible data-centric communication middleware, and the platform based digital twin application to develop a reliable advanced driver assistance system.", "title": "" }, { "docid": "34acaf35585fe19fea86f6f3c8aa8a0f", "text": "This paper is concerned with deep reinforcement learning (deep RL) in continuous state and action space. It proposes a new method that can drastically speed up RL training for problems that have the property of state-action permissibility (SAP). This property says that after an action at is performed in a state st and the agent reaches the new state st+1, the agent can decide whether the action at is permissible or not permissible in state st . An action is not permissible in a state if the action can never lead to an optimal solution and thus should not have been tried. 
We incorporate the proposed method into a state-of-the-art deep RL algorithm to guide its training and apply it to solve the lane keeping (steering control) problem in self-driving or autonomous driving. It is shown that the proposed method can help speedup RL training markedly for the lane keeping task as compared to the RL algorithm without exploiting the SAP-based guidance and other baselines that employ constrained action space exploration strategies.", "title": "" }, { "docid": "532d5655281bf409dd6a44c1f875cd88", "text": "BACKGROUND\nOlder adults are at increased risk of experiencing loneliness and depression, particularly as they move into different types of care communities. Information and communication technology (ICT) usage may help older adults to maintain contact with social ties. However, prior research is not consistent about whether ICT use increases or decreases isolation and loneliness among older adults.\n\n\nOBJECTIVE\nThe purpose of this study was to examine how Internet use affects perceived social isolation and loneliness of older adults in assisted and independent living communities. We also examined the perceptions of how Internet use affects communication and social interaction.\n\n\nMETHODS\nOne wave of data from an ongoing study of ICT usage among older adults in assisted and independent living communities in Alabama was used. Regression analysis was used to determine the relationship between frequency of going online and isolation and loneliness (n=205) and perceptions of the effects of Internet use on communication and social interaction (n=60).\n\n\nRESULTS\nAfter controlling for the number of friends and family, physical/emotional social limitations, age, and study arm, a 1-point increase in the frequency of going online was associated with a 0.147-point decrease in loneliness scores (P=.005). Going online was not associated with perceived social isolation (P=.14). Among the measures of perception of the social effects of the Internet, each 1-point increase in the frequency of going online was associated with an increase in agreement that using the Internet had: (1) made it easier to reach people (b=0.508, P<.001), (2) contributed to the ability to stay in touch (b=0.516, P<.001), (3) made it easier to meet new people (b=0.297, P=.01, (4) increased the quantity of communication with others (b=0.306, P=.01), (5) made the respondent feel less isolated (b=0.491, P<.001), (6) helped the respondent feel more connected to friends and family (b=0.392, P=.001), and (7) increased the quality of communication with others (b=0.289, P=.01).\n\n\nCONCLUSIONS\nUsing the Internet may be beneficial for decreasing loneliness and increasing social contact among older adults in assisted and independent living communities.", "title": "" }, { "docid": "2f23d51ffd54a6502eea07883709d016", "text": "Named entity recognition (NER) is a popular domain of natural language processing. For this reason, many tools exist to perform this task. Amongst other points, they differ in the processing method they rely upon, the entity types they can detect, the nature of the text they can handle, and their input/output formats. This makes it difficult for a user to select an appropriate NER tool for a specific situation. In this article, we try to answer this question in the context of biographic texts. For this matter, we first constitute a new corpus by annotating 247 Wikipedia articles. 
We then select 4 publicly available, well known and free for research NER tools for comparison: Stanford NER, Illinois NET, OpenCalais NER WS and Alias-i LingPipe. We apply them to our corpus, assess their performances and compare them. When considering overall performances, a clear hierarchy emerges: Stanford has the best results, followed by LingPipe, Illionois and OpenCalais. However, a more detailed evaluation performed relatively to entity types and article categories highlights the fact their performances are diversely influenced by those factors. This complementarity opens an interesting perspective regarding the combination of these individual tools in order to improve performance.", "title": "" }, { "docid": "ed05b17a9d8a3e330b098a7b0b0dcd34", "text": "Accurate prediction of fault prone modules (a module is equivalent to a C function or a C+ + method) in software development process enables effective detection and identification of defects. Such prediction models are especially beneficial for large-scale systems, where verification experts need to focus their attention and resources to problem areas in the system under development. This paper presents a novel methodology for predicting fault prone modules, based on random forests. Random forests are an extension of decision tree learning. Instead of generating one decision tree, this methodology generates hundreds or even thousands of trees using subsets of the training data. Classification decision is obtained by voting. We applied random forests in five case studies based on NASA data sets. The prediction accuracy of the proposed methodology is generally higher than that achieved by logistic regression, discriminant analysis and the algorithms in two machine learning software packages, WEKA [I. H. Witten et al. (1999)] and See5. The difference in the performance of the proposed methodology over other methods is statistically significant. Further, the classification accuracy of random forests is more significant over other methods in larger data sets.", "title": "" }, { "docid": "149fa8c20c5656373930474237337b21", "text": "OBJECTIVES: To compare the predictive value of pH, base deficit and lactate for the occurrence of moderate-to-severe hypoxic ischaemic encephalopathy (HIE) and systemic complications of asphyxia in term infants with intrapartum asphyxia.STUDY DESIGN: We retrospectively reviewed the records of 61 full-term neonates (≥37 weeks gestation) suspected of having suffered from a significant degree of intrapartum asphyxia from a period of January 1997 to December 2001.The clinical signs of HIE, if any, were categorized using Sarnat and Sarnat classification as mild (stage 1), moderate (stage 2) or severe (stage 3). Base deficit, pH and plasma lactate levels were measured from indwelling arterial catheters within 1 hour after birth and thereafter alongwith every blood gas measurement. The results were correlated with the subsequent presence or absence of moderate-to-severe HIE by computing receiver operating characteristic curves.RESULTS: The initial lactate levels were significantly higher (p=0.001) in neonates with moderate-to-severe HIE (mean±SD=11.09±4.6) as compared to those with mild or no HIE (mean±SD=7.1±4.7). Also, the lactate levels took longer to normalize in these babies. A plasma lactate concentration >7.5±mmol/l was associated with moderate-or-severe HIE with a sensitivity of 94% and specificity of 67%. 
The sensitivity and negative predictive value of lactate was greater than that of the pH or base deficit.CONCLUSIONS: The highest recorded lactate level in the first hour of life and serial measurements of lactate are important predictors of moderate-to-severe HIE.", "title": "" }, { "docid": "2d05142e12f63a354ec0c48436cd3697", "text": "Author Name Disambiguation Neil R. Smalheiser and Vetle I. Torvik", "title": "" } ]
scidocsrr
f3b2ad7432bf90aa5661f18771fff878
A study of link farm distribution and evolution using a time series of web snapshots
[ { "docid": "880b4ce4c8fd19191cb996aceabdf5a7", "text": "The study of the web as a graph is not only fascinating in its own right, but also yields valuable insight into web algorithms for crawling, searching and community discovery, and the sociological phenomena which characterize its evolution. We report on experiments on local and global properties of the web graph using two Altavista crawls each with over 200 million pages and 1.5 billion links. Our study indicates that the macroscopic structure of the web is considerably more intricate than suggested by earlier experiments on a smaller scale.", "title": "" } ]
[ { "docid": "db31a8887bfc1b24c2d2c2177d4ef519", "text": "The equilibrium microstructure of a fluid may only be described exactly in terms of a complete set of n-body atomic distribution functions, where n is 1, 2, 3 , . . . , N, and N is the total number of particles in the system. The higher order functions, i. e. n > 2, are complex and practically inaccessible but con­ siderable qualitative information can already be derived from studies of the mean radial occupation function n(r) defined as the average number of atoms in a sphere of radius r centred on a particular atom. The function for a perfect gas of non-inter­ acting particles is", "title": "" }, { "docid": "40d9fb6ce396d3629f0406661b9bbd56", "text": "Internet traffic classification has been the subject of intensive study since the birth of the Internet itself. Indeed, the evolution of approaches for traffic classification can be associated with the evolution of the Internet itself and with the adoption of new services and the emergence of novel applications and communication paradigms. Throughout the years many approaches have been proposed for addressing technical issues imposed by such novel services. Deep-Packet Inspection (DPI) has been a very important research topic within the traffic classification field and its concept consists of the analysis of the contents of the captured packets in order to accurately and timely discriminate the traffic generated by different Internet protocols. DPI was devised as a means to address several issues associated with port-based and statistical-based classification approaches in order to achieve an accurate and timely traffic classification. Many research works proposed different DPI schemes while many open-source modules have also become available for deployment. Surveys become then valuable tools for performing an overall analysis, study and comparison between the several proposed methods. In this paper we present a survey in which a complete and thorough analysis of the most important open-source DPI modules is performed. Such analysis comprises an evaluation of the classification accuracy, through a common set of traffic traces with ground truth, and of the computational requirements. In this manner, this survey presents a technical assessment of DPI modules and the analysis of the obtained evaluation results enable the proposal of general guidelines for the design and implementation of more adequate DPI modules.", "title": "" }, { "docid": "9d241d577a06f7590af79c2444c91c9d", "text": "UNLABELLED\nResearch over the last few years has revealed significant haplotype structure in the human genome. The characterization of these patterns, particularly in the context of medical genetic association studies, is becoming a routine research activity. Haploview is a software package that provides computation of linkage disequilibrium statistics and population haplotype patterns from primary genotype data in a visually appealing and interactive interface.\n\n\nAVAILABILITY\nhttp://www.broad.mit.edu/mpg/haploview/\n\n\nCONTACT\njcbarret@broad.mit.edu", "title": "" }, { "docid": "e3f847a7c815772b909fcccbafed4af3", "text": "The contribution of tumorigenic stem cells to haematopoietic cancers has been established for some time, and cells possessing stem-cell properties have been described in several solid tumours. Although chemotherapy kills most cells in a tumour, it is believed to leave tumour stem cells behind, which might be an important mechanism of resistance. 
For example, the ATP-binding cassette (ABC) drug transporters have been shown to protect cancer stem cells from chemotherapeutic agents. Gaining a better insight into the mechanisms of stem-cell resistance to chemotherapy might therefore lead to new therapeutic targets and better anticancer strategies.", "title": "" }, { "docid": "7de99443d9d56dacb41d467609ef45cd", "text": "Aircraft detection from very high resolution (VHR) remote sensing images has been drawing increasing interest in recent years due to the successful civil and military applications. However, several challenges still exist: 1) extracting the high-level features and the hierarchical feature representations of the objects is difficult; 2) manual annotation of the objects in large image sets is generally expensive and sometimes unreliable; and 3) locating objects within such a large image is difficult and time consuming. In this paper, we propose a weakly supervised learning framework based on coupled convolutional neural networks (CNNs) for aircraft detection, which can simultaneously solve these problems. We first develop a CNN-based method to extract the high-level features and the hierarchical feature representations of the objects. We then employ an iterative weakly supervised learning framework to automatically mine and augment the training data set from the original image. We propose a coupled CNN method, which combines a candidate region proposal network and a localization network to extract the proposals and simultaneously locate the aircraft, which is more efficient and accurate, even in large-scale VHR images. In the experiments, the proposed method was applied to three challenging high-resolution data sets: the Sydney International Airport data set, the Tokyo Haneda Airport data set, and the Berlin Tegel Airport data set. The extensive experimental results confirm that the proposed method can achieve a higher detection accuracy than the other methods.", "title": "" }, { "docid": "fc94c6fb38198c726ab3b417c3fe9b44", "text": "Tremor is a rhythmical and involuntary oscillatory movement of a body part and it is one of the most common movement disorders. Orthotic devices have been under investigation as a noninvasive tremor suppression alternative to medication or surgery. The challenge in musculoskeletal tremor suppression is estimating and attenuating the tremor motion without impeding the patient's intentional motion. In this research a robust tremor suppression algorithm was derived for patients with pathological tremor in the upper limbs. First the motion in the tremor frequency range is estimated using a high-pass filter. Then, by applying the backstepping method the appropriate amount of torque is calculated to drive the output of the estimator toward zero. This is equivalent to an estimation of the tremor torque. It is shown that the arm/orthotic device control system is stable and the algorithm is robust despite inherent uncertainties in the open-loop human arm joint model. A human arm joint simulator, capable of emulating tremorous motion of a human arm joint was used to evaluate the proposed suppression algorithm experimentally for two types of tremor, Parkinson and essential. Experimental results show 30-42 dB (97.5-99.2%) suppression of tremor with minimal effect on the intentional motion.", "title": "" }, { "docid": "dd4a95cea1cdc0351276368d5228bb6e", "text": "Shape reconstruction from raw point sets is a hot research topic. 
Point sets are increasingly available as primary input source, since low-cost acquisition methods are largely accessible nowadays, and these sets are more noisy than used to be. Standard reconstruction methods rely on normals or signed distance functions, and thus many methods aim at estimating these features. Human vision can however easily discern between the inside and the outside of a dense cloud even without the support of fancy measures. We propose, here, a perceptual method for estimating an indicator function for the shape, inspired from image-based methods. The resulting function nicely approximates the shape, is robust to noise, and can be used for direct isosurface extraction or as an input for other accurate reconstruction methods.", "title": "" }, { "docid": "53e7c26ce6abc85d721b2f1661d1c3c0", "text": "For the detail mapping there are multiple methods that can be used. In Battlefield 2, a 256 m patch of the terrain could have up to six different tiling detail maps that were blended together using one or two three-component unique detail mask textures (Figure 4) that controlled the visibility of the individual detail maps. Artists would paint or generate the detail masks just as for the color map.", "title": "" }, { "docid": "dcc7f48a828556808dc435deda5c1281", "text": "Object detection and segmentation represents the basis for many tasks in computer and machine vision. In biometric recognition systems the detection of the region-of-interest (ROI) is one of the most crucial steps in the overall processing pipeline, significantly impacting the performance of the entire recognition system. Existing approaches to ear detection, for example, are commonly susceptible to the presence of severe occlusions, ear accessories or variable illumination conditions and often deteriorate in their performance if applied on ear images captured in unconstrained settings. To address these shortcomings, we present in this paper a novel ear detection technique based on convolutional encoder-decoder networks (CEDs). For our technique, we formulate the problem of ear detection as a two-class segmentation problem and train a convolutional encoder-decoder network based on the SegNet architecture to distinguish between image-pixels belonging to either the ear or the non-ear class. The output of the network is then post-processed to further refine the segmentation result and return the final locations of the ears in the input image. Different from competing techniques from the literature, our approach does not simply return a bounding box around the detected ear, but provides detailed, pixel-wise information about the location of the ears in the image. Our experiments on a dataset gathered from the web (a.k.a. in the wild) show that the proposed technique ensures good detection results in the presence of various covariate factors and significantly outperforms the existing state-of-the-art.", "title": "" }, { "docid": "3cbc035529138be1d6f8f66a637584dd", "text": "Regression models such as the Cox proportional hazards model have had increasing use in modelling and estimating the prognosis of patients with a variety of diseases. Many applications involve a large number of variables to be modelled using a relatively small patient sample. Problems of overfitting and of identifying important covariates are exacerbated in analysing prognosis because the accuracy of a model is more a function of the number of events than of the sample size. 
We used a general index of predictive discrimination to measure the ability of a model developed on training samples of varying sizes to predict survival in an independent test sample of patients suspected of having coronary artery disease. We compared three methods of model fitting: (1) standard 'step-up' variable selection, (2) incomplete principal components regression, and (3) Cox model regression after developing clinical indices from variable clusters. We found regression using principal components to offer superior predictions in the test sample, whereas regression using indices offers easily interpretable models nearly as good as the principal components models. Standard variable selection has a number of deficiencies.", "title": "" }, { "docid": "725e92f13cc7c03b890b5d2e7380b321", "text": "Developing algorithms for solving high-dimensional partial differential equations (PDEs) has been an exceedingly difficult task for a long time, due to the notoriously difficult problem known as “the curse of dimensionality”. This paper presents a deep learning-based approach that can handle general high-dimensional parabolic PDEs. To this end, the PDEs are reformulated as a control theory problem and the gradient of the unknown solution is approximated by neural networks, very much in the spirit of deep reinforcement learning with the gradient acting as the policy function. Numerical results on examples including the nonlinear Black-Scholes equation, the Hamilton-Jacobi-Bellman equation, and the Allen-Cahn equation suggest that the proposed algorithm is quite effective in high dimensions, in terms of both accuracy and speed. This opens up new possibilities in economics, finance, operational research, and physics, by considering all participating agents, assets, resources, or particles together at the same time, instead of making ad hoc assumptions on their inter-relationships.", "title": "" }, { "docid": "264d5db966f9cbed6b128087c7e3761e", "text": "We study auction mechanisms for sharing spectrum among a group of users, subject to a constraint on the interference temperature at a measurement point. The users access the channel using spread spectrum signaling and so interfere with each other. Each user receives a utility that is a function of the received signal-to-interference plus noise ratio. We propose two auction mechanisms for allocating the received power. The first is an auction in which users are charged for received SINR, which, when combined with logarithmic utilities, leads to a weighted max-min fair SINR allocation. The second is an auction in which users are charged for power, which maximizes the total utility when the bandwidth is large enough and the receivers are co-located. Both auction mechanisms are shown to be socially optimal for a limiting “large system” with co-located receivers, where bandwidth, power and the number of users are increased in fixed proportion. We also formulate an iterative and distributed bid updating algorithm, and specify conditions under which this algorithm converges globally to the Nash equilibrium of the auction.", "title": "" }, { "docid": "dc66c67cb33e405a548b0ec665df547f", "text": "This paper presents a deep learning method for faster magnetic resonance imaging (MRI) by reducing k-space data with sub-Nyquist sampling strategies and provides a rationale for why the proposed approach works well. 
Uniform subsampling is used in the time-consuming phase-encoding direction to capture high-resolution image information, while permitting the image-folding problem dictated by the Poisson summation formula. To deal with the localization uncertainty due to image folding, a small number of low-frequency k-space data are added. Training the deep learning net involves input and output images that are pairs of the Fourier transforms of the subsampled and fully sampled k-space data. Our experiments show the remarkable performance of the proposed method; only 29[Formula: see text] of the k-space data can generate images of high quality as effectively as standard MRI reconstruction with the fully sampled data.", "title": "" }, { "docid": "852391aa93e00f9aebdbc65c2e030abf", "text": "The iSTAR Micro Air Vehicle (MAV) is a unique 9-inch diameter ducted air vehicle weighing approximately 4 lb. The configuration consists of a ducted fan with control vanes at the duct exit plane. This VTOL aircraft not only hovers, but it can also fly at high forward speed by pitching over to a near horizontal attitude. The duct both increases propulsion efficiency and produces lift in horizontal flight, similar to a conventional planar wing. The vehicle is controlled using a rate based control system with piezo-electric gyroscopes. The Flight Control Computer (FCC) processes the pilot’s commands and the rate data from the gyroscopes to stabilize and control the vehicle. First flight of the iSTAR MAV was successfully accomplished in October 2000. Flight at high pitch angles and high speed took place in November 2000. This paper describes the vehicle, control system, and ground and flight-test results . Presented at the American Helicopter Society 57 Annual forum, Washington, DC, May 9-11, 2001. Copyright  2001 by the American Helicopter Society International, Inc. All rights reserved. Introduction The Micro Craft Inc. iSTAR is a Vertical Take-Off and Landing air vehicle (Figure 1) utilizing ducted fan technology to hover and fly at high forward speed. The duct both increases the propulsion efficiency and provides direct lift in forward flight similar to a conventional planar wing. However, there are many other benefits inherent in the iSTAR design. In terms of safety, the duct protects personnel from exposure to the propeller. The vehicle also has a very small footprint, essentially a circle equal to the diameter of the duct. This is beneficial for stowing, transporting, and in operations where space is critical, such as on board ships. The simplicity of the design is another major benefit. The absence of complex mechanical systems inherent in other VTOL designs (e.g., gearboxes, articulating blades, and counter-rotating propellers) benefits both reliability and cost. Figure 1: iSTAR Micro Air Vehicle The Micro Craft iSTAR VTOL aircraft is able to both hover and fly at high speed by pitching over towards a horizontal attitude (Figure 2). Although many aircraft in history have utilized ducted fans, most of these did not attempt to transition to high-speed forward flight. One of the few aircraft that did successfully transition was the Bell X-22 (Reference 1), first flown in 1965. The X-22, consisted of a fuselage and four ducted fans that rotated relative to the fuselage to transition the vehicle forward. The X-22 differed from the iSTAR in that its fuselage remained nearly level in forward flight, and the ducts rotated relative to the fuselage. 
Also planar tandem wings, not the ducts themselves, generated a large portion of the lift in forward flight. 1 Micro Craft Inc. is a division of Allied Aerospace Industry Incorporated (AAII) One of the first aircraft using an annular wing for direct lift was the French Coleoptère (Reference 1) built in the late 1950s. This vehicle successfully completed transition from hovering flight using an annular wing, however a ducted propeller was not used. Instead, a single jet engine was mounted inside the center-body for propulsion. Control was achieved by deflecting vanes inside the jet exhaust, with small external fins attached to the duct, and also with deployable strakes on the nose. Figure 2: Hover & flight at forward speed Less well-known are the General Dynamics ducted-fan Unmanned Air Vehicles, which were developed and flown starting in 1960 with the PEEK (Reference 1) aircraft. These vehicles, a precursor to the Micro Craft iSTAR, demonstrated stable hover and low speed flight in free-flight tests, and transition to forward flight in tethered ground tests. In 1999, Micro Craft acquired the patent, improved and miniaturized the design, and manufactured two 9-inch diameter flight test vehicles under DARPA funding (Reference 1). Working in conjunction with BAE systems (formerly Lockheed Sanders) and the Army/NASA Rotorcraft Division, these vehicles have recently completed a proof-ofconcept flight test program and have been demonstrated to DARPA and the US Army. Military applications of the iSTAR include intelligence, surveillance, target acquisition, and reconnaissance. Commercial applications include border patrol, bridge inspection, and police surveillance. Vehicle Description The iSTAR is composed of four major assemblies as shown in Figure 3: (1) the upper center-body, (2) the lower center body, (3) the duct, and (4) the landing ring. The majority of the vehicle’s structure is composed of Kevlar composite material resulting in a very strong and lightweight structure. Kevlar also lacks the brittleness common to other composite materials. Components that are not composite include the engine bulkhead (aluminum) and the landing ring (steel wire). The four major assemblies are described below. The upper center-body (UCB) is cylindrical in shape and contains the engine, engine controls, propeller, and payload. Three sets of hollow struts support the UCB and pass fuel and wiring to the duct. The propulsion Hover Low Speed High Speed system is a commercial-off-the-shelf (COTS) OS-32 SX single cylinder engine. This engine develops 1.2 hp and weighs approximately 250 grams (~0.5 lb.). Fuel consists of a mixture of alcohol, nitro-methane, and oil. The fixed-pitch propeller is attached directly to the engine shaft (without a gearbox). Starting the engine is accomplished by inserting a cylindrical shaft with an attached gear into the upper center-body and meshing it with a gear fit onto the propeller shaft (see Figure 4). The shaft is rotated using an off-board electric starter (Micro Craft is also investigating on-board starting systems). Figure 3: iSTAR configuration A micro video camera is mounted inside the nose cone, which is easily removable to accommodate modular payloads. The entire UCB can be removed in less than five minutes by removing eight screws securing the struts, and then disconnecting one fuel line and one electrical connector. Figure 4: Engine starting The lower center-body (LCB) is cylindrical in shape and is supported by eight stators. 
The sensor board is housed in the LCB, and contains three piezo-electric gyroscopes, three accelerometers, a voltage regulator, and amplifiers. The sensor signals are routed to the processor board in the duct via wires integrated into the stators. The duct is nine inches in diameter and contains a significant amount of volume for packaging. The fuel tank, Flight Control Computer (FCC), voltage regulator, batteries, servos, and receiver are all housed inside the duct. Fuel is contained in the leading edge of the duct. This tank is non-structural, and easily removable. It is attached to the duct with tape. Internal to the duct are eight fixed stators. The angle of the stators is set so that they produce an aerodynamic rolling moment countering the torque of the engine. Control vanes are attached to the trailing edge of the stators, providing roll, yaw, and pitch control. Four servos mounted inside the duct actuate the control vanes. Many different landing systems have been studied in the past. These trade studies have identified the landing ring as superior overall to other systems. The landing ring stabilizes the vehicle in close proximity to the ground by providing a restoring moment in dynamic situations. For example, if the vehicle were translating slowly and contacted the ground, the ring would pitch the vehicle upright. The ring also reduces blockage of the duct during landing and take-off by raising the vehicle above the ground. Blocking the duct can lead to reduced thrust and control power. Landing feet have also been considered because of their reduced weight. However, landing ‘feet’ lack the self-stabilizing characteristics of the ring in dynamic situations and tend to ‘catch’ on uneven surfaces. Electronics and Control System The Flight Control Computer (FCC) is housed in the duct (Figure 5). The computer processes the sensor output and pilot commands and generates pulse width modulated (PWM) signals to drive the servos. Pilot commands are generated using two conventional joysticks. The left joystick controls throttle position and heading. The right joystick controls pitch and yaw rate. The aircraft axis system is defined such that the longitudinal axis is coaxial with the engine shaft. Therefore, in hover the pitch attitude is 90 degrees and rolling the aircraft produces a heading change. Dedicated servos are used for pitch and yaw control. However, all control vanes are used for roll control (four quadrant roll control). The FCC provides the appropriate mixing for each servo. In each axis, the control system architecture consists of a conventional Proportional-Integral-Derivative (PID) controller with single-input and single-output. Initially, an attitude-based control system was desired; however, due to the lack of acceleration information and the high gyroscope drift rates, accurate attitudes could not be calculated. For this reason, a rate system was ultimately implemented. Three Murata micro piezo-electric gyroscopes provide rates about all three axes. These gyroscopes are approximately 0.6”x0.3”x0.15” in size and weigh 1 gram each (Figure 6). Figure 5: Flight Control Computer Four COTS servos are located in the duct to actuate the control surfaces. Each servo weighs 28 grams and is 1.3”x1.3”x0.6” in size. Relative to typical UAV servos, they can generate high rates, but have low bandwidth. 
Bandwidth is defined by how high a frequency the servo can accurately follow an input signal. For all servos, the output lags behind the input and the signal degrades in magnitude as the frequency increases. At low frequency, the iSTAR MAV servo output signal lags by approximately 30°,", "title": "" }, { "docid": "451110458791809898c854991a073119", "text": "This paper considers the problem of face detection in first attempt using haar cascade classifier from images containing simple and complex backgrounds. It is one of the best detector in terms of reliability and speed. Experiments were carried out on standard database i.e. Indian face database (IFD) and Caltech database. All images are frontal face images because side face views are harder to detect with this technique. Opencv 2.4.2 is used to implement the haar cascade classifier. We achieved 100% face detection rate on Indian database containing simple background and 93.24% detection rate on Caltech database containing complex background. Haar cascade classifier provides high accuracy even the images are highly affected by the illumination. The haar cascade classifier has shown superior performance with simple background images.", "title": "" }, { "docid": "49a13503920438f546822b344ad68d58", "text": "OBJECTIVES\nThe determination of cholinesterase activity has been commonly applied in the biomonitoring of exposure to organophosphates and carbamates and in the diagnosis of poisoning with anticholinesterase compounds. One of the groups who are at risk of pesticide intoxication are the workers engaged in the production of these chemicals.\n\n\nAIMS\nThe aim of this study was to assess the effect of pesticides on erythrocyte and serum cholinesterase activity in workers occupationally exposed to these chemicals.\n\n\nMETHODS\nThe subjects were 63 workers at a pesticide plant. Blood samples were collected before they were employed (phase I) and after 3 months of working in the plant (phase II). Cholinesterase level in erythrocytes (EChE) was determined using the modified Ellman method, and serum cholinesterase (SChE) by butyrylthiocholine substrate assay.\n\n\nRESULTS\nThe mean EChE levels were 48+/-11 IU/g Hb in phase I and 37+/-17 IU/g Hb in phase II (paired t-test, mean=-29; 95% CI=-43-14), p<0.001). The mean SChE level was 9569+/-2496 IU/l in phase I, and 7970+/-2067 IU/l in phase II (paired t-test, mean=1599; 95% CI=1140-2058, p<0.001). There was a significant increase in ALT level (p < 0.001) and a decrease in serum albumin level (p<0.001).\n\n\nCONCLUSION\nIn view of the significant decrease in EChE and SChE levels among pesticide workers, it seems that routine assessment of cholinesterase level in workers employed in such occupations and people handling pesticides should be made obligatory.", "title": "" }, { "docid": "7b215780b323aa3672d34ca243b1cf46", "text": "In this paper, we study the problem of semantic annotation on 3D models that are represented as shape graphs. A functional view is taken to represent localized information on graphs, so that annotations such as part segment or keypoint are nothing but 0-1 indicator vertex functions. Compared with images that are 2D grids, shape graphs are irregular and non-isomorphic data structures. To enable the prediction of vertex functions on them by convolutional neural networks, we resort to spectral CNN method that enables weight sharing by parametrizing kernels in the spectral domain spanned by graph Laplacian eigenbases. 
Under this setting, our network, named SyncSpecCNN, strives to overcome two key challenges: how to share coefficients and conduct multi-scale analysis in different parts of the graph for a single shape, and how to share information across related but different shapes that may be represented by very different graphs. Towards these goals, we introduce a spectral parametrization of dilated convolutional kernels and a spectral transformer network. Experimentally we tested SyncSpecCNN on various tasks, including 3D shape part segmentation and keypoint prediction. State-of-the-art performance has been achieved on all benchmark datasets.", "title": "" }, { "docid": "a7f535275801ee4ed9f83369f416c408", "text": "A recent development in text compression is a “block sorting” algorithm which permutes the input text according to a special sort procedure and then processes the permuted text with Move-to-Front and a final statistical compressor. The technique combines good speed with excellent compression performance. This paper investigates the fundamental operation of the algorithm and presents some improvements based on that analysis. Although block sorting is clearly related to previous compression techniques, it appears that it is best described by techniques derived from work by Shannon in 1951 on the prediction and entropy of English text. A simple model is developed which relates the compression to the proportion of zeros after the MTF stage. Short Title Block Sorting Text Compression Author Peter M. Fenwick Affiliation Department of Computer Science The University of Auckland Private Bag 92019 Auckland, New Zealand. Postal Address Dr P.M. Fenwick Dept of Computer Science The University of Auckland Private Bag 92019 Auckland New Zealand. E-mail p_fenwick@cs.auckland.ac.nz Telephone + 64 9 373 7599 ext 8298", "title": "" }, { "docid": "65d00120929fe519a64ad50392a23924", "text": "A compact printed UWB MIMO antenna with a 5.8 GHz band-notch is presented. The two antennas are located on the two opposite sides of a Printed-Circuits-Board (PCB), separated by a spacing of 13.2 mm and a small isolated element, which provides a good isolation. The antenna structure adopts coupled and parasitic modes to form multi-modal resonance that results in the desired ultra-wideband operation. There is a parasitic slit embedded on the main radiator and an isolated element employed between the two antennas. An excellent desired band-notched UWB characteristic was obtained by care design of the parasitic slit. The overall size of the proposed antenna is mere 40.2×54×0.8 mm; the radiation patterns of the two antennas cover the complementary space of 180o; the antenna yields peak gains varied from 5 to 8 dBi, and antenna radiation efficiency exceeding about 70~90 % over the operation band. The antenna port Envelope Correlation Coefficient (ECC) was less than about 0.07. Moreover, the antenna is easy to fabricate and suitable for any wireless modules applications at the UWB band.", "title": "" }, { "docid": "d846d16aac9067c82dc85b9bc17756e0", "text": "We present a novel solution to improve the performance of Chinese word segmentation (CWS) using a synthetic word parser. The parser analyses the internal structure of words, and attempts to convert out-of-vocabulary words (OOVs) into in-vocabulary fine-grained sub-words. We propose a pipeline CWS system that first predicts this fine-grained segmentation, then chunks the output to reconstruct the original word segmentation standard. 
We achieve competitive results on the PKU and MSR datasets, with substantial improvements in OOV recall.", "title": "" } ]
scidocsrr
44ed214d3eb52e6b51e2b434d9f918c3
A segmented topic model based on the two-parameter Poisson-Dirichlet process
[ { "docid": "53be2c41da023d9e2380e362bfbe7cce", "text": "A rich and  exible class of random probability measures, which we call stick-breaking priors, can be constructed using a sequence of independent beta random variables. Examples of random measures that have this characterization include the Dirichlet process, its two-parameter extension, the two-parameter Poisson–Dirichlet process, Ž nite dimensional Dirichlet priors, and beta two-parameter processes. The rich nature of stick-breaking priors offers Bayesians a useful class of priors for nonparametri c problems, while the similar construction used in each prior can be exploited to develop a general computational procedure for Ž tting them. In this article we present two general types of Gibbs samplers that can be used to Ž t posteriors of Bayesian hierarchical models based on stick-breaking priors. The Ž rst type of Gibbs sampler, referred to as a Pólya urn Gibbs sampler, is a generalized version of a widely used Gibbs sampling method currently employed for Dirichlet process computing. This method applies to stick-breaking priors with a known Pólya urn characterization, that is, priors with an explicit and simple prediction rule. Our second method, the blocked Gibbs sampler, is based on an entirely different approach that works by directly sampling values from the posterior of the random measure. The blocked Gibbs sampler can be viewed as a more general approach because it works without requiring an explicit prediction rule. We Ž nd that the blocked Gibbs avoids some of the limitations seen with the Pólya urn approach and should be simpler for nonexperts to use.", "title": "" } ]
[ { "docid": "f1052f4704b5ec55e2a131dc2f2d6afc", "text": "A simple control for a permanent motor drive is described which provides a wide speed range without the use of a shaft sensor. Two line-to-line voltages and two stator currents are sensed and processed in analog form to produce the stator flux linkage space vector. The angle of this vector is then used in a microcontroller to produce the appropriate stator current command signals for the hysteresis current controller of the inverter so that near unity power factor can be achieved over a wide range of torque and speed. A speed signal is derived from the rate of change of angle of the flux linkage. A drift compensation program is proposed to avoid calculation errors in the determination of angle position and speed. The control system has been implemented on a 5 kW motor using Nd-Fe-B magnets. The closed loop speed control has been shown to be effective down to a frequency of less than 1 Hz, thus providing a wide range of speed control. An open loop starting program is used to accelerate the motor up to this limit frequency with minimum speed oscillation.<<ETX>>", "title": "" }, { "docid": "fb204d2f9965d17ed87c8fe8d1f22cdd", "text": "Are metaphors departures from a norm of literalness? According to classical rhetoric and most later theories, including Gricean pragmatics, they are. No, metaphors are wholly normal, say the Romantic critics of classical rhetoric and a variety of modern scholars ranging from hard-nosed cognitive scientists to postmodern critical theorists. On the metaphor-as-normal side, there is a broad contrast between those, like the cognitive linguists Lakoff, Talmy or Fauconnier, who see metaphor as pervasive in language because it is constitutive of human thought, and those, like the psycholinguists Glucksberg or Kintsch, or relevance theorists, who describe metaphor as emerging in the process of verbal communication. 1 While metaphor cannot be both wholly normal and a departure from normal language use, there might be distinct, though related, metaphorical phenomena at the level of thought, on the one hand, and verbal communication, on the other. This possibility is being explored (for instance) in the work of Raymond Gibbs. 2 In this chapter, we focus on the relevance-theoretic approach to linguistic metaphors.", "title": "" }, { "docid": "ab0154cea907abbb26d074496c856bd7", "text": "So far, empirically grounded studies, which compare the phenomena of e-commerce and e-government, have been in short supply. However, such studies it has been argued would most likely deepen the understanding of the sector-specific similarities and differences leading to potential cross-fertilization between the two sectors as well as to the establishment of performance measures and success criteria. This paper reports on the findings of an empirical research pilot, which is the first in a series of planned exploratory and theory-testing studies on the subject", "title": "" }, { "docid": "d76246dfee7e2f3813e025ac34ffc354", "text": "Web usage mining is application of data mining techniques to discover usage patterns from web data, in order to better serve the needs of web based applications. The user access log files present very significant information about a web server. This paper is concerned with the in-depth analysis of Web Log Data of NASA website to find information about a web site, top errors, potential visitors of the site etc. 
which help system administrator and Web designer to improve their system by determining occurred systems errors, corrupted and broken links by using web using mining. The obtained results of the study will be used in the further development of the web site in order to increase its effectiveness.", "title": "" }, { "docid": "24c1b31bac3688c901c9b56ef9a331da", "text": "Advanced Persistent Threats (APTs) are a new breed of internet based smart threats, which can go undetected with the existing state of-the-art internet traffic monitoring and protection systems. With the evolution of internet and cloud computing, a new generation of smart APT attacks has also evolved and signature based threat detection systems are proving to be futile and insufficient. One of the essential strategies in detecting APTs is to continuously monitor and analyze various features of a TCP/IP connection, such as the number of transferred packets, the total count of the bytes exchanged, the duration of the TCP/IP connections, and details of the number of packet flows. The current threat detection approaches make extensive use of machine learning algorithms that utilize statistical and behavioral knowledge of the traffic. However, the performance of these algorithms is far from satisfactory in terms of reducing false negatives and false positives simultaneously. Mostly, current algorithms focus on reducing false positives, only. This paper presents a fractal based anomaly classification mechanism, with the goal of reducing both false positives and false negatives, simultaneously. A comparison of the proposed fractal based method with a traditional Euclidean based machine learning algorithm (k-NN) shows that the proposed method significantly outperforms the traditional approach by reducing false positive and false negative rates, simultaneously, while improving the overall classification rates.", "title": "" }, { "docid": "e2459b9991cfda1e81119e27927140c5", "text": "This research demo describes the implementation of a mobile AR-supported educational course application, AR Circuit, which is designed to promote the effectiveness of remote collaborative learning for physics. The application employs the TCP/IP protocol enabling multiplayer functionality in a mobile AR environment. One phone acts as the server and the other acts as the client. The server phone will capture the video frames, process the video frame, and send the current frame and the markers transformation matrices to the client phone.", "title": "" }, { "docid": "f2a2f1e8548cc6fcff6f1d565dfa26c9", "text": "Cabbage contains the glucosinolate sinigrin, which is hydrolyzed by myrosinase to allyl isothiocyanate. Isothiocyanates are thought to inhibit the development of cancer cells by a number of mechanisms. The effect of cooking cabbage on isothiocyanate production from glucosinolates during and after their ingestion was examined in human subjects. Each of 12 healthy human volunteers consumed three meals, at 48-h intervals, containing either raw cabbage, cooked cabbage, or mustard according to a cross-over design. At each meal, watercress juice, which is rich in phenethyl isothiocyanate, was also consumed to allow individual and temporal variation in postabsorptive isothiocyanate recovery to be measured. Volunteers recorded the time and volume of each urination for 24 h after each meal. Samples of each urination were analyzed for N-acetyl cysteine conjugates of isothiocyanates as a measure of entry of isothiocyanates into the peripheral circulation. 
Excretion of isothiocyanates was rapid and substantial after ingestion of mustard, a source of preformed allyl isothiocyanate. After raw cabbage consumption, allyl isothiocyanate was again rapidly excreted, although to a lesser extent than when mustard was consumed. On the cooked cabbage treatment, excretion of allyl isothiocyanate was considerably less than for raw cabbage, and the excretion was delayed. The results indicate that isothiocyanate production is more extensive after consumption of raw vegetables but that isothiocyanates still arise, albeit to a lesser degree, when cooked vegetables are consumed. The lag in excretion on the cooked cabbage treatment suggests that the colon microflora catalyze glucosinolate hydrolysis in this case.", "title": "" }, { "docid": "30fb0e394f6c4bf079642cd492229b67", "text": "Although modern communications services are susceptible to third-party eavesdropping via a wide range of possible techniques, law enforcement agencies in the US and other countries generally use one of two technologies when they conduct legally-authorized interception of telephones and other communications traffic. The most common of these, designed to comply with the 1994 Communications Assistance for Law Enforcement Act (CALEA), use a standard interface provided in network switches.\n This paper analyzes the security properties of these interfaces. We demonstrate that the standard CALEA interfaces are vulnerable to a range of unilateral attacks by the intercept target. In particular, because of poor design choices in the interception architecture and protocols, our experiments show it is practical for a CALEA-tapped target to overwhelm the link to law enforcement with spurious signaling messages without degrading her own traffic, effectively preventing call records as well as content from being monitored or recorded. We also identify stop-gap mitigation strategies that partially mitigate some of our identified attacks.", "title": "" }, { "docid": "b882d6bc42e34506ba7ab26ed44d9265", "text": "Production datacenters operate under various uncertainties such as traffic dynamics, topology asymmetry, and failures. Therefore, datacenter load balancing schemes must be resilient to these uncertainties; i.e., they should accurately sense path conditions and timely react to mitigate the fallouts. Despite significant efforts, prior solutions have important drawbacks. On the one hand, solutions such as Presto and DRB are oblivious to path conditions and blindly reroute at fixed granularity. On the other hand, solutions such as CONGA and CLOVE can sense congestion, but they can only reroute when flowlets emerge; thus, they cannot always react timely to uncertainties. To make things worse, these solutions fail to detect/handle failures such as blackholes and random packet drops, which greatly degrades their performance. In this paper, we introduce Hermes, a datacenter load balancer that is resilient to the aforementioned uncertainties. At its heart, Hermes leverages comprehensive sensing to detect path conditions including failures unattended before, and it reacts using timely yet cautious rerouting. Hermes is a practical edge-based solution with no switch modification. We have implemented Hermes with commodity switches and evaluated it through both testbed experiments and large-scale simulations. 
Our results show that Hermes achieves comparable performance to CONGA and Presto in normal cases, and well handles uncertainties: under asymmetries, Hermes achieves up to 10% and 20% better flow completion time (FCT) than CONGA and CLOVE; under switch failures, it outperforms all other schemes by over 32%.", "title": "" }, { "docid": "1cd860c1fd2df1a773f2324af324e72a", "text": "Network anomaly detection is an important and dynamic research area. Many network intrusion detection methods and systems (NIDS) have been proposed in the literature. In this paper, we provide a structured and comprehensive overview of various facets of network anomaly detection so that a researcher can become quickly familiar with every aspect of network anomaly detection. We present attacks normally encountered by network intrusion detection systems. We categorize existing network anomaly detection methods and systems based on the underlying computational techniques used. Within this framework, we briefly describe and compare a large number of network anomaly detection methods and systems. In addition, we also discuss tools that can be used by network defenders and datasets that researchers in network anomaly detection can use. We also highlight research directions in network anomaly detection.", "title": "" }, { "docid": "5100ef5ffa501eb7193510179039cd82", "text": "The interplay between caching and HTTP Adaptive Streaming (HAS) is known to be intricate, and possibly detrimental to QoE. In this paper, we make the case for caching-aware rate decision algorithms at the client side which do not require any collaboration with cache or server. To this goal, we introduce the optimization model which allows to compute the optimal rate decisions in the presence of cache, and compare the current main representatives of HAS algorithms (RBA and BBA) to this optimal. This allows us to assess how far from the optimal these versions are, and on which to build a caching-aware rate decision algorithm.", "title": "" }, { "docid": "678bcac5e2cc072ecdd4290ad7f4d769", "text": "Health insurance companies in Brazil have their data about claims organized having the view only for providers. In this way, they lose the physician view and how they share patients. Partnership between physicians can be viewed as fruitful work in most of the cases, but sometimes this could be a problem for health insurance companies and patients, for example a recommendation to visit another physician only because they work in the same clinic. The focus of the work is to better understand physicians' activities and how these activities are represented in the data. Our approach considers three aspects: the relationships among physicians, the relationships between physicians and patients, and the relationships between physicians and health providers. We present the results of an analysis of a claims database (detailing 18 months of activity) from a large health insurance company in Brazil. The main contribution presented in this paper is a set of models to represent: mutual referral between physicians, patient retention, and physician centrality in the health insurance network. Our results show that the proposed models, based on social network frameworks, extracted surprising insights about physicians from real health insurance claims data.", "title": "" }, { "docid": "8edcb0c2c5f4732a8c06121b8d774b44", "text": "We propose a novel scene graph generation model called Graph R-CNN, that is both effective and efficient at detecting objects and their relations in images. 
Our model contains a Relation Proposal Network (RePN) that efficiently deals with the quadratic number of potential relations between objects in an image. We also propose an attentional Graph Convolutional Network (aGCN) that effectively captures contextual information between objects and relations. Finally, we introduce a new evaluation metric that is more holistic and realistic than existing metrics. We report state-of-the-art performance on scene graph generation as evaluated using both existing and our proposed metrics.", "title": "" }, { "docid": "9c510d7ddeb964c5d762d63d9e284f44", "text": "This paper explains the rationale for the development of reconfigurable manufacturing systems, which possess the advantages both of dedicated lines and of flexible systems. The paper defines the core characteristics and design principles of reconfigurable manufacturing systems (RMS) and describes the structure recommended for practical RMS with RMS core characteristics. After that, a rigorous mathematical method is introduced for designing RMS with this recommended structure. An example is provided to demonstrate how this RMS design method is used. The paper concludes with a discussion of reconfigurable assembly systems. © 2011 The Society of Manufacturing Engineers. Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "404fdd6f2d7f1bf69f2f010909969fa9", "text": "Many applications in Multilingual and Multimodal Information Access involve searching large databases of high dimensional data objects with multiple (conditionally independent) views. In this work we consider the problem of learning hash functions for similarity search across the views for such applications. We propose a principled method for learning a hash function for each view given a set of multiview training data objects. The hash functions map similar objects to similar codes across the views thus enabling cross-view similarity search. We present results from an extensive empirical study of the proposed approach which demonstrate its effectiveness on Japanese language People Search and Multilingual People Search problems.", "title": "" }, { "docid": "f1132d786a6384e3c1a6db776922ee69", "text": "The analysis of forensic investigation results has generally been identified as the most complex phase of a digital forensic investigation. This phase becomes more complicated and time consuming as the storage capacity of digital devices is increasing, while at the same time the prices of those devices are decreasing. Although there are some tools and techniques that assist the investigator in the analysis of digital evidence, they do not adequately address some of the serious challenges, particularly with the time and effort required to conduct such tasks. In this paper, we consider the use of semantic web technologies and in particular the ontologies, to assist the investigator in analyzing digital evidence. A novel ontology-based framework is proposed for forensic analysis tools, which we believe has the potential to influence the development of such tools. The framework utilizes a set of ontologies to model the environment under investigation. The evidence extracted from the environment is initially annotated using the Resource Description Framework (RDF). The evidence is then merged from various sources to identify new and implicit information with the help of inference engines and classification mechanisms. 
In addition, we present the ongoing development of a forensic analysis tool to analyze content retrieved from Android smart phones. For this purpose, several ontologies have been created to model some concepts of the smart phone environment.", "title": "" }, { "docid": "38a18bfce2cb33b390dd7c7cf5a4afd1", "text": "Automatic photo assessment is a high emerging research field with wide useful ‘real-world’ applications. Due to the recent advances in deep learning, one can observe very promising approaches in the last years. However, the proposed solutions are adapted and optimized for ‘isolated’ datasets making it hard to understand the relationship between them and to benefit from the complementary information. Following a unifying approach, we propose in this paper a learning model that integrates the knowledge from different datasets. We conduct a study based on three representative benchmark datasets for photo assessment. Instead of developing for each dataset a specific model, we design and adapt sequentially a unique model which we nominate UNNA. UNNA consists of a deep convolutional neural network, that predicts for a given image three kinds of aesthetic information: technical quality, high-level semantical quality, and a detailed description of photographic rules. Due to the sequential adaptation that exploits the common features between the chosen datasets, UNNA has comparable performances with the state-of-the-art solutions with effectively less parameter. The final architecture of UNNA gives us some interesting indication of the kind of shared features as well as individual aspects of the considered datasets.", "title": "" }, { "docid": "033b05d21f5b8fb5ce05db33f1cedcde", "text": "Seasonal occurrence of the common cutworm Spodoptera litura (Fab.) (Lepidoptera: Noctuidae) moths captured in synthetic sex pheromone traps and associated field population of eggs and larvae in soybean were examined in India from 2009 to 2011. Male moths of S. litura first appeared in late July or early August and continued through October. Peak male trap catches occurred during the second fortnight of September, which was within soybean reproductive stages. Similarly, the first appearance of S. litura egg masses and larval populations were observed after the first appearance of male moths in early to mid-August, and were present in the growing season up to late September to mid-October. The peak appearance of egg masses and larval populations always corresponded with the peak activity of male moths recorded during mid-September in all years. Correlation studies showed that weekly mean trap catches were linearly and positively correlated with egg masses and larval populations during the entire growing season of soybean. Seasonal means of male moth catches in pheromone traps during the 2010 and 2011 seasons were significantly lower than the catches during the 2009 season. However, seasonal means of the egg masses and larval populations were not significantly different between years. Pheromone traps may be useful indicators of the onset of numbers of S. litura eggs and larvae in soybean fields.", "title": "" }, { "docid": "3eec1e9abcb677a4bc8f054fa8827f4f", "text": "We present a neural semantic parser that translates natural language questions into executable SQL queries with two key ideas. First, we develop an encoder-decoder model, where the decoder uses a simple type system of SQL to constraint the output prediction, and propose a value-based loss when copying from input tokens. 
Second, we explore using the execution semantics of SQL to repair decoded programs that result in a runtime error or return an empty result. We propose two model-agnostic repair approaches, an ensemble model and a local program repair, and demonstrate their effectiveness over the original model. We evaluate our model on the WikiSQL dataset and show that our model achieves close to state-of-the-art results with less model complexity.", "title": "" } ]
scidocsrr
fba44c92f0153a324d800ac71a54c886
Gender Representation in Cinematic Content: A Multimodal Approach
[ { "docid": "e95541d0401a196b03b94dd51dd63a4b", "text": "In the information age, computer applications have become part of modern life and this has in turn encouraged the expectations of friendly interaction with them. Speech, as “the” communication mode, has seen the successful development of quite a number of applications using automatic speech recognition (ASR), including command and control, dictation, dialog systems for people with impairments, translation, etc. But the actual challenge goes beyond the use of speech in control applications or to access information. The goal is to use speech as an information source, competing, for example, with text online. Since the technology supporting computer applications is highly dependent on the performance of the ASR system, research into ASR is still an active topic, as is shown by the range of research directions suggested in (Baker et al., 2009a, 2009b). Automatic speech recognition – the recognition of the information embedded in a speech signal and its transcription in terms of a set of characters, (Junqua & Haton, 1996) – has been object of intensive research for more than four decades, achieving notable results. It is only to be expected that speech recognition advances make spoken language as convenient and accessible as online text when the recognizers reach error rates near zero. But while digit recognition has already reached a rate of 99.6%, (Li, 2008), the same cannot be said of phone recognition, for which the best rates are still under 80% 1,(Mohamed et al., 2011; Siniscalchi et al., 2007). Speech recognition based on phones is very attractive since it is inherently free from vocabulary limitations. Large Vocabulary ASR (LVASR) systems’ performance depends on the quality of the phone recognizer. That is why research teams continue developing phone recognizers, in order to enhance their performance as much as possible. Phone recognition is, in fact, a recurrent problem for the speech recognition community. Phone recognition can be found in a wide range of applications. In addition to typical LVASR systems like (Morris & Fosler-Lussier, 2008; Scanlon et al., 2007; Schwarz, 2008), it can be found in applications related to keyword detection, (Schwarz, 2008), language recognition, (Matejka, 2009; Schwarz, 2008), speaker identification, (Furui, 2005) and applications for music identification and translation, (Fujihara & Goto, 2008; Gruhne et al., 2007). The challenge of building robust acoustic models involves applying good training algorithms to a suitable set of data. The database defines the units that can be trained and", "title": "" }, { "docid": "9a5e04b2a6b8e81591a602b0dd81fa10", "text": "Direct content analysis reveals important details about movies including those of gender representations and potential biases. We investigate the differences between male and female character depictions in movies, based on patterns of language used. Specifically, we use an automatically generated lexicon of linguistic norms characterizing gender ladenness. We use multivariate analysis to investigate gender depictions and correlate them with elements of movie production. The proposed metric differentiates between male and female utterances and exhibits some interesting interactions with movie genres and the screenplay writer gender.", "title": "" } ]
[ { "docid": "06e3d228e9fac29dab7180e56f087b45", "text": "Curiosity is thought to be an intrinsically motivated driving force for seeking information. Thus, the opportunity for an information gain (IG) should instil curiosity in humans and result in information gathering actions. To investigate if, and how, information acts as an intrinsic reward, a search task was set in a context of blurred background images which could be revealed by iterative clicking. The search task was designed such that it prevented efficient IG about the underlying images. Participants therefore had to trade between clicking regions with high search target probability or high expected image content information. Image content IG was established from “information-maps” based on participants exploration with the intention of understanding (1) the main theme of the image and (2) how interesting the image might appear to others. Note that IG is in this thesis not identical with the information theoretic concept of information gain, the quantities are however probably related. It was hypothesised that participants would be distracted by visually informative regions and that images independently rated as more interesting would yield higher image based IG. It was also hypothesised that image based IG would increase as a function of time. Results show that participants sometimes explored images driven by curiosity, and that there was considerable individual variation in which images participants were curious about. Independent interest ratings did not account for image based IG. The level of IG increased over trials, interestingly without affecting participants’ performance on the visual search task designed to prevent IG. Results support that IG is rewarding as participants learned to optimize IG over trials without compromising performance on the extrinsically motivated search; managing to both keep the cake and eat it.", "title": "" }, { "docid": "ba590a4ae3bab635a07054860222744a", "text": "Interactive Strategy Training for Active Reading and Thinking (iSTART) is a Web-based application that provides young adolescent to college-age students with high-level reading strategy training to improve comprehension of science texts. iSTART is modeled after an effective, human-delivered intervention called self-explanation reading training (SERT), which trains readers to use active reading strategies to self-explain difficult texts more effectively. To make the training more widely available, the Web-based trainer has been developed. Transforming the training from a human-delivered application to a computer-based one has resulted in a highly interactive trainer that adapts its methods to the performance of the students. The iSTART trainer introduces the strategies in a simulated classroom setting with interaction between three animated characters-an instructor character and two student characters-and the human trainee. Thereafter, the trainee identifies the strategies in the explanations of a student character who is guided by an instructor character. Finally, the trainee practices self-explanation under the guidance of an instructor character. We describe this system and discuss how appropriate feedback is generated.", "title": "" }, { "docid": "88128ec1201e2202f13f2c09da0f07f2", "text": "A new mechanism is proposed for exciting the magnetic state of a ferromagnet. 
Assuming ballistic conditions and using WKB wave functions, we predict that a transfer of vectorial spin accompanies an electric current flowing perpendicular to two parallel magnetic films connected by a normal metallic spacer. This spin transfer drives motions of the two magnetization vectors within their instantaneously common plane. Consequent new mesoscopic precession and switching phenomena with potential applications are predicted. PACS: 75.50.Rr; 75.70.Cn A magnetic multilayer (MML) is composed of alternating ferromagnetic and paramagnetic sublayers whose thicknesses usually range between 1 and 10 nm. The discovery in 1988 of giant magnetoresistance (GMR) in such multilayers stimulates much current research [1]. Although the initial reports dealt with currents flowing in the layer planes (CIP), the magnetoresistive phenomenon is known to be even stronger for currents flowing perpendicular to the plane (CPP) [2]. We predict here that the spin-polarized nature of such a perpendicular current generally creates a mutual transference of spin angular momentum between the magnetic sublayers which is manifested in their dynamic response. This response, which occurs only for CPP geometry, we propose to characterize as spin transfer. It can dominate the Larmor response to the magnetic field induced by the current when the magnetic sublayer thickness is about 1 nm and the smaller of its other two dimensions is less than 10^2 to 10^3 nm. On this mesoscopic scale, two new phenomena become possible: a steady precession driven by a constant current, and alternatively a novel form of switching driven by a pulsed current. Other forms of current-driven magnetic response without the use of any electromagnetically induced magnetic field are already known. Reports of both theory and experiments show how the exchange effect of external current flowing through a ferromagnetic domain wall causes it to move [3]. Even closer to the present subject is the magnetic response to tunneling current in the case of the sandwich structure ferromagnet/insulator/ferromagnet (F/I/F) predicted previously [4]. Unfortunately, theoretical relations indicated that the dissipation of energy, and therefore temperature rise, needed to produce more than barely observable spin-transfer through a tunneling barrier is prohibitively large. However, the advent of multilayers incorporating very thin paramagnetic metallic spacers, rather than a barrier, places the realization of spin transfer in a different light. In the first place, the metallic spacer implies a low resistance and therefore low Ohmic dissipation for a given current, to which spin-transfer effects are proportional. Secondly, numerous experiments [5] and theories [6] show that the fundamental interlayer exchange coupling of RKKY type diminishes in strength and varies in sign as spacer thickness increases. Indeed, there exist experimental spacers which are thick enough (e.g. 4 nm) for the exchange coupling to be negligible even though spin relaxation is too weak to significantly diminish the GMR effect which relies on preservation of spin direction during electron transit across the spacer. 
Moreover, the same fact of long spin relaxation time in magnetic multilayers is illustrated on an even larger distance scale, an order of magnitude greater than the circa 10 nm electron mean free path, by spin injection experiments [7]. It follows, as we show below, that interesting current-driven spin-transfer effects are expected under laboratory conditions involving very small distance scales. We begin with simple arguments to explain current-driven spin transfer and establish its physical scale. We then sketch a detailed treatment and summarize its results. Finally, we predict two spin-transfer phenomena: steady magnetic precession driven by a constant current and a novel form of magnetic switching. We consider the five metallic regions represented schematically in Fig. 1. Layers A, B, and C are paramagnetic, whilst F1 and F2 are ferromagnetic. The instantaneous macroscopic vectors ħS1 and ħS2, forming the included angle θ, represent the respective total spin momenta per unit area of the ferromagnets. Now consider a flow of electrons moving rightward through the sandwich. The works on spin injection [7] show that if the thickness of spacer B is less than the spin-diffusion length, usually at least 100 nm, then some degree of spin polarization along the instantaneous axis parallel to the vector S1 of local ferromagnetic polarization in F1 will be present in the electrons impinging on F2. This leads us to consider a three-layer (B, F2, C in Fig. 1) model in which an electron with initial spin state along the direction S1 is incident from the left.", "title": "" }, { "docid": "68257960bdbc6c4f326108ee7ba3e756", "text": "In computer vision, pixelwise dense prediction is the task of predicting a label for each pixel in the image. Convolutional neural networks achieve good performance on this task, while being computationally efficient. In this paper we carry these ideas over to the problem of assigning a sequence of labels to a set of speech frames, a task commonly known as framewise classification. We show that the dense prediction view of framewise classification offers several advantages and insights, including computational efficiency and the ability to apply batch normalization. When doing dense prediction we pay specific attention to strided pooling in time and introduce an asymmetric dilated convolution, called time-dilated convolution, that allows for efficient and elegant implementation of pooling in time. We show that by using time-dilated convolutions with a very deep VGG-style CNN with batch normalization, we achieve the best published single-model accuracy result on the switchboard-2000 benchmark dataset.", "title": "" }, { "docid": "90813d00050fdb1b8ce1a9dffe858d46", "text": "Background: Diabetes mellitus is associated with biochemical and pathological alterations in the liver. The aim of this study was to investigate the effects of apple cider vinegar (ACV) on serum biochemical markers and histopathological changes in the liver of diabetic rats for 30 days. Effects were evaluated using streptozotocin (STZ)-induced diabetic rats as an experimental model. Materials and methods: Diabetes mellitus was induced by a single dose of STZ (65 mg/kg) given intraperitoneally. Thirty Wistar rats were divided into three groups: control group, STZ-treated group and STZ plus ACV treated group (2 ml/kg BW). Animals were sacrificed 30 days post treatment. 
Results: Biochemical results indicated that, ACV caused a significant decrease in glucose, TC, LDL-c and a significant increase in HDL-c. Histopathological examination of the liver sections of diabetic rats showed fatty changes in the cytoplasm of the hepatocytes in the form of accumulation of lipid droplets, lymphocytic infiltration. Electron microscopic studies revealed aggregations of polymorphic mitochondria with apparent loss of their cristae and condensed matrices. Besides, the rough endoplasmic reticulum was proliferating and fragmented into smaller stacks. The cytoplasm of the hepatocytes exhibited vacuolations and displayed a large number of lipid droplets of different sizes. On the other hand, the liver sections of diabetic rats treated with ACV showed minimal toxic effects due to streptozotocin. These ultrastructural results revealed that treatment of diabetic rats with ACV led to apparent recovery of the injured hepatocytes. In prophetic medicine, Prophet Muhammad peace is upon him strongly recommended eating vinegar in the Prophetic Hadeeth: \"vinegar is the best edible\". Conclusion: This study showed that ACV, in early stages of diabetes inductioncan decrease the destructive progress of diabetes and cause hepatoprotection against the metabolic damages resulting from streptozotocininduced diabetes mellitus.", "title": "" }, { "docid": "703696ca3af2a485ac34f88494210007", "text": "Cells navigate environments, communicate and build complex patterns by initiating gene expression in response to specific signals. Engineers seek to harness this capability to program cells to perform tasks or create chemicals and materials that match the complexity seen in nature. This Review describes new tools that aid the construction of genetic circuits. Circuit dynamics can be influenced by the choice of regulators and changed with expression 'tuning knobs'. We collate the failure modes encountered when assembling circuits, quantify their impact on performance and review mitigation efforts. Finally, we discuss the constraints that arise from circuits having to operate within a living cell. Collectively, better tools, well-characterized parts and a comprehensive understanding of how to compose circuits are leading to a breakthrough in the ability to program living cells for advanced applications, from living therapeutics to the atomic manufacturing of functional materials.", "title": "" }, { "docid": "3f0d37296258c68a20da61f34364405d", "text": "Need to develop human body's posture supervised robots, gave the push to researchers to think over dexterous design of exoskeleton robots. It requires to develop quantitative techniques to assess motor function and generate the command for the robots to act accordingly with complex human structure. In this paper, we present a new technique for the upper limb power exoskeleton robot in which load is gripped by the human subject and not by the robot while the robot assists. Main challenge is to find non-biological signal based human desired motion intention to assist as needed. For this purpose, we used newly developed Muscle Circumference Sensor (MCS) instead of electromyogram (EMG) sensors. MCS together with the force sensors is used to estimate the human interactive force from which desired human motion is extracted using adaptive Radial Basis Function Neural Network (RBFNN). Developed Upper limb power exoskeleton has seven degrees of freedom (DOF) in which five DOF are passive while two are active. 
Active joints include shoulder and elbow in Sagittal plane while abduction and adduction motion in shoulder joint is provided by the passive joints. To ensure high quality performance model reference based adaptive impedance controller is employed. Exoskeleton performance is evaluated experimentally by a neurologically intact subject which validates the effectiveness.", "title": "" }, { "docid": "3079e9dc5846c73c57f8d7fbf35d94a1", "text": "Data mining techniques is rapidly increasing in the research of educational domains. Educational data mining aims to discover hidden knowledge and patterns about student performance. This paper proposes a student performance prediction model by applying two classification algorithms: KNN and Naïve Bayes on educational data set of secondary schools, collected from the ministry of education in Gaza Strip for 2015 year. The main objective of such classification may help the ministry of education to improve the performance due to early prediction of student performance. Teachers also can take the proper evaluation to improve student learning. The experimental results show that Naïve Bayes is better than KNN by receiving the highest accuracy value of 93.6%.", "title": "" }, { "docid": "f5f70dca677752bcaa39db59988c088e", "text": "To examine how inclusive our schools are after 25 years of educational reform, students with disabilities and their parents were asked to identify current barriers and provide suggestions for removing those barriers. Based on a series of focus group meetings, 15 students with mobility limitations (9-15 years) and 12 parents identified four categories of barriers at their schools: (a) the physical environment (e.g., narrow doorways, ramps); (b) intentional attitudinal barriers (e.g., isolation, bullying); (c) unintentional attitudinal barriers (e.g., lack of knowledge, understanding, or awareness); and (d) physical limitations (e.g., difficulty with manual dexterity). Recommendations for promoting accessibility and full participation are provided and discussed in relation to inclusive education efforts. Exceptional Children", "title": "" }, { "docid": "6bfc3d00fe6e9fcdb09ad8993b733dfd", "text": "This article presents the upper-torso design issue of Affeto who can physically interact with humans, which biases the perception of affinity beyond the uncanny valley effect. First, we review the effect and hypothesize that the experience of physical interaction with Affetto decreases the effect. Then, the reality of physical existence is argued with existing platforms. Next, the design concept and a very preliminary experiment are shown. Finally, future issues are given. I. THE UNCANNY VALLEY REVISITED The term “Uncanny” is a translation of Freud’s term “Der Unheimliche” and applied to a phenomenon noted by Masahiro Mori who mentioned that the presence of movement steepens the slopes of the uncanny valley (Figure 2 in [1]). Several studies on this effect can be summarised as follows1. 1) Multimodal impressions such as visual appearance, body motion, sounds (speech and others), and tactile sensation should be congruent to decrease the valley steepness. 2) Antipathetic expressions may exaggerate the valley effect. The current technologies enable us to minimize the gap caused by mismatch among cross-modal factors. Therefore, the valley effect is expected to be reduced gradually. 
For example, facial expressions and tactile sensations of Affetto [2] are realistic and congruent due to baby-like face skin mask of urethane elastomer gel (See Figure 1). Generated facial expressions almost conquered the uncanny valley. Further, baby-like facial expressions may contribute to the reduction of the valley effect due to 2). In addition to these, we suppose that the motor experience of physical interactions with robots biases the perception of affinity as motor experiences biases the perception of movements [3]. To verify this hypothesis, Affetto needs its body which realizes physical interactions naturally. The rest of this article is organized as follows. The next section argues about the reality of physical existence with existing platforms. Then, the design concept and a very preliminary experiment are shown, and the future issues are given.", "title": "" }, { "docid": "f0365424e98ebcc0cb06ce51f65cbe7c", "text": "The most important milestone in the field of magnetic sensors was that AMR sensors started to replace Hall sensors in many application, were larger sensitivity is an advantage. GMR and SDT sensor finally found limited applications. We also review the development in miniaturization of fluxgate sensors and briefly mention SQUIDs, resonant sensors, GMIs and magnetomechanical sensors.", "title": "" }, { "docid": "316ead33d0313804b7aa95570427e375", "text": "We consider an optimal control problem with a deterministic finite horizon and state variable dynamics given by a Markovswitching jump-diffusion stochastic differential equation. Our main results extend the dynamic programming technique to this larger family of stochastic optimal control problems. More specifically, we provide a detailed proof of Bellman’s optimality principle (or dynamic programming principle) and obtain the corresponding Hamilton-Jacobi-Belman equation, which turns out to be a partial integro-differential equation due to the extra terms arising from the Lévy process and the Markov process. As an application of our results, we study a finite horizon consumptioninvestment problem for a jump-diffusion financial market consisting of one risk-free asset and one risky asset whose coefficients are assumed to depend on the state of a continuous time finite state Markov process. We provide a detailed study of the optimal strategies for this problem, for the economically relevant families of power utilities and logarithmic utilities.", "title": "" }, { "docid": "784c7c785b2e47fad138bba38b753f31", "text": "A local linear wavelet neural network (LLWNN) is presented in this paper. The difference of the network with conventional wavelet neural network (WNN) is that the connection weights between the hidden layer and output layer of conventional WNN are replaced by a local linear model. A hybrid training algorithm of particle swarm optimization (PSO) with diversity learning and gradient descent method is introduced for training the LLWNN. Simulation results for the prediction of time-series show the feasibility and effectiveness of the proposed method. r 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "f1977e5f8fbc0df4df0ac6bf1715c254", "text": "Instabilities in MOS-based devices with various substrates ranging from Si, SiGe, IIIV to 2D channel materials, can be explained by defect levels in the dielectrics and non-radiative multi-phonon (NMP) barriers. 
However, recent results obtained on single defects have demonstrated that they can show a highly complex behaviour since they can transform between various states. As a consequence, detailed physical models are complicated and computationally expensive. As will be shown here, as long as only lifetime predictions for an ensemble of defects is needed, considerable simplifications are possible. We present and validate an oxide defect model that captures the essence of full physical models while reducing the complexity substantially. We apply this model to investigate the improvement in positive bias temperature instabilities due to a reliability anneal. Furthermore, we corroborate the simulated defect bands with prior defect-centric studies and perform lifetime projections.", "title": "" }, { "docid": "7303f634355e24f0dba54daa29ed2737", "text": "A power divider/combiner based on a double sided slotted waveguide geometry suitable for Ka-band applications is proposed. This structure allows up to 50% reduction of the total device length compared to previous designs of this type without compromising manufacturing complexity or combining efficiency. Efficient design guidelines based on an equivalent circuit technique are provided and the performance is demonstrated by means of a 12-way divider/combiner prototype operating in the range 29-31 GHz. Numerical simulations show that back to back insertion loss of 1.19 dB can be achieved, corresponding to a combining efficiency of 87%. The design is validated by means of manufacturing and testing an experimental prototype with measured back-to-back insertion loss of 1.83 dB with a 3 dB bandwidth of 20.8%, corresponding to a combining efficiency of 81%.", "title": "" }, { "docid": "c30f721224317a41c1e316c158549d81", "text": "The oxysterol receptor LXR is a key transcriptional regulator of lipid metabolism. LXR increases expression of SREBP-1, which in turn regulates at least 32 genes involved in lipid synthesis and transport. We recently identified 25-hydroxycholesterol-3-sulfate (25HC3S) as an important regulatory molecule in the liver. We have now studied the effects of 25HC3S and its precursor, 25-hydroxycholesterol (25HC), on lipid metabolism as mediated by the LXR/SREBP-1 signaling in macrophages. Addition of 25HC3S to human THP-1-derived macrophages markedly decreased nuclear LXR protein levels. 25HC3S administration was followed by dose- and time-dependent decreases in SREBP-1 mature protein and mRNA levels. 25HC3S decreased the expression of SREBP-1-responsive genes, acetyl-CoA carboxylase-1, and fatty acid synthase (FAS) as well as HMGR and LDLR, which are key proteins involved in lipid metabolism. Subsequently, 25HC3S decreased intracellular lipids and increased cell proliferation. In contrast to 25HC3S, 25HC acted as an LXR ligand, increasing ABCA1, ABCG1, SREBP-1, and FAS mRNA levels. In the presence of 25HC3S, 25HC, and LXR agonist T0901317, stimulation of LXR targeting gene expression was repressed. We conclude that 25HC3S acts in macrophages as a cholesterol satiety signal, downregulating cholesterol and fatty acid synthetic pathways via inhibition of LXR/SREBP signaling. A possible role of oxysterol sulfation is proposed.", "title": "" }, { "docid": "33e45b66cca92f15270500c32a1c0b94", "text": "We study a dataset of billions of program binary files that appeared on 100 million computers over the course of 12 months, discovering that 94% of these files were present on a single machine. 
Though malware polymorphism is one cause for the large number of singleton files, additional factors also contribute to polymorphism, given that the ratio of benign to malicious singleton files is 80:1. The huge number of benign singletons makes it challenging to reliably identify the minority of malicious singletons. We present a large-scale study of the properties, characteristics, and distribution of benign and malicious singleton files. We leverage the insights from this study to build a classifier based purely on static features to identify 92% of the remaining malicious singletons at a 1.4% percent false positive rate, despite heavy use of obfuscation and packing techniques by most malicious singleton files that we make no attempt to de-obfuscate. Finally, we demonstrate robustness of our classifier to important classes of automated evasion attacks.", "title": "" }, { "docid": "7e2f657115b3c9163a7fe9b34d95a314", "text": "Even though several youth fatal suicides have been linked with school victimization, there is lack of evidence on whether cyberbullying victimization causes students to adopt suicidal behaviors. To investigate this issue, I use exogenous state-year variation in cyberbullying laws and information on high school students from the Youth Risk Behavioral Survey within a bivariate probit framework, and complement these estimates with matching techniques. I find that cyberbullying has a strong impact on all suicidal behaviors: it increases suicidal thoughts by 14.5 percentage points and suicide attempts by 8.7 percentage points. Even if the focus is on statewide fatal suicide rates, cyberbullying still leads to significant increases in suicide mortality, with these effects being stronger for men than for women. Since cyberbullying laws have an effect on limiting cyberbullying, investing in cyberbullying-preventing strategies can improve individual health by decreasing suicide attempts, and increase the aggregate health stock by decreasing suicide rates.", "title": "" }, { "docid": "f636eb06a1158f4593ce8027d6f274e7", "text": "Various modifications of bagging for class imbalanced data are discussed. An experimental comparison of known bagging modifications shows that integrating with undersampling is more powerful than oversampling. We introduce Local-and-Over-All Balanced bagging where probability of sampling an example is tuned according to the class distribution inside its neighbourhood. Experiments indicate that this proposal is competitive to best undersampling bagging extensions.", "title": "" }, { "docid": "bffbc725b52468b41c53b156f6eadedb", "text": "This paper presents the design and experimental evaluation of an underwater robot that is propelled by a pair of lateral undulatory fins, inspired by the locomotion of rays and cuttlefish. Each fin mechanism is comprised of three individually actuated fin rays, which are interconnected by an elastic membrane. An on-board microcontroller generates the rays’ motion pattern that result in the fins’ undulations, through which propulsion is generated. The prototype, which is fully untethered and energetically autonomous, also integrates an Inertial Measurement Unit for navigation purposes, a wireless communication module, and a video camera for recording underwater footage. Due to its small size and low manufacturing cost, the developed prototype can also serve as an educational platform for underwater robotics.", "title": "" } ]
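The figures quoted in the singleton-file passage above (an 80:1 benign-to-malicious ratio, 92% detection at a 1.4% false-positive rate) imply that even a low false-positive rate produces many benign alarms at that base rate. The short script below only restates that arithmetic; it is not taken from the paper.

```python
# Implied precision when flagging malicious singleton files, using only the
# figures quoted in the passage: 80 benign singletons per malicious one,
# a 92% true-positive rate, and a 1.4% false-positive rate.
benign_per_malicious = 80
tpr = 0.92   # fraction of malicious singletons flagged
fpr = 0.014  # fraction of benign singletons flagged

flagged_malicious = 1 * tpr
flagged_benign = benign_per_malicious * fpr
precision = flagged_malicious / (flagged_malicious + flagged_benign)

print(f"files flagged per malicious singleton: {flagged_malicious + flagged_benign:.2f}")
print(f"implied precision: {precision:.1%}")  # roughly 45%
```

Under these assumptions only about 45% of flagged singletons would actually be malicious, which illustrates why the heavy class imbalance makes the problem hard.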
scidocsrr
33a1e80708d5470183237eedb4142b68
Interleaved Converter With Voltage Multiplier Cell for High Step-Up and High-Efficiency Conversion
[ { "docid": "2048695744ff2a7905622dfe671ddb88", "text": "Many applications call for high step-up dc–dc converters that do not require isolation. Some dc–dc converters can provide high step-up voltage gain, but with the penalty of either an extreme duty ratio or a large amount of circulating energy. DC–DC converters with coupled inductors can provide high voltage gain, but their efficiency is degraded by the losses associated with leakage inductors. Converters with active clamps recycle the leakage energy at the price of increasing topology complexity. A family of high-efficiency, high step-up dc–dc converters with simple topologies is proposed in this paper. The proposed converters, which use diodes and coupled windings instead of active switches to realize functions similar to those of active clamps, perform better than their active-clamp counterparts. High efficiency is achieved because the leakage energy is recycled and the output rectifier reverse-recovery problem is alleviated.", "title": "" } ]
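To make the "extreme duty ratio" point above concrete: an ideal boost converter has a voltage gain of M = 1/(1 − D), so a large step-up forces the duty cycle D close to 1. The snippet below works through the numbers for a 24 V to 400 V conversion (the ratings quoted for one of the converters later in this section); the extra gain factor k is purely illustrative and is not the gain expression of any specific topology discussed here.

```python
def boost_duty(gain):
    """Duty cycle an ideal boost converter needs for a given voltage gain,
    from M = 1 / (1 - D)."""
    return 1.0 - 1.0 / gain

target_gain = 400 / 24   # e.g. 24 V input, 400 V output
print(f"plain ideal boost:  D = {boost_duty(target_gain):.3f}")   # about 0.94

# Purely illustrative: if coupled windings / multiplier cells contribute an
# extra gain factor k on top of the boost stage (M ~= k / (1 - D)), the
# required duty cycle relaxes considerably.
for k in (2, 3, 4):
    print(f"extra gain k = {k}:    D = {boost_duty(target_gain / k):.3f}")
```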
[ { "docid": "1007cd10c262718fe108c9ddb0df1091", "text": "Shalgam juice, hardaliye, boza, ayran (yoghurt drink) and kefir are the most known traditional Turkish fermented non-alcoholic beverages. The first three are obtained from vegetables, fruits and cereals, and the last two ones are made of milk. Shalgam juice, hardaliye and ayran are produced by lactic acid fermentation. Their microbiota is mainly composed of lactic acid bacteria (LAB). Lactobacillus plantarum, Lactobacillus brevis and Lactobacillus paracasei subsp. paracasei in shalgam fermentation and L. paracasei subsp. paracasei and Lactobacillus casei subsp. pseudoplantarum in hardaliye fermentation are predominant. Ayran is traditionally prepared by mixing yoghurt with water and salt. Yoghurt starter cultures are used in industrial ayran production. On the other hand, both alcohol and lactic acid fermentation occur in boza and kefir. Boza is prepared by using a mixture of maize, wheat and rice or their flours and water. Generally previously produced boza or sourdough/yoghurt are used as starter culture which is rich in Lactobacillus spp. and yeasts. Kefir is prepared by inoculation of raw milk with kefir grains which consists of different species of yeasts, LAB, acetic acid bacteria in a protein and polysaccharide matrix. The microbiota of boza and kefir is affected from raw materials, the origin and the production methods. In this review, physicochemical properties, manufacturing technologies, microbiota and shelf life and spoilage of traditional fermented beverages were summarized along with how fermentation conditions could affect rheological properties of end product which are important during processing and storage.", "title": "" }, { "docid": "1862a9fa9db1fa4b4f2c34873686f190", "text": "This paper surveys the work of the qualitative spatial reasoning group at the University of Leeds. The group has developed a number of logical calculi for representing and reasoning with qualitative spatial relations over regions. We motivate the use of regions as the primary spatial entity and show how a rich language can be built up from surprisingly few primitives. This language can distinguish between convex and a variety of concave shapes and there is also an extension which handles regions with uncertain boundaries. We also present a variety of reasoning techniques, both for static and dynamic situations. A number of possible application areas are briefly mentioned.", "title": "" }, { "docid": "f44e541e7f1f5c41f4b913afa2835fc5", "text": "RDF, the high calorific value fraction of MSW obtained by conventional separation systems, can be employed in technological plants (mainly cement kilns) in order to obtain a useful energy recovery. It is interesting and important to evaluate this possibility within the general framework of waste-to-energy solutions. The solution must be assessed on the basis of different aspects, namely: technological features and clinker characteristics; local atmospheric pollution; the effects of RDF used in cement kilns on the generation of greenhouse gases; the economics of conventional solid fuels substitution and planning perspectives, from the point of view of the destination of RDF and optimal cement kiln policy. The different experiences of this issue throughout Europe are reviewed, and some applications within Italy are also been considered. 
The main findings of the study are that the use of RDF in cement kilns instead of coal or coke offers environmental benefits in terms of greenhouse gases, while the formation of conventional gaseous pollutants is not a critical aspect. Indeed, the generation of nitrogen oxides can probably be lower because of lower flame temperatures or lower air excess. The presence of chlorinated micro-pollutants is not influenced by the presence of RDF in fuel, whereas depending on the quality of the RDF, some problems could arise compared to the substituted fuel as far as heavy metals are concerned, chiefly the more volatile ones.", "title": "" }, { "docid": "4dcaed57f837c76137518b52b38c0eab", "text": "We describe a simple modification of neural networks which consists in extending the commonly used linear layer structure to an arbitrary graph structure. This allows us to combine the benefits of convolutional neural networks with the benefits of regular networks. The joint model has only a small increase in parameter size and training and decoding time are virtually unaffected. We report significant improvements over very strong baselines on two LVCSR tasks and one speech activity detection task.", "title": "" }, { "docid": "9a55e9bafa98f01a0ea7f36a9764f8c2", "text": "AIM\nTo determine socio-demographic features and criminal liability of individuals who committed filicide in Turkey.\n\n\nMETHOD\nThe study involved 85 cases of filicide evaluated by the 4th Specialized Board of the Institute of Forensic Medicine in Istanbul in the 1995-2000 period. We assessed the characteristics of parents who committed filicide (age, sex, education level, employment status, and criminal liability) and children victims (age, sex, own or stepchild), as well as the causes of death.\n\n\nRESULTS\nThere were 85 parents who committed filicide (41 fathers and 44 mothers) and 96 children victims. The mean age of mothers who committed filicide (52% of filicides) was 26.5-/+7.7 years, and the mean age of fathers (48% of filicides) was 36.1-/+10.0 years (t=-5.00, p<0.001). Individuals diagnosed with psychiatric disturbances, such as schizophrenia (61%), major depression (22%), imbecility (10%), and mild mental retardation (7%), were not subject to criminal liability. Almost half of parents who committed filicide were unemployed and illiterate.\n\n\nCONCLUSION\nFilicide in Turkey was equally committed by mothers and fathers. More than half of the parents were diagnosed with psychiatric disorders and came from disadvantageous socioeconomic environments, where unemployment and illiteracy rates are highly above the average of Turkey.", "title": "" }, { "docid": "06ca9b3cdeeae59e67d25235ee410f73", "text": "Since many years ago, the scientific community is concerned about how to increase the accuracy of different classification methods, and major achievements have been made so far. Besides this issue, the increasing amount of data that is being generated every day by remote sensors raises more challenges to be overcome. In this work, a tool within the scope of InterIMAGE Cloud Platform (ICP), which is an open-source, distributed framework for automatic image interpretation, is presented. The tool, named ICP: Data Mining Package, is able to perform supervised classification procedures on huge amounts of data, usually referred as big data, on a distributed infrastructure using Hadoop MapReduce. 
The tool has four classification algorithms implemented, taken from WEKA’s machine learning library, namely: Decision Trees, Naïve Bayes, Random Forest and Support Vector Machines (SVM). The results of an experimental analysis using a SVM classifier on data sets of different sizes for different cluster configurations demonstrates the potential of the tool, as well as aspects that affect its performance. * Corresponding author", "title": "" }, { "docid": "7b385edcbb0e3fa5bfffca2e1a9ecf13", "text": "A goal of runtime software-fault monitoring is to observe software behavior to determine whether it complies with its intended behavior. Monitoring allows one to analyze and recover from detected faults, providing additional defense against catastrophic failure. Although runtime monitoring has been in use for over 30 years, there is renewed interest in its application to fault detection and recovery, largely because of the increasing complexity and ubiquitous nature of software systems. We present taxonomy that developers and researchers can use to analyze and differentiate recent developments in runtime software fault-monitoring approaches. The taxonomy categorizes the various runtime monitoring research by classifying the elements that are considered essential for building a monitoring system, i.e., the specification language used to define properties; the monitoring mechanism that oversees the program's execution; and the event handler that captures and communicates monitoring results. After describing the taxonomy, the paper presents the classification of the software-fault monitoring systems described in the literature.", "title": "" }, { "docid": "9aab4a607de019226e9465981b82f9b8", "text": "Color is frequently used to encode values in visualizations. For color encodings to be effective, the mapping between colors and values must preserve important differences in the data. However, most guidelines for effective color choice in visualization are based on either color perceptions measured using large, uniform fields in optimal viewing environments or on qualitative intuitions. These limitations may cause data misinterpretation in visualizations, which frequently use small, elongated marks. Our goal is to develop quantitative metrics to help people use color more effectively in visualizations. We present a series of crowdsourced studies measuring color difference perceptions for three common mark types: points, bars, and lines. Our results indicate that peoples' abilities to perceive color differences varies significantly across mark types. Probabilistic models constructed from the resulting data can provide objective guidance for designers, allowing them to anticipate viewer perceptions in order to inform effective encoding design.", "title": "" }, { "docid": "2b6c016395d92ef20c4e316a35a7ecb8", "text": "Recently, the low-cost Microsoft Kinect sensor, which can capture real-time high-resolution RGB and depth visual information, has attracted increasing attentions for a wide range of applications in computer vision. Existing techniques extract hand-tuned features from the RGB and the depth data separately and heuristically fuse them, which would not fully exploit the complementarity of both data sources. In this paper, we introduce an adaptive learning methodology to automatically extract (holistic) spatio-temporal features, simultaneously fusing the RGB and depth information, from RGBD video data for visual recognition tasks. 
We address this as an optimization problem using our proposed restricted graph-based genetic programming (RGGP) approach, in which a group of primitive 3D operators are first randomly assembled as graph-based combinations and then evolved generation by generation by evaluating on a set of RGBD video samples. Finally the best-performed combination is selected as the (near-)optimal representation for a pre-defined task. The proposed method is systematically evaluated on a new hand gesture dataset, SKIG, that we collected ourselves and the public MSRDailyActivity3D dataset, respectively. Extensive experimental results show that our approach leads to significant advantages compared with state-of-the-art handcrafted and machine-learned features.", "title": "" }, { "docid": "4ad169a555cce3617b6ec30eff38bd6e", "text": "This work introduces Passphone, a new smartphone-based authentication scheme that outsources user verification to a trusted third party without sacrificing privacy: neither can the trusted third party learn the relation between users and service providers, nor can service providers learn those of their users to others. When employed as a second factor in conjunction with, for instance, passwords as a first factor, our scheme maximizes the deployability of two-factor authentication for service providers while maintaining user privacy. We conduct a twofold formal analysis of our scheme, the first regarding its general security, and the second regarding anonymity and unlinkability of its users. Moreover, we provide an automatic analysis using AVISPA, a comparative evaluation to existing schemes under Bonneau et al.’s framework, and an evaluation of a prototypical implementation.", "title": "" }, { "docid": "e4ca92179277334d9113a5580be37998", "text": "This paper presents a systematic design approach for low-profile UWB body-of-revolution (BoR) monopole antennas with specified radiation objectives and size constraints. The proposed method combines a random walk scheme, the genetic algorithm, and a BoR moment method analysis for antenna shape optimization. A weighted global cost function, which minimizes the difference between potential optimal points and a utopia point (optimal design combining 3 different objectives) within the criterion space, is adapted. A 24'' wide and 6'' tall aperture was designed operating from low VHF frequencies up to 2 GHz. This optimized antenna shape reaches -15 dBi gain at 41 MHz on a ground plane and is only λ/12 in aperture width and λ/50 in height at this frequency. The same antenna achieves VSWR <; 3 from 210 MHz up to at least 2 GHz. Concurrently, it maintains a realized gain of ~5 dBi with moderate oscillations across the band of interest. A resistive treatment was further applied at the top antenna rim to improve matching and pattern stability. Measurements are provided for validation of the design. Of importance is that the optimized aperture delivers a larger impedance bandwidth as well as more uniform gain and pattern when compared to a previously published inverted-hat antenna of the same size.", "title": "" }, { "docid": "e20dbb2dfb6820d27fc1639b8ea1393d", "text": "A novel high step-up dc-dc converter with coupled-inductor and switched-capacitor techniques is proposed in this paper. The capacitors are charged in parallel and are discharged in series by the coupled inductor, stacking on the output capacitor. Thus, the proposed converter can achieve high step-up voltage gain with appropriate duty ratio. 
Besides, the voltage spike on the main switch can be clamped. Therefore, low on-state resistance RDS(ON) of the main switch can be adopted to reduce the conduction loss. The efficiency can be improved. The operating principle and steady-state analyses are discussed in detail. Finally, a prototype circuit with 24-V input voltage, 400-V output voltage, and 200-W output power is implemented in the laboratory. Experiment results confirm the analysis and advantages of the proposed converter.", "title": "" }, { "docid": "a635a390498d068e9664a9da37eaa6b9", "text": "This paper studies two refinements to the method of factor forecasting. First, we consider the method of quadratic principal components that allows the link function between the predictors and the factors to be non-linear. Second, the factors used in the forecasting equation are estimated in a way to take into account that the goal is to forecast a specific series. This is accomplished by applying the method of principal components to ‘targeted predictors’ selected using hard and soft thresholding rules. Our three main findings can be summarized as follows. First, we find improvements at all forecast horizons over the current diffusion index forecasts by estimating the factors using fewer but informative predictors. Allowing for non-linearity often leads to additional gains. Second, forecasting the volatile one month ahead inflation warrants a high degree of targeting to screen out the noisy predictors. A handful of variables, notably relating to housing starts and interest rates, are found to have systematic predictive power for inflation at all horizons. Third, the variables chosen as targeted predictors selected by both soft and hard thresholding changes with the forecast horizon and the sample period. Holding the set of predictors fixed as is the current practice of factor forecasting is unnecessarily restrictive. ∗Department of Economics, NYU, 269 Mercer St, New York, NY 10003 Email: Jushan.Bai@nyu.edu. †Department of Economics, University of Michigan, Ann Arbor, MI 48109 Email: Serena.Ng@umich.edu We would like to thank Jeremy Piger (discussant) and conference participants for helpful comments. We also acknowledge financial support from the NSF (grants SES-0137084, SES-0136923, SES-0549978)", "title": "" }, { "docid": "7da0a472f0a682618eccbfd4229ca14f", "text": "A Search Join is a join operation which extends a user-provided table with additional attributes based on a large corpus of heterogeneous data originating from the Web or corporate intranets. Search Joins are useful within a wide range of application scenarios: Imagine you are an analyst having a local table describing companies and you want to extend this table with attributes containing the headquarters, turnover, and revenue of each company. Or imagine you are a film enthusiast and want to extend a table describing films with attributes like director, genre, and release date of each film. This article presents the Mannheim Search Join Engine which automatically performs such table extension operations based on a large corpus of Web data. Given a local table, the Mannheim Search Join Engine searches the corpus for additional data describing the entities contained in the input table. The discovered data is then joined with the local table and is consolidated using schema matching and data fusion techniques. As result, the user is presented with an extended table and given the opportunity to examine the provenance of the added data. 
We evaluate the Mannheim Search Join Engine using heterogeneous data originating from over one million different websites. The data corpus consists of HTML tables, as well as Linked Data and Microdata annotations which are converted into tabular form. Our experiments show that the Mannheim Search Join Engine achieves a coverage close to 100% and a precision of around 90% for the tasks of extending tables describing cities, companies, countries, drugs, books, films, and songs.", "title": "" }, { "docid": "4dc89a72df7859af65b7deac167230a2", "text": "The rapid expansion of the web is causing the constant growth of information, leading to several problems such as increased difficulty of extracting potentially useful knowledge. Web content mining confronts this problem gathering explicit information from different web sites for its access and knowledge discovery. Query interfaces of web databases share common building blocks. After extracting information with parsing approach, we use a new data mining algorithm to match a large number of schemas in databases at a time. Using this algorithm increases the speed of information matching. In addition, instead of simple 1:1 matching, they do complex (m:n) matching between query interfaces. In this paper we present a novel correlation mining algorithm that matches correlated attributes with smaller cost. This algorithm uses Jaccard measure to distinguish positive and negative correlated attributes. After that, system matches the user query with different query interfaces in special domain and finally chooses the nearest query interface with user query to answer to it. Keywords—Content mining, complex matching, correlation mining, information extraction.", "title": "" }, { "docid": "7e788eb9ff8fd10582aa94a89edb10a2", "text": "This paper recasts the problem of feature location in source code as a decision-making problem in the presence of uncertainty. The solution to the problem is formulated as a combination of the opinions of different experts. The experts in this work are two existing techniques for feature location: a scenario-based probabilistic ranking of events and an information-retrieval-based technique that uses latent semantic indexing. The combination of these two experts is empirically evaluated through several case studies, which use the source code of the Mozilla Web browser and the Eclipse integrated development environment. The results show that the combination of experts significantly improves the effectiveness of feature location as compared to each of the experts used independently", "title": "" }, { "docid": "38f289b085f2c6e2d010005f096d8fd7", "text": "We present easy-to-use TensorFlow Hub sentence embedding models having good task transfer performance. Model variants allow for trade-offs between accuracy and compute resources. We report the relationship between model complexity, resources, and transfer performance. Comparisons are made with baselines without transfer learning and to baselines that incorporate word-level transfer. Transfer learning using sentence-level embeddings is shown to outperform models without transfer learning and often those that use only word-level transfer. We show good transfer task performance with minimal training data and obtain encouraging results on word embedding association tests (WEAT) of model bias.", "title": "" }, { "docid": "8fb05a1b41756ff62a3a3b987cf37b0c", "text": "This paper considers the task of locating articulated poses of multiple robots in images. 
Our approach simultaneously infers the number of robots in a scene, identifies joint locations and estimates sparse depth maps around joint locations. The proposed method applies staged convolutional feature detectors to 2D image inputs and computes robot instance masks using a recurrent network architecture. In addition, regression maps of most likely joint locations in pixel coordinates together with depth information are computed. Compositing 3D robot joint kinematics is accomplished by applying masks to joint readout maps. Our end-to-end formulation is in contrast to previous work in which the composition of robot joints into kinematics is performed in a separate postprocessing step. Despite the fact that our models are trained on artificial data, we demonstrate generalizability to real world images.", "title": "" }, { "docid": "ec0733962301d6024da773ad9d0f636d", "text": "This paper focuses on the design, fabrication and characterization of unimorph actuators for a microaerial flapping mechanism. PZT-5H and PZN-PT are investigated as piezoelectric layers in the unimorph actuators. Design issues for microaerial flapping actuators are discussed, and criteria for the optimal dimensions of actuators are determined. For low power consumption actuation, a square wave based electronic driving circuit is proposed. Fabricated piezoelectric unimorphs are characterized by an optical measurement system in quasi-static and dynamic mode. Experimental performance of PZT5H and PZN-PT based unimorphs is compared with desired design specifications. A 1 d.o.f. flapping mechanism with a PZT-5H unimorph is constructed, and 180◦ stroke motion at 95 Hz is achieved. Thus, it is shown that unimorphs could be promising flapping mechanism actuators.", "title": "" }, { "docid": "56b8be88bcd56ce8fd730947bb9437fc", "text": "Cross site scripting (XSS) is one of the major threats to the web application security, where the research is still underway for an effective and useful way to analyse the source code of web application and removes this threat. XSS occurs by injecting the malicious scripts into web application and it can lead to significant violations at the site or for the user. Several solutions have been recommended for their detection. However, their results do not appear to be effective enough to resolve the issue. This paper recommended a methodology for the detection of XSS from the PHP web application using genetic algorithm (GA) and static analysis. The methodology enhances the earlier approaches of determining XSS vulnerability in the web application by eliminating the infeasible paths from the control flow graph (CFG). This aids in reducing the false positive rate in the outcomes. The results of the experiments indicated that our methodology is more effectual in detecting XSS vulnerability from the PHP web application compared to the earlier studies, in terms of the false positive rates and the concrete susceptible paths determined by GA Generator. Keywords—Web Application Security; Security Vulnerability; Web Testing; Cross Site Scripting; Genetic Algorithm", "title": "" } ]
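For reference, the Jaccard measure used in the correlation-mining passage above (for matching query-interface attributes) is simply the ratio of shared to combined items. A minimal implementation, with invented attribute sets:

```python
def jaccard(a, b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two attribute sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Hypothetical attribute sets from two book-search query interfaces.
print(jaccard({"author", "title", "isbn"}, {"author", "title", "publisher"}))  # 0.5
```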
scidocsrr
a4c43a33e3dc764786144dd80184562f
The Impact of Observational Learning and Electronic Word of Mouth on Consumer Purchase Decisions: The Moderating Role of Consumer Expertise and Consumer Involvement
[ { "docid": "def650b2d565f88a6404997e9e93d34f", "text": "Quality uncertainty and high search costs for identifying relevant information from an ocean of information may prevent customers from making purchases. Recognizing potential negative impacts of this search cost for quality information and relevant information, firms began to invest in creating a virtual community that enables consumers to share their opinions and experiences to reduce quality uncertainty, and in developing recommendation systems that help customers identify goods in which they might have an interest. However, not much is known regarding the effectiveness of these efforts. In this paper, we empirically investigate the impacts of recommendations and consumer feedback on sales based on data gathered from Amazon.com. Our results indicate that more recommendations indeed improve sales at Amazon.com; however, consumer ratings are not found to be related to sales. On the other hand, the number of consumer reviews is positively associated with sales. We also find that recommendations work better for less-popular books than for more-popular books. This is consistent with the search cost argument: a consumer’s search cost for less-popular books may be higher, and thus they may rely more on recommendations to locate a product of interest.", "title": "" }, { "docid": "ddad5569efe76dca3445e7e4d4aceafc", "text": "This study evaluates the current status of electronic word-of-mouth (eWOM) research through an exhaustive literature review of relevant articles. We have identified a total of 83 eWOM research articles published from 2001 through 2010. Published research into eWOM first appeared in peer-reviewed journals about ten years ago, and research has been steadily increasing. Among research topic areas, the impact of eWOM communication was the most researched topic in the last decade. We also found that individual and message were the two most frequently used units of analysis in eWOM studies. Survey, secondary data analysis, and mathematical modeling were the three main streams of research method. Finally, we found diverse theoretical approaches in understanding eWOM communication. We conclude this paper by identifying important trends in the eWOM literature to provide future research directions.", "title": "" }, { "docid": "c57cbe432fdab3f415d2c923bea905ff", "text": "Through Web-based consumer opinion platforms (e.g., epinions.com), the Internet enables customers to share their opinions on, and experiences with, goods and services with a multitude of other consumers; that is, to engage in electronic word-of-mouth (eWOM) communication. Drawing on findings from research on virtual communities and traditional word-of-mouth literature, a typology for motives of consumer online articulation is developed.", "title": "" }, { "docid": "b445de6f864c345d90162cb8b2527240", "text": "The growing popularity of online product review forums invites the development of models and metrics that allow firms to harness these new sources of information for decision support. Our work contributes in this direction by proposing a novel family of diffusion models that capture some of the unique aspects of the entertainment industry and testing their performance in the context of very early postrelease motion picture revenue forecasting. 
We show that the addition of online product review metrics to a benchmark model that includes prerelease marketing, theater availability and professional critic reviews substantially increases its forecasting accuracy; the forecasting accuracy of our best model outperforms that of several previously published models. In addition to its contributions in diffusion theory, our study reconciles some inconsistencies among previous studies with respect to what online review metrics are statistically significant in forecasting entertainment good sales.", "title": "" } ]
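The passage above refers to a "novel family of diffusion models" for early box-office forecasting without spelling the models out. Purely as a point of reference, the sketch below simulates the classic Bass diffusion model, the usual baseline for this kind of adoption/revenue curve; it is not the authors' model, and the parameters are made up.

```python
def bass_adoption(m, p, q, periods):
    """Per-period adoptions under the discrete Bass diffusion model.

    m: market potential, p: coefficient of innovation, q: coefficient of
    imitation. Returns the number of new adopters (e.g. tickets) per period.
    """
    cumulative, new_per_period = 0.0, []
    for _ in range(periods):
        f = cumulative / m                      # fraction of market reached
        adopters = (p + q * f) * (m - cumulative)
        new_per_period.append(adopters)
        cumulative += adopters
    return new_per_period

# Illustrative parameters only -- not fitted to any real movie.
weekly = bass_adoption(m=1_000_000, p=0.03, q=0.4, periods=8)
print([round(x) for x in weekly])
```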
[ { "docid": "ff0d24ef13efa2853befdd89ca123611", "text": "In Information Systems research there are a growing number of studies that must necessarily draw upon the contexts, experiences and narratives of practitioners. This calls for research approaches that are qualitative and may also be interpretive. These may include case studies or action research projects. For some researchers, particularly those with limited experience of interpretive qualitative research, there may be a lack of confidence when faced with the prospect of collecting and analysing the data from studies of this kind. In this paper we reflect on the lessons learned from using Grounded Theory in an interpretive case study based piece of research. The paper discusses the lessons and provides guidance for the use of the method in interpretive studies.", "title": "" }, { "docid": "af2a1083436450b9147eb7b51be5c761", "text": "Over the past century, various value models have been proposed. To determine which value model best predicts prosocial behavior, mental health, and pro-environmental behavior, we subjected seven value models to a hierarchical regression analysis. A sample of University students (N = 271) completed the Portrait Value Questionnaire (Schwartz et al., 2012), the Basic Value Survey (Gouveia et al., 2008), and the Social Value Orientation scale (Van Lange et al., 1997). Additionally, they completed the Values Survey Module (Hofstede and Minkov, 2013), Inglehart's (1977) materialism-postmaterialism items, the Study of Values, fourth edition (Allport et al., 1960; Kopelman et al., 2003), and the Rokeach (1973) Value Survey. However, because the reliability of the latter measures was low, only the PVQ-RR, the BVS, and the SVO where entered into our analysis. Our results provide empirical evidence that the PVQ-RR is the strongest predictor of all three outcome variables, explaining variance above and beyond the other two instruments in almost all cases. The BVS significantly predicted prosocial and pro-environmental behavior, while the SVO only explained variance in pro-environmental behavior.", "title": "" }, { "docid": "2620ce1c5ef543fded3a02dfb9e5c3f8", "text": "Artificial bee colony (ABC) is the one of the newest nature inspired heuristics for optimization problem. Like the chaos in real bee colony behavior, this paper proposes new ABC algorithms that use chaotic maps for parameter adaptation in order to improve the convergence characteristics and to prevent the ABC to get stuck on local solutions. This has been done by using of chaotic number generators each time a random number is needed by the classical ABC algorithm. Seven new chaotic ABC algorithms have been proposed and different chaotic maps have been analyzed in the benchmark functions. It has been detected that coupling emergent results in different areas, like those of ABC and complex dynamics, can improve the quality of results in some optimization problems. It has been also shown that, the proposed methods have somewhat increased the solution quality, that is in some cases they improved the global searching capability by escaping the local solutions. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "edbc09ea4ad9792abd9aa05176c17d42", "text": "The therapeutic nature of the nurse-patient relationship is grounded in an ethic of caring. Florence Nightingale envisioned nursing as an art and a science...a blending of humanistic, caring presence with evidence-based knowledge and exquisite skill. 
In this article, the author explores the caring practice of nursing as a framework for understanding moral accountability and integrity in practice. Being morally accountable and responsible for one's judgment and actions is central to the nurse's role as a moral agent. Nurses who practice with moral integrity possess a strong sense of themselves and act in ways consistent with what they understand is the right thing to do. A review of the literature related to caring theory, the concepts of moral accountability and integrity, and the documents that speak of these values and concepts in professional practice (eg, Code of Ethics for Nurses with Interpretive Statements, Nursing's Social Policy Statement) are presented in this article.", "title": "" }, { "docid": "87222f419605df6e1d63d60bd26c5343", "text": "Video Games are boring when they are too easy and frustrating when they are too hard. While most singleplayer games allow players to adjust basic difficulty (easy, medium, hard, insane), their overall level of challenge is often static in the face of individual player input. This lack of flexibility can lead to mismatches between player ability and overall game difficulty. In this paper, we explore the computational and design requirements for a dynamic difficulty adjustment system. We present a probabilistic method (drawn predominantly from Inventory Theory) for representing and reasoning about uncertainty in games. We describe the implementation of these techniques, and discuss how the resulting system can be applied to create flexible interactive experiences that adjust on the fly. Introduction Video games are designed to generate engaging experiences: suspenseful horrors, whimsical amusements, fantastic adventures. But unlike films, books, or televised media – which often have similar experiential goals – video games are interactive. Players create meaning by interacting with the game’s internal systems. One such system is inventory – the stock of items that a player collects and carries throughout the game world. The relative abundance or scarcity of inventory items has a direct impact on the player’s experience. As such, games are explicitly designed to manipulate the exchange of resources between world and player. [Simpson, 2001] This network of producer-consumer relationships can be viewed as an economy – or more broadly, as a dynamic system [Castronova, 2000, Luenberger, 79]. 1 Inventory items for “first-person shooters” include health, weapons, ammunition and power-ups like shielding or temporary invincibility. 2 A surplus of ammunition affords experimentation and “shoot first” tactics, while limited access to recovery items (like health packs) will promote a more cautious approach to threatening situations. Game developers iteratively refine these systems based on play testing feedback – tweaking behaviors and settings until the game is balanced. While balancing, they often analyze systems intuitively by tracking specific identifiable patterns or types of dynamic activity. It is a difficult and time consuming process [Rollings and Adams, 2003]. While game balancing and tuning can’t be automated, directed mathematical analysis can reveal deeper structures and relationships within a game system. With the right tools, researchers and developers can calculate relationships in less time, with greater accuracy. In this paper, we describe a first step towards such tools. Hamlet is a Dynamic Difficulty Adjustment (DDA) system built using Valve’s Half Life game engine. 
Using techniques drawn from Inventory Theory and Operations Research, Hamlet analyzes and adjust the supply and demand of game inventory in order to control overall game difficulty.", "title": "" }, { "docid": "a77e5f81c925e2f170df005b6576792b", "text": "Recommendation systems utilize data analysis techniques to the problem of helping users find the items they would like. Example applications include the recommendation systems for movies, books, CDs and many others. As recommendation systems emerge as an independent research area, the rating structure plays a critical role in recent studies. Among many alternatives, the collaborative filtering algorithms are generally accepted to be successful to estimate user ratings of unseen items and then to derive proper recommendations. In this paper, we extend the concept of single criterion ratings to multi-criteria ones, i.e., an item can be evaluated in many different aspects. For example, the goodness of a restaurant can be evaluated in terms of its food, decor, service and cost. Since there are usually conflicts among different criteria, the recommendation problem cannot be formulated as an optimization problem any more. Instead, we propose in this paper to use data query techniques to solve this multi-criteria recommendation problem. Empirical studies show that our approach is of both theoretical and practical values.", "title": "" }, { "docid": "c91f7b4b02faaca93fb74768c475f8bf", "text": "Data mining is an interdisciplinary subfield of computer science involving methods at the intersection of artificial intelligence, machine learning and statistics. One of the data mining tasks is anomaly detection which is the analysis of large quantities of data to identify items, events or observations which do not conform to an expected pattern. Anomaly detection is applicable in a variety of domains, e.g., fraud detection, fault detection, system health monitoring but this article focuses on application of anomaly detection in the field of network intrusion detection.The main goal of the article is to prove that an entropy-based approach is suitable to detect modern botnet-like malware based on anomalous patterns in network. This aim is achieved by realization of the following points: (i) preparation of a concept of original entropy-based network anomaly detection method, (ii) implementation of the method, (iii) preparation of original dataset, (iv) evaluation of the method.", "title": "" }, { "docid": "b2f4295cc36550bbafdb4b94f8fbee7c", "text": "Novel view synthesis aims to synthesize new images from different viewpoints of given images. Most of previous works focus on generating novel views of certain objects with a fixed background. However, for some applications, such as virtual reality or robotic manipulations, large changes in background may occur due to the egomotion of the camera. Generated images of a large-scale environment from novel views may be distorted if the structure of the environment is not considered. In this work, we propose a novel fully convolutional network, that can take advantage of the structural information explicitly by incorporating the inverse depth features. The inverse depth features are obtained from CNNs trained with sparse labeled depth values. This framework can easily fuse multiple images from different viewpoints. To fill the missing textures in the generated image, adversarial loss is applied, which can also improve the overall image quality. Our method is evaluated on the KITTI dataset. 
The results show that our method can generate novel views of large-scale scenes without distortion. The effectiveness of our approach is demonstrated through qualitative and quantitative evaluation.", "title": "" }, { "docid": "be58d822e03a3443b607c478b721095f", "text": "Cerebral amyloid angiopathy (CAA) is pathologically defined as the deposition of amyloid protein, most commonly the amyloid β peptide (Aβ), primarily within the media and adventitia of small and medium-sized arteries of the leptomeninges, cerebral and cerebellar cortex. This deposition likely reflects an imbalance between Aβ production and clearance within the brain and leads to weakening of the overall structure of brain small vessels, predisposing patients to lobar intracerebral haemorrhage (ICH), brain ischaemia and cognitive decline. CAA is associated with markers of small vessel disease, like lobar microbleeds and white matter hyperintensities on magnetic resonance imaging. Therefore, it can now be diagnosed during life with reasonable accuracy by clinical and neuroimaging criteria. Despite the lack of a specific treatment for this condition, the detection of CAA may help in the management of patients, regarding the prevention of major haemorrhagic complications and genetic counselling. This review discusses recent advances in our understanding of the pathophysiology, detection and management of CAA.", "title": "" }, { "docid": "3d9fe9c30d09a9e66f7339b0ad24edb7", "text": "Due to progress in wired and wireless home networking, sensor networks, networked appliances, mechanical and control engineering, and computers, we can build smart homes, and many smart home projects are currently proceeding throughout the world. However, we have to be careful not to repeat the same mistake that was made with home automation technologies that were booming in the 1970s. That is, [total?] automation should not be a goal of smart home technologies. I believe the following points are important in construction of smart homes from users' viewpoints: development of interface technologies between humans and systems for detection of human intentions, feelings, and situations; improvement of system knowledge; and extension of human activity support outside homes to the scopes of communities, towns, and cities.", "title": "" }, { "docid": "a583b48a8eb40a9e88a5137211f15bce", "text": "The deterioration of cancellous bone structure due to aging and disease is characterized by a conversion from plate elements to rod elements. Consequently the terms \"rod-like\" and \"plate-like\" are frequently used for a subjective classification of cancellous bone. In this work a new morphometric parameter called Structure Model Index (SMI) is introduced, which makes it possible to quantify the characteristic form of a three-dimensionally described structure in terms of the amount of plates and rods composing the structure. The SMI is calculated by means of three-dimensional image analysis based on a differential analysis of the triangulated bone surface. For an ideal plate and rod structure the SMI value is 0 and 3, respectively, independent of the physical dimensions. For a structure with both plates and rods of equal thickness the value lies between 0 and 3, depending on the volume ratio of rods and plates. The SMI parameter is evaluated by examining bone biopsies from different skeletal sites. The bone samples were measured three-dimensionally with a micro-CT system.
Samples with the same volume density but varying trabecular architecture can uniquely be characterized with the SMI. Furthermore the SMI values were found to correspond well with the perceived structure type.", "title": "" }, { "docid": "427b3cae516025381086021bc66f834e", "text": "PhishGuru is an embedded training system that teaches users to avoid falling for phishing attacks by delivering a training message when the user clicks on the URL in a simulated phishing email. In previous lab and real-world experiments, we validated the effectiveness of this approach. Here, we extend our previous work with a 515-participant, real-world study in which we focus on long-term retention and the effect of two training messages. We also investigate demographic factors that influence training and general phishing susceptibility. Results of this study show that (1) users trained with PhishGuru retain knowledge even after 28 days; (2) adding a second training message to reinforce the original training decreases the likelihood of people giving information to phishing websites; and (3) training does not decrease users' willingness to click on links in legitimate messages. We found no significant difference between males and females in the tendency to fall for phishing emails both before and after the training. We found that participants in the 18--25 age group were consistently more vulnerable to phishing attacks on all days of the study than older participants. Finally, our exit survey results indicate that most participants enjoyed receiving training during their normal use of email.", "title": "" }, { "docid": "d47c5f2b5fea54e0f650869d0d45ac25", "text": "Time-varying, smooth trajectory estimation is of great interest to the vision community for accurate and well behaving 3D systems. In this paper, we propose a novel principal component local regression filter acting directly on the Riemannian manifold of unit dual quaternions DH1. We use a numerically stable Lie algebra of the dual quaternions together with exp and log operators to locally linearize the 6D pose space. Unlike state of the art path smoothing methods which either operate on SO (3) of rotation matrices or the hypersphere H1 of quaternions, we treat the orientation and translation jointly on the dual quaternion quadric in the 7-dimensional real projective space RP7. We provide an outlier-robust IRLS algorithm for generic pose filtering exploiting this manifold structure. Besides our theoretical analysis, our experiments on synthetic and real data show the practical advantages of the manifold aware filtering on pose tracking and smoothing.", "title": "" }, { "docid": "697491cc059e471f0c97a840a2a9fca7", "text": "This paper presents a virtual reality (VR) simulator for four-arm disaster response robot OCTOPUS, which has high capable of both mobility and workability. OCTOPUS has 26 degrees of freedom (DOF) and is currently teleoperated by two operators, so it is quite difficult to operate OCTOPUS. Thus, we developed a VR simulator for training operation, developing operator support system and control strategy. Compared with actual robot and environment, VR simulator can reproduce them at low cost and high efficiency. The VR simulator consists of VR environment and human-machine interface such as operation-input and video- and sound-output, based on robot operation system (ROS) and Gazebo. To enhance work performance, we implement indicators and data collection functions. 
Four tasks, namely rough terrain passing, high-step climbing, obstacle stepping over, and object transport, were conducted to evaluate OCTOPUS itself and our VR simulator. The results indicate that operators could complete all the tasks, but the success rate differed across tasks. Smooth and stable operations increased the work performance, but sudden changes and oscillation of operation degraded it. Coordinating the multiple joints adequately is quite important for executing tasks more efficiently.", "title": "" }, { "docid": "da536111acc1b7152f445fb7e6c14091", "text": "Nanonization is a simple and effective method to improve dissolution rate and oral bioavailability of drugs with poor water solubility. There is growing interest to downscale the nanocrystal production to enable early preclinical evaluation of new drug candidates when compound availability is scarce. The purpose of the present study was to investigate laser fragmentation to form nanosuspensions in aqueous solution of the insoluble model drug megestrol acetate (MA) using very small quantities of the drug. Laser fragmentation was obtained by focusing femtosecond (fs) or nanosecond (ns) laser radiation on a magnetically stirred MA suspension in water or aqueous solution of a stabilizing agent. The size distribution and physicochemical properties of the drug nanoparticles were characterized, and the in vitro dissolution and in vivo oral pharmacokinetics of a laser fragmented formulation were evaluated. An MA nanosuspension was also prepared by media milling for comparison purposes. For both laser radiations, smaller particles were obtained as the laser power was increased, but at a cost of higher degradation. Significant nanonization was achieved after a 30-min fs laser treatment at 250 mW and a 1-h ns laser treatment at 2500 mW. The degradation induced by the laser process of the drug was primarily oxidative in nature. The crystal phase of the drug was maintained, although partial loss of crystallinity was observed. The in vitro dissolution rate and in vivo bioavailability of the laser fragmented formulation were similar to those obtained with the nanosuspension prepared by media milling, and significantly improved compared to the coarse drug powder. It follows that this laser nanonization method has potential to be used for the preclinical evaluation of new drug candidates.", "title": "" }, { "docid": "752cf1c7cefa870c01053d87ff4f445c", "text": "Cannabidiol (CBD) represents a new promising drug due to a wide spectrum of pharmacological actions. In order to relate CBD clinical efficacy to its pharmacological mechanisms of action, we performed a bibliographic search on PUBMED about all clinical studies investigating the use of CBD as a treatment of psychiatric symptoms. Findings to date suggest that (a) CBD may exert antipsychotic effects in schizophrenia mainly through facilitation of endocannabinoid signalling and cannabinoid receptor type 1 antagonism; (b) CBD administration may exhibit acute anxiolytic effects in patients with generalised social anxiety disorder through modification of cerebral blood flow in specific brain sites and serotonin 1A receptor agonism; (c) CBD may reduce withdrawal symptoms and cannabis/tobacco dependence through modulation of endocannabinoid, serotoninergic and glutamatergic systems; (d) the preclinical pro-cognitive effects of CBD still lack significant results in psychiatric disorders.
In conclusion, current evidences suggest that CBD has the ability to reduce psychotic, anxiety and withdrawal symptoms by means of several hypothesised pharmacological properties. However, further studies should include larger randomised controlled samples and investigate the impact of CBD on biological measures in order to correlate CBD's clinical effects to potential modifications of neurotransmitters signalling and structural and functional cerebral changes.", "title": "" }, { "docid": "45ef23f40fd4241b58b8cb0810695785", "text": "Two-wheeled wheelchairs are considered highly nonlinear and complex systems. The systems mimic a double-inverted pendulum scenario and will provide better maneuverability in confined spaces and also to reach higher level of height for pick and place tasks. The challenge resides in modeling and control of the two-wheeled wheelchair to perform comparably to a normal four-wheeled wheelchair. Most common modeling techniques have been accomplished by researchers utilizing the basic Newton's Laws of motion and some have used 3D tools to model the system where the models are much more theoretical and quite far from the practical implementation. This article is aimed at closing the gap between the conventional mathematical modeling approaches where the integrated 3D modeling approach with validation on the actual hardware implementation was conducted. To achieve this, both nonlinear and a linearized model in terms of state space model were obtained from the mathematical model of the system for analysis and, thereafter, a 3D virtual prototype of the wheelchair was developed, simulated, and analyzed. This has increased the confidence level for the proposed platform and facilitated the actual hardware implementation of the two-wheeled wheelchair. Results show that the prototype developed and tested has successfully worked within the specific requirements established.", "title": "" }, { "docid": "819753a8799135fc44dd95e478ebeaf9", "text": "Main memories are becoming sufficiently large that most OLTP databases can be stored entirely in main memory, but this may not be the best solution. OLTP workloads typically exhibit skewed access patterns where some records are hot (frequently accessed) but many records are cold (infrequently or never accessed). It is more economical to store the coldest records on secondary storage such as flash. As a first step towards managing cold data in databases optimized for main memory we investigate how to efficiently identify hot and cold data. We propose to log record accesses - possibly only a sample to reduce overhead - and perform offline analysis to estimate record access frequencies. We present four estimation algorithms based on exponential smoothing and experimentally evaluate their efficiency and accuracy. We find that exponential smoothing produces very accurate estimates, leading to higher hit rates than the best caching techniques. Our most efficient algorithm is able to analyze a log of 1B accesses in sub-second time on a workstation-class machine.", "title": "" }, { "docid": "2e42e1f9478fb2548e39a92c5bacbaab", "text": "In this paper, we consider a fully automatic makeup recommendation system and propose a novel examples-rules guided deep neural network approach. The framework consists of three stages. First, makeup-related facial traits are classified into structured coding. 
Second, these facial traits are fed into examples-rules guided deep neural recommendation model which makes use of the pairwise of Before-After images and the makeup artist knowledge jointly. Finally, to visualize the recommended makeup style, an automatic makeup synthesis system is developed as well. To this end, a new Before-After facial makeup database is collected and labeled manually, and the knowledge of makeup artist is modeled by knowledge base system. The performance of this framework is evaluated through extensive experimental analyses. The experiments validate the automatic facial traits classification, the recommendation effectiveness in statistical and perceptual ways and the makeup synthesis accuracy which outperforms the state of the art methods by large margin. It is also worthy to note that the proposed framework is a pioneering fully automatic makeup recommendation systems to our best knowledge.", "title": "" }, { "docid": "7723c78b2ff8f9fdc285ee05b482efef", "text": "We describe our experience in developing a discourse-annotated corpus for community-wide use. Working in the framework of Rhetorical Structure Theory, we were able to create a large annotated resource with very high consistency, using a well-defined methodology and protocol. This resource is made publicly available through the Linguistic Data Consortium to enable researchers to develop empirically grounded, discourse-specific applications.", "title": "" } ]
scidocsrr
fcbe9a04dc40f479997af388ff4cf303
A learning style classification mechanism for e-learning
[ { "docid": "9c20658d8173101492554bcf8cf89687", "text": "Students are characterized by different learning styles, focusing on different types of information and processing this information in different ways. One of the desirable characteristics of a Web-based education system is that all the students can learn despite their different learning styles. To achieve this goal we have to detect how students learn: reflecting or acting; steadily or in fits and starts; intuitively or sensitively. In this work, we evaluate Bayesian networks at detecting the learning style of a student in a Web-based education system. The Bayesian network models different aspects of a student behavior while he/she works with this system. Then, it infers his/her learning styles according to the modeled behaviors. The proposed Bayesian model was evaluated in the context of an Artificial Intelligence Web-based course. The results obtained are promising as regards the detection of students learning styles. Different levels of precision were found for the different dimensions or aspects of a learning style. 2005 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "40d7847859a974d2a91cccab55ba625b", "text": "Programming question and answer (Q&A) websites, such as Stack Overflow, leverage the knowledge and expertise of users to provide answers to technical questions. Over time, these websites turn into repositories of software engineering knowledge. Such knowledge repositories can be invaluable for gaining insight into the use of specific technologies and the trends of developer discussions. Previous work has focused on analyzing the user activities or the social interactions in Q&A websites. However, analyzing the actual textual content of these websites can help the software engineering community to better understand the thoughts and needs of developers. In the article, we present a methodology to analyze the textual content of Stack Overflow discussions. We use latent Dirichlet allocation (LDA), a statistical topic modeling technique, to automatically discover the main topics present in developer discussions. We analyze these discovered topics, as well as their relationships and trends over time, to gain insights into the development community. Our analysis allows us to make a number of interesting observations, including: the topics of interest to developers range widely from jobs to version control systems to C# syntax; questions in some topics lead to discussions in other topics; and the topics gaining the most popularity over time are web development (especially jQuery), mobile applications (especially Android), Git, and MySQL.", "title": "" }, { "docid": "b6508d1f2b73b90a0cfe6399f6b44421", "text": "An alternative to land spreading of manure effluents is to mass-culture algae on the N and P present in the manure and convert manure N and P into algal biomass. The objective of this study was to determine how the fatty acid (FA) content and composition of algae respond to changes in the type of manure, manure loading rate, and to whether the algae was grown with supplemental carbon dioxide. Algal biomass was harvested weekly from indoor laboratory-scale algal turf scrubber (ATS) units using different loading rates of raw and anaerobically digested dairy manure effluents and raw swine manure effluent. Manure loading rates corresponded to N loading rates of 0.2 to 1.3 g TN m−2 day−1 for raw swine manure effluent and 0.3 to 2.3 g TN m−2 day−1 for dairy manure effluents. In addition, algal biomass was harvested from outdoor pilot-scale ATS units using different loading rates of raw and anaerobically digested dairy manure effluents. Both indoor and outdoor units were dominated by Rhizoclonium sp. FA content values of the algal biomass ranged from 0.6 to 1.5% of dry weight and showed no consistent relationship to loading rate, type of manure, or to whether supplemental carbon dioxide was added to the systems. FA composition was remarkably consistent among samples and >90% of the FA content consisted of 14:0, 16:0, 16:1ω7, 16:1ω9, 18:0, 18:1ω9, 18:2 ω6, and 18:3ω3.", "title": "" }, { "docid": "97e33cc9da9cb944c27d93bb4c09ef3d", "text": "Synchrophasor devices guarantee situation awareness for real-time monitoring and operational visibility of the smart grid. With their widespread implementation, significant challenges have emerged, especially in communication, data quality and cybersecurity. The existing literature treats these challenges as separate problems, when in reality, they have a complex interplay. 
This paper conducts a comprehensive review of quality and cybersecurity challenges for synchrophasors, and identifies the interdependencies between them. It also summarizes different methods used to evaluate the dependency and surveys how quality checking methods can be used to detect potential cyberattacks. In doing so, this paper serves as a starting point for researchers entering the fields of synchrophasor data analytics and security.", "title": "" }, { "docid": "796869acd15c4c44a59b0bc139f27841", "text": "This paper presents 1-bit CMOS full adder cell using standard static CMOS logic style. The comparison is taken out using several parameters like number of transistors, delay, power dissipation and power delay product (PDP). The circuits are designed at transistor level using 180 nm and 90nm CMOS technology. Various full adders are presented in this paper like Conventional CMOS (C-CMOS), Complementary pass transistor logic FA (CPL), Double pass transistor logic FA , Transmission gate FA (TGA), Transmission function FA, New 14T,10T, Hybrid CMOS, HPSC, 24T, LPFA (CPL), LPHS, Hybrid Full Adders.", "title": "" }, { "docid": "d6c34d138692851efdbb807a89d0fcca", "text": "Vaccine hesitancy reflects concerns about the decision to vaccinate oneself or one's children. There is a broad range of factors contributing to vaccine hesitancy, including the compulsory nature of vaccines, their coincidental temporal relationships to adverse health outcomes, unfamiliarity with vaccine-preventable diseases, and lack of trust in corporations and public health agencies. Although vaccination is a norm in the U.S. and the majority of parents vaccinate their children, many do so amid concerns. The proportion of parents claiming non-medical exemptions to school immunization requirements has been increasing over the past decade. Vaccine refusal has been associated with outbreaks of invasive Haemophilus influenzae type b disease, varicella, pneumococcal disease, measles, and pertussis, resulting in the unnecessary suffering of young children and waste of limited public health resources. Vaccine hesitancy is an extremely important issue that needs to be addressed because effective control of vaccine-preventable diseases generally requires indefinite maintenance of extremely high rates of timely vaccination. The multifactorial and complex causes of vaccine hesitancy require a broad range of approaches on the individual, provider, health system, and national levels. These include standardized measurement tools to quantify and locate clustering of vaccine hesitancy and better understand issues of trust; rapid, independent, and transparent review of an enhanced and appropriately funded vaccine safety system; adequate reimbursement for vaccine risk communication in doctors' offices; and individually tailored messages for parents who have vaccine concerns, especially first-time pregnant women. The potential of vaccines to prevent illness and save lives has never been greater. Yet, that potential is directly dependent on parental acceptance of vaccines, which requires confidence in vaccines, healthcare providers who recommend and administer vaccines, and the systems to make sure vaccines are safe.", "title": "" }, { "docid": "a8688afaad32401c6827d48e25750c43", "text": "We study how to improve the accuracy and running time of top-N recommendation with collaborative filtering (CF). 
Unlike existing works that use mostly rated items (which is only a small fraction in a rating matrix), we propose the notion of pre-use preferences of users toward a vast amount of unrated items. Using this novel notion, we effectively identify uninteresting items that were not rated yet but are likely to receive very low ratings from users, and impute them as zero. This simple-yet-novel zero-injection method applied to a set of carefully-chosen uninteresting items not only addresses the sparsity problem by enriching a rating matrix but also completely prevents uninteresting items from being recommended as top-N items, thereby improving accuracy greatly. As our proposed idea is method-agnostic, it can be easily applied to a wide variety of popular CF methods. Through comprehensive experiments using the Movielens dataset and MyMediaLite implementation, we successfully demonstrate that our solution consistently and universally improves the accuracies of popular CF methods (e.g., item-based CF, SVD-based CF, and SVD++) by two to five orders of magnitude on average. Furthermore, our approach reduces the running time of those CF methods by 1.2 to 2.3 times when its setting produces the best accuracy. The datasets and codes that we used in experiments are available at: https://goo.gl/KUrmip.", "title": "" }, { "docid": "d4820344d9c229ac15d002b667c07084", "text": "In this paper, we propose to integrate semantic similarity assessment in an edit distance algorithm, seeking to amend similarity judgments when comparing XML-based legal documents[3].", "title": "" }, { "docid": "3f05325680ecc8c826a77961281b9748", "text": "The purpose of this paper is to determine which variables influence consumers’ intentions towards purchasing natural cosmetics. Several variables are included in the regression analysis such as age, gender, consumers’ purchase tendency towards organic food, consumers’ new natural cosmetics brands and consumers’ tendency towards health consciousness. The data was collected through an online survey questionnaire using the purposive sample of 204 consumers from the Dubrovnik-Neretva County in March and April of 2015. Various statistical analyses were used such as binary logistic regression and correlation analysis. Binary logistic regression results show that gender, consumers’ purchase tendency towards organic food and consumers’ purchase tendency towards new natural cosmetics brands have an influence on consumer purchase intentions. However, consumers’ tendency towards health consciousness has no influence on consumers’ intentions towards purchasing natural cosmetics. Results of the correlation analysis indicate that there is a strong positive correlation between purchase intentions towards natural cosmetics and consumer references of natural cosmetics. 
The findings may be useful to online retailers, as well as to marketers and practitioners, to recognize and better understand the new trends that occur in the natural cosmetics industry.", "title": "" }, { "docid": "c5e0ba5e8ceb8c684366b4aae1a43dc2", "text": "This document proposes to make a contribution to the conceptualization and implementation of data recovery techniques through the abstraction of recovery methodologies and aspects that influence the process, relating human motivation to research needs, whether these are for auditing or computer science, allowing the generation of a classification of recovery techniques in the absence of the metadata provided by the filesystem; in this sense, file carving techniques have been proposed as a solution option. Finally, it is revealed that while many file carving techniques are being implemented in other tools, they are still in the research phase.", "title": "" }, { "docid": "c1a96dbed9373dddd0a7a07770395a7e", "text": "Mobile devices are increasingly the dominant Internet access technology. Nevertheless, high costs, data caps, and throttling are a source of widespread frustration, and a significant barrier to adoption in emerging markets. This paper presents Flywheel, an HTTP proxy service that extends the life of mobile data plans by compressing responses in-flight between origin servers and client browsers. Flywheel is integrated with the Chrome web browser and reduces the size of proxied web pages by 50% for a median user. We report measurement results from millions of users as well as experience gained during three years of operating and evolving the production", "title": "" }, { "docid": "848eee0774708928668d4896d321fe00", "text": "Machine learning is one of the most exciting recent technologies in Artificial Intelligence. Learning algorithms are used in many applications that we make use of daily. Every time a web search engine like Google or Bing is used to search the internet, one of the reasons that it works so well is because a learning algorithm, one implemented by Google or Microsoft, has learned how to rank web pages. Every time Facebook is used and it recognizes friends' photos, that's also machine learning. Spam filters in email save the user from having to wade through tons of spam email; that's also a learning algorithm. In this paper, a brief review and future prospect of the vast applications of machine learning has been made.", "title": "" }, { "docid": "a377b31c0cb702c058f577ca9c3c5237", "text": "Problem statement: Extensive research efforts in the area of Natural Language Processing (NLP) were focused on developing reading comprehension Question Answering systems (QA) for Latin-based languages such as English, French and German. Approach: However, little effort was directed towards the development of such systems for bidirectional languages such as Arabic, Urdu and Farsi. In general, QA systems are more sophisticated and more complex than Search Engines (SE) because they seek a specific and somewhat exact answer to the query. Results: Existing Arabic QA systems, including the most recent ones described, excluded one or both types of questions (How and Why) from their work because of the difficulty of handling these questions. In this study, we present a new approach and a new question answering system (QArabPro) for reading comprehension texts in Arabic. The overall accuracy of our system is 84%. Conclusion/Recommendations: These results are promising compared to existing systems.
Our system handles all types of questions including (How and Why).", "title": "" }, { "docid": "52a3688f1474b824a6696b03a8b6536c", "text": "Credit scoring models have been widely studied in the areas of statistics, machine learning, and artificial intelligence (AI). Many novel approaches such as artificial neural networks (ANNs), rough sets, or decision trees have been proposed to increase the accuracy of credit scoring models. Since an improvement in accuracy of a fraction of a percent might translate into significant savings, a more sophisticated model should be proposed for significantly improving the accuracy of the credit scoring models. In this paper, two-stage genetic programming (2SGP) is proposed to deal with the credit scoring problem by incorporating the advantages of the IF–THEN rules and the discriminant function. On the basis of the numerical results, we can conclude that 2SGP can provide better accuracy than other models.", "title": "" }, { "docid": "893f631e0a0ca9851097bc54a14b1ea8", "text": "Thirteen subjects detected noise burst targets presented in a white noise background at a mean rate of 10/min. Within each session, local error rate, defined as the fraction of targets detected in a 33 sec moving window, fluctuated widely. Mean coherence between slow mean variations in EEG power and in local error rate was computed for each EEG frequency and performance cycle length, and was shown by a Monte Carlo procedure to be significant for many EEG frequencies and performance cycle lengths, particularly in 4 well-defined EEG frequency bands, near 3, 10, 13, and 19 Hz, and at higher frequencies in two cycle length ranges, one longer than 4 min and the other near 90 sec/cycle. The coherence phase plane contained a prominent phase reversal near 6 Hz. Sorting individual spectra by local error rate confirmed the close relation between performance and EEG power and its relative within-subject stability. These results show that attempts to maintain alertness in an auditory detection task result in concurrent minute and multi-minute scale fluctuations in performance and the EEG power spectrum.", "title": "" }, { "docid": "5686b87484f2e78da2c33ed03b1a536c", "text": "Although an automated flexible production cell is an intriguing prospect for small to medium enterprises (SMEs) in current global market conditions, the complexity of programming remains one of the major hurdles preventing automation using industrial robots for SMEs. This paper provides a comprehensive review of recent research progress on programming methods for industrial robots, including online programming, offline programming (OLP), and programming using Augmented Reality (AR). With the development of more powerful 3D CAD/PLM software, computer vision, sensor technology, etc., new programming methods suitable for SMEs are expected to grow in years to come.", "title": "" }, { "docid": "271639e9eea6a47f3d80214517444072", "text": "The treatment of juvenile idiopathic arthritis (JIA) is evolving.
The growing number of effective drugs has led to successful treatment and prevention of long-term sequelae in most patients. Although patients with JIA frequently achieve lasting clinical remission, sustained remission off medication is still elusive for most. Treatment approaches vary substantially among paediatric rheumatologists owing to the inherent heterogeneity of JIA and, until recently, to the lack of accepted and well-evidenced guidelines. Furthermore, many pertinent questions related to patient management remain unanswered, in particular regarding treatment targets, and selection, intensity and sequence of initiation or withdrawal of therapy. Existing JIA guidelines and recommendations do not specify treat-to-target or tight control strategies, in contrast to adult rheumatology in which these approaches have been successful. The concepts of window of opportunity (early treatment to improve long-term outcomes) and immunological remission (abrogation of subclinical disease activity) are also fundamental when defining treatment methodologies. This Review explores the application of these concepts to JIA and their possible contribution to the development of future clinical guidelines or consensus treatment protocols. The article also discusses how diverse forms of standardized, guideline-led care and personalized treatment can be combined into a targeted, patient-centred approach to optimize management strategies for patients with JIA.", "title": "" }, { "docid": "9888ef3aefca1049307ecd49ea5a3a49", "text": "We live in a \"small world,\" where two arbitrary people are likely connected by a short chain of intermediate friends. With scant information about a target individual, people can successively forward a message along such a chain. Experimental studies have verified this property in real social networks, and theoretical models have been advanced to explain it. However, existing theoretical models have not been shown to capture behavior in real-world social networks. Here, we introduce a richer model relating geography and social-network friendship, in which the probability of befriending a particular person is inversely proportional to the number of closer people. In a large social network, we show that one-third of the friendships are independent of geography and the remainder exhibit the proposed relationship. Further, we prove analytically that short chains can be discovered in every network exhibiting the relationship.", "title": "" }, { "docid": "7af9eaf2c3bcac72049a9d4d1e6b3498", "text": "This paper proposes a fast algorithm for integrating connected-component labeling and Euler number computation. Based on graph theory, the Euler number of a binary image in the proposed algorithm is calculated by counting the occurrences of four patterns of the mask for processing foreground pixels in the first scan of a connected-component labeling process, where these four patterns can be found directly without any additional calculation; thus, connected-component labeling and Euler number computation can be integrated more efficiently. Moreover, when computing the Euler number, unlike other conventional algorithms, the proposed algorithm does not need to process background pixels. 
Experimental results demonstrate that the proposed algorithm is much more efficient than conventional algorithms either for calculating the Euler number alone or simultaneously calculating the Euler number and labeling connected components.", "title": "" }, { "docid": "dda739b8c4f645162313a2a691f48aa5", "text": "Classification of time series data is an important problem with applications in virtually every scientific endeavor. The large research community working on time series classification has typically used the UCR Archive to test their algorithms. In this work we argue that the availability of this resource has isolated much of the research community from the following reality, labeled time series data is often very difficult to obtain. The obvious solution to this problem is the application of semi-supervised learning; however, as we shall show, direct applications of off-the-shelf semi-supervised learning algorithms do not typically work well for time series. In this work we explain why semi-supervised learning algorithms typically fail for time series problems, and we introduce a simple but very effective fix. We demonstrate our ideas on diverse real word problems.", "title": "" }, { "docid": "65cc9459269fb23dd97ec25ffad4f041", "text": "Most of the existing literature on CRM value chain creation has focused on the effect of customer satisfaction and customer loyalty on customer profitability. In contrast, little has been studied about the CRM value creation chain at individual customer level and the role of self-construal (i.e., independent self-construal and interdependent self-construal) in such a chain. This research aims to construct the chain from customer value to organization value (i.e., customer satisfaction ? customer loyalty ? patronage behavior) and investigate the moderating effect of self-construal. To test the hypotheses suggested by our conceptual framework, we collected 846 data points from China in the context of mobile data services. The results show that customer’s self-construal can moderate the relationship chain from customer satisfaction to customer loyalty to relationship maintenance and development. This implies firms should tailor their customer strategies based on different self-construal features. 2014 Elsevier B.V. All rights reserved.", "title": "" } ]
scidocsrr
7ec7b9d74b2aa147339e866503787244
Wireless Sensor Networks for Early Detection of Forest Fires
[ { "docid": "8e0e77e78c33225922b5a45fee9b4242", "text": "In this paper, we address the issues of maintaining sensing coverage and connectivity by keeping a minimum number of sensor nodes in the active mode in wireless sensor networks. We investigate the relationship between coverage and connectivity by solving the following two sub-problems. First, we prove that if the radio range is at least twice the sensing range, complete coverage of a convex area implies connectivity among the working set of nodes. Second, we derive, under the ideal case in which node density is sufficiently high, a set of optimality conditions under which a subset of working sensor nodes can be chosen for complete coverage. Based on the optimality conditions, we then devise a decentralized density control algorithm, Optimal Geographical Density Control (OGDC), for density control in large scale sensor networks. The OGDC algorithm is fully localized and can maintain coverage as well as connectivity, regardless of the relationship between the radio range and the sensing range. Ns-2 simulations show that OGDC outperforms existing density control algorithms [25, 26, 29] with respect to the number of working nodes needed and network lifetime (with up to 50% improvement), and achieves almost the same coverage as the algorithm with the best result.", "title": "" } ]
[ { "docid": "dbe62d1ffe794e26ac7c8418f3908f70", "text": "Numerical differentiation in noisy environment is revised through an algebraic approach. For each given order, an explicit formula yielding a pointwise derivative estimation is derived, using elementary differential algebraic operations. These expressions are composed of iterated integrals of the noisy observation signal. We show in particular that the introduction of delayed estimates affords significant improvement. An implementation in terms of a classical finite impulse response (FIR) digital filter is given. Several simulation results are presented.", "title": "" }, { "docid": "9853f157525548a35bcbe118fdefaf33", "text": "We address the task of 6D pose estimation of known rigid objects from single input images in scenarios where the objects are partly occluded. Recent RGB-D-based methods are robust to moderate degrees of occlusion. For RGB inputs, no previous method works well for partly occluded objects. Our main contribution is to present the first deep learning-based system that estimates accurate poses for partly occluded objects from RGB-D and RGB input. We achieve this with a new instance-aware pipeline that decomposes 6D object pose estimation into a sequence of simpler steps, where each step removes specific aspects of the problem. The first step localizes all known objects in the image using an instance segmentation network, and hence eliminates surrounding clutter and occluders. The second step densely maps pixels to 3D object surface positions, so called object coordinates, using an encoder-decoder network, and hence eliminates object appearance. The third, and final, step predicts the 6D pose using geometric optimization. We demonstrate that we significantly outperform the state-of-the-art for pose estimation of partly occluded objects for both RGB and RGB-D input.", "title": "" }, { "docid": "c077231164a8a58f339f80b83e5b4025", "text": "It is widely believed that refactoring improves software quality and developer productivity. However, few empirical studies quantitatively assess refactoring benefits or investigate developers' perception towards these benefits. This paper presents a field study of refactoring benefits and challenges at Microsoft through three complementary study methods: a survey, semi-structured interviews with professional software engineers, and quantitative analysis of version history data. Our survey finds that the refactoring definition in practice is not confined to a rigorous definition of semantics-preserving code transformations and that developers perceive that refactoring involves substantial cost and risks. We also report on interviews with a designated refactoring team that has led a multi-year, centralized effort on refactoring Windows. The quantitative analysis of Windows 7 version history finds that the binary modules refactored by this team experienced significant reduction in the number of inter-module dependencies and post-release defects, indicating a visible benefit of refactoring.", "title": "" }, { "docid": "3bf954a23ea3e7d5326a7b89635f966a", "text": "The particle swarm optimizer (PSO) is a stochastic, population-based optimization technique that can be applied to a wide range of problems, including neural network training. This paper presents a variation on the traditional PSO algorithm, called the cooperative particle swarm optimizer, or CPSO, employing cooperative behavior to significantly improve the performance of the original algorithm. 
This is achieved by using multiple swarms to optimize different components of the solution vector cooperatively. Application of the new PSO algorithm on several benchmark optimization problems shows a marked improvement in performance over the traditional PSO.", "title": "" }, { "docid": "57bd8c0c2742027de4b599b129506154", "text": "Software instrumentation is a powerful and flexible technique for analyzing the dynamic behavior of programs. By inserting extra code in an application, it is possible to study the performance and correctness of programs and systems. Pin is a software system that performs run-time binary instrumentation of unmodified applications. Pin provides an API for writing custom instrumentation, enabling its use in a wide variety of performance analysis tasks such as workload characterization, program tracing, cache modeling, and simulation. Most of the prior work on instrumentation systems has focused on executing Unix applications, despite the ubiquity and importance of Windows applications. This paper identifies the Windows-specific obstacles for implementing a process-level instrumentation system, describes a comprehensive, robust solution, and discusses some of the alternatives. The challenges lie in managing the kernel/application transitions, injecting the runtime agent into the process, and isolating the instrumentation from the application. We examine Pin's overhead on typical Windows applications being instrumented with simple tools up to commercial program analysis products. The biggest factor affecting performance is the type of analysis performed by the tool. While the proprietary nature of Windows makes measurement and analysis difficult, Pin opens the door to understanding program behavior.", "title": "" }, { "docid": "8075cc962ce18cea46a8df4396512aa5", "text": "In the last few years, neural representation learning approaches have achieved very good performance on many natural language processing tasks, such as language modelling and machine translation. This suggests that neural models will also achieve good performance on information retrieval (IR) tasks, such as relevance ranking, addressing the query-document vocabulary mismatch problem by using a semantic rather than lexical matching. Although initial iterations of neural models do not outperform traditional lexical-matching baselines, the level of interest and effort in this area is increasing, potentially leading to a breakthrough. The popularity of the recent SIGIR 2016 workshop on Neural Information Retrieval provides evidence to the growing interest in neural models for IR. While recent tutorials have covered some aspects of deep learning for retrieval tasks, there is a significant scope for organizing a tutorial that focuses on the fundamentals of representation learning for text retrieval. The goal of this tutorial will be to introduce state-of-the-art neural embedding models and bridge the gap between these neural models with early representation learning approaches in IR (e.g., LSA). We will discuss some of the key challenges and insights in making these models work in practice, and demonstrate one of the toolsets available to researchers interested in this area.", "title": "" }, { "docid": "0110e37c5525520a4db4b1a775dacddd", "text": "This paper presents a study of Linux API usage across all applications and libraries in the Ubuntu Linux 15.04 distribution. We propose metrics for reasoning about the importance of various system APIs, including system calls, pseudo-files, and libc functions. 
Our metrics are designed for evaluating the relative maturity of a prototype system or compatibility layer, and this paper focuses on compatibility with Linux applications. This study uses a combination of static analysis to understand API usage and survey data to weight the relative importance of applications to end users.\n This paper yields several insights for developers and researchers, which are useful for assessing the complexity and security of Linux APIs. For example, every Ubuntu installation requires 224 system calls, 208 ioctl, fcntl, and prctl codes and hundreds of pseudo files. For each API type, a significant number of APIs are rarely used, if ever. Moreover, several security-relevant API changes, such as replacing access with faccessat, have met with slow adoption. Finally, hundreds of libc interfaces are effectively unused, yielding opportunities to improve security and efficiency by restructuring libc.", "title": "" }, { "docid": "ffd84e3418a6d1d793f36bfc2efed6be", "text": "Anterior cingulate cortex (ACC) is a part of the brain's limbic system. Classically, this region has been related to affect, on the basis of lesion studies in humans and in animals. In the late 1980s, neuroimaging research indicated that ACC was active in many studies of cognition. The findings from EEG studies of a focal area of negativity in scalp electrodes following an error response led to the idea that ACC might be the brain's error detection and correction device. In this article, these various findings are reviewed in relation to the idea that ACC is a part of a circuit involved in a form of attention that serves to regulate both cognitive and emotional processing. Neuroimaging studies showing that separate areas of ACC are involved in cognition and emotion are discussed and related to results showing that the error negativity is influenced by affect and motivation. In addition, the development of the emotional and cognitive roles of ACC are discussed, and how the success of this regulation in controlling responses might be correlated with cingulate size. Finally, some theories are considered about how the different subdivisions of ACC might interact with other cortical structures as a part of the circuits involved in the regulation of mental and emotional activity.", "title": "" }, { "docid": "c10829be320a9be6ecbc9ca751e8b56e", "text": "This article analyzes two decades of research regarding the mass media's role in shaping, perpetuating, and reducing the stigma of mental illness. It concentrates on three broad areas common in media inquiry: production, representation, and audiences. The analysis reveals that descriptions of mental illness and the mentally ill are distorted due to inaccuracies, exaggerations, or misinformation. The ill are presented not only as peculiar and different, but also as dangerous. Thus, the media perpetuate misconceptions and stigma. Especially prominent is the absence of agreed-upon definitions of \"mental illness,\" as well as the lack of research on the inter-relationships in audience studies between portrayals in the media and social perceptions. The analysis concludes with suggestions for further research on mass media's inter-relationships with mental illness.", "title": "" }, { "docid": "00c19e68020aff7fd86aa7e514cc0668", "text": "Network forensic techniques help in tracking different types of cyber attack by monitoring and inspecting network traffic. 
However, with the high speed and large sizes of current networks, and the sophisticated philosophy of attackers, in particular mimicking normal behaviour and/or erasing traces to avoid detection, investigating such crimes demands intelligent network forensic techniques. This paper suggests a real-time collaborative network Forensic scheme (RCNF) that can monitor and investigate cyber intrusions. The scheme includes three components of capturing and storing network data, selecting important network features using chi-square method and investigating abnormal events using a new technique called correntropy-variation. We provide a case study using the UNSW-NB15 dataset for evaluating the scheme, showing its high performance in terms of accuracy and false alarm rate compared with three recent state-of-the-art mechanisms.", "title": "" }, { "docid": "1b30c14536db1161b77258b1ce213fbb", "text": "Click-through rate (CTR) prediction and relevance ranking are two fundamental problems in web advertising. In this study, we address the problem of modeling the relationship between CTR and relevance for sponsored search. We used normalized relevance scores comparable across all queries to represent relevance when modeling with CTR, instead of directly using human judgment labels or relevance scores valid only within same query. We classified clicks by identifying their relevance quality using dwell time and session information, and compared all clicks versus selective clicks effects when modeling relevance.\n Our results showed that the cleaned click signal outperforms raw click signal and others we explored, in terms of relevance score fitting. The cleaned clicks include clicks with dwell time greater than 5 seconds and last clicks in session. Besides traditional thoughts that there is no linear relation between click and relevance, we showed that the cleaned click based CTR can be fitted well with the normalized relevance scores using a quadratic regression model. This relevance-click model could help to train ranking models using processed click feedback to complement expensive human editorial relevance labels, or better leverage relevance signals in CTR prediction.", "title": "" }, { "docid": "d1a94ed95234d9ea660b6e4779a6a694", "text": "This study aims to analyse the scientific literature on sustainability and innovation in the automotive sector in the last 13 years. The research is classified as descriptive and exploratory. The process presented 31 articles in line with the research topic in the Scopus database. The bibliometric analysis identified the most relevant articles, authors, keywords, countries, research centers and journals for the subject from 2004 to 2016 in the Industrial Engineering domain. We concluded, through the systemic analysis, that the automotive sector is well structured on the issue of sustainability and process innovation. Innovations in the sector are of the incremental process type, due to the lower risk, lower costs and less complexity. However, the literature also points out that radical innovations are needed in order to fit the prevailing environmental standards. The selected studies show that environmental practices employed in the automotive sector are: the minimization of greenhouse gas emissions, life-cycle assessment, cleaner production, reverse logistics and eco-innovation. 
Thus, it displays the need for empirical studies in automotive companies on the environmental practices employed and how these practices impact innovation.", "title": "" }, { "docid": "5bf0406864b500084480081d8cddcb82", "text": "Polymer scaffolds have many different functions in the field of tissue engineering. They are applied as space filling agents, as delivery vehicles for bioactive molecules, and as three-dimensional structures that organize cells and present stimuli to direct the formation of a desired tissue. Much of the success of scaffolds in these roles hinges on finding an appropriate material to address the critical physical, mass transport, and biological design variables inherent to each application. Hydrogels are an appealing scaffold material because they are structurally similar to the extracellular matrix of many tissues, can often be processed under relatively mild conditions, and may be delivered in a minimally invasive manner. Consequently, hydrogels have been utilized as scaffold materials for drug and growth factor delivery, engineering tissue replacements, and a variety of other applications.", "title": "" }, { "docid": "4a1db0cab3812817c3ebb149bd8b3021", "text": "Structural information in web text provides natural annotations for NLP problems such as word segmentation and parsing. In this paper we propose a discriminative learning algorithm to take advantage of the linguistic knowledge in large amounts of natural annotations on the Internet. It utilizes the Internet as an external corpus with massive (although slight and sparse) natural annotations, and enables a classifier to evolve on the large-scaled and real-time updated web text. With Chinese word segmentation as a case study, experiments show that the segmenter enhanced with the Chinese wikipedia achieves significant improvement on a series of testing sets from different domains, even with a single classifier and local features.", "title": "" }, { "docid": "7788cf06b7c9f09013bd15607e11cd79", "text": "Separate Cox analyses of all cause-specific hazards are the standard technique of choice to study the effect of a covariate in competing risks, but a synopsis of these results in terms of cumulative event probabilities is challenging. This difficulty has led to the development of the proportional subdistribution hazards model. If the covariate is known at baseline, the model allows for a summarizing assessment in terms of the cumulative incidence function. black Mathematically, the model also allows for including random time-dependent covariates, but practical implementation has remained unclear due to a certain risk set peculiarity. We use the intimate relationship of discrete covariates and multistate models to naturally treat time-dependent covariates within the subdistribution hazards framework. The methodology then straightforwardly translates to real-valued time-dependent covariates. As with classical survival analysis, including time-dependent covariates does not result in a model for probability functions anymore. Nevertheless, the proposed methodology provides a useful synthesis of separate cause-specific hazards analyses. We illustrate this with hospital infection data, where time-dependent covariates and competing risks are essential to the subject research question.", "title": "" }, { "docid": "f1a5a1683b6796aebb98afce2068ffff", "text": "Printed text recognition is an important problem for industrial OCR systems. Printed text is constructed in a standard procedural fashion in most settings. 
We develop a mathematical model for this process that can be applied to the backward inference problem of text recognition from an image. Through ablation experiments we show that this model is realistic and that a multi-task objective setting can help to stabilize estimation of its free parameters, enabling use of conventional deep learning methods. Furthermore, by directly modeling the geometric perturbations of text synthesis we show that our model can help recover missing characters from incomplete text regions, the bane of multicomponent OCR systems, enabling recognition even when the detection returns incomplete in-", "title": "" }, { "docid": "9b0114697dc6c260610d0badc1d7a2a4", "text": "This review captures the synthesis, assembly, properties, and applications of copper chalcogenide NCs, which have achieved significant research interest in the last decade due to their compositional and structural versatility. The outstanding functional properties of these materials stems from the relationship between their band structure and defect concentration, including charge carrier concentration and electronic conductivity character, which consequently affects their optoelectronic, optical, and plasmonic properties. This, combined with several metastable crystal phases and stoichiometries and the low energy of formation of defects, makes the reproducible synthesis of these materials, with tunable parameters, remarkable. Further to this, the review captures the progress of the hierarchical assembly of these NCs, which bridges the link between their discrete and collective properties. Their ubiquitous application set has cross-cut energy conversion (photovoltaics, photocatalysis, thermoelectrics), energy storage (lithium-ion batteries, hydrogen generation), emissive materials (plasmonics, LEDs, biolabelling), sensors (electrochemical, biochemical), biomedical devices (magnetic resonance imaging, X-ray computer tomography), and medical therapies (photochemothermal therapies, immunotherapy, radiotherapy, and drug delivery). The confluence of advances in the synthesis, assembly, and application of these NCs in the past decade has the potential to significantly impact society, both economically and environmentally.", "title": "" }, { "docid": "7025d357898c5997e225299f398c42f0", "text": "UNLABELLED\nAnnotating genetic variants, especially non-coding variants, for the purpose of identifying pathogenic variants remains a challenge. Combined annotation-dependent depletion (CADD) is an algorithm designed to annotate both coding and non-coding variants, and has been shown to outperform other annotation algorithms. CADD trains a linear kernel support vector machine (SVM) to differentiate evolutionarily derived, likely benign, alleles from simulated, likely deleterious, variants. However, SVMs cannot capture non-linear relationships among the features, which can limit performance. To address this issue, we have developed DANN. DANN uses the same feature set and training data as CADD to train a deep neural network (DNN). DNNs can capture non-linear relationships among features and are better suited than SVMs for problems with a large number of samples and features. We exploit Compute Unified Device Architecture-compatible graphics processing units and deep learning techniques such as dropout and momentum training to accelerate the DNN training. 
DANN achieves about a 19% relative reduction in the error rate and about a 14% relative increase in the area under the curve (AUC) metric over CADD's SVM methodology.\n\n\nAVAILABILITY AND IMPLEMENTATION\nAll data and source code are available at https://cbcl.ics.uci.edu/public_data/DANN/.", "title": "" }, { "docid": "7b6c039783091260cee03704ce9748d8", "text": "We describe Algorithm 2 in detail. Algorithm 2 takes as input the sample set S, the query sequence F , the sensitivity of query ∆, the threshold τ , and the stop parameter s. Algorithm 2 outputs the result of each comparison with the threshold. In Algorithm 2, each noisy query output is compred with a noisy threshold at line 4 and outputs the result of comparison. Let ⊤ mean that fk(S) > τ . Algorithm 2 is terminated if outputs ⊤ s times.", "title": "" }, { "docid": "7131f6062fcb4fd1d532516499105b02", "text": "Markov influence diagrams (MIDs) are a new type of probabilistic graphical model that extends influence diagrams in the same way that Markov decision trees extend decision trees. They have been designed to build state-transition models, mainly in medicine, and perform cost-effectiveness analyses. Using a causal graph that may contain several variables per cycle, MIDs can model various patient characteristics without multiplying the number of states; in particular, they can represent the history of the patient without using tunnel states. OpenMarkov, an open-source tool, allows the decision analyst to build and evaluate MIDs-including cost-effectiveness analysis and several types of deterministic and probabilistic sensitivity analysis-with a graphical user interface, without writing any code. This way, MIDs can be used to easily build and evaluate complex models whose implementation as spreadsheets or decision trees would be cumbersome or unfeasible in practice. Furthermore, many problems that previously required discrete event simulation can be solved with MIDs; i.e., within the paradigm of state-transition models, in which many health economists feel more comfortable.", "title": "" } ]
scidocsrr
535ca445e0bf8921707453ff120bd059
Transforming Experience: The Potential of Augmented Reality and Virtual Reality for Enhancing Personal and Clinical Change
[ { "docid": "fb5a38c1dbbc7416f9b15ee19be9cc06", "text": "This study uses a body motion interactive game developed in Scratch 2.0 to enhance the body strength of children with disabilities. Scratch 2.0, using an augmented-reality function on a program platform, creates real world and virtual reality displays at the same time. This study uses a webcam integration that tracks movements and allows participants to interact physically with the project, to enhance the motivation of children with developmental disabilities to perform physical activities. This study follows a single-case research using an ABAB structure, in which A is the baseline and B is the intervention. The experimental period was 2 months. The experimental results demonstrated that the scores for 3 children with developmental disabilities increased considerably during the intervention phrases. The developmental applications of these results are also discussed.", "title": "" }, { "docid": "3da0597ce369afdec1716b1fedbce7d1", "text": "We describe a theoretical model of the neurocognitive mechanisms underlying conscious presence and its disturbances. The model is based on interoceptive prediction error and is informed by predictive models of agency, general models of hierarchical predictive coding and dopaminergic signaling in cortex, the role of the anterior insular cortex (AIC) in interoception and emotion, and cognitive neuroscience evidence from studies of virtual reality and of psychiatric disorders of presence, specifically depersonalization/derealization disorder. The model associates presence with successful suppression by top-down predictions of informative interoceptive signals evoked by autonomic control signals and, indirectly, by visceral responses to afferent sensory signals. The model connects presence to agency by allowing that predicted interoceptive signals will depend on whether afferent sensory signals are determined, by a parallel predictive-coding mechanism, to be self-generated or externally caused. Anatomically, we identify the AIC as the likely locus of key neural comparator mechanisms. Our model integrates a broad range of previously disparate evidence, makes predictions for conjoint manipulations of agency and presence, offers a new view of emotion as interoceptive inference, and represents a step toward a mechanistic account of a fundamental phenomenological property of consciousness.", "title": "" } ]
[ { "docid": "ae393c8f1afc39d6f4ad7ce4b5640034", "text": "Generative adversarial networks have gained a lot of attention in general computer vision community due to their capability of data generation without explicitly modelling the probability density function and robustness to overfitting. The adversarial loss brought by the discriminator provides a clever way of incorporating unlabeled samples into the training and imposing higher order consistency that is proven to be useful in many cases, such as in domain adaptation, data augmentation, and image-to-image translation. These nice properties have attracted researcher in the medical imaging community and we have seen quick adoptions in many traditional tasks and some novel applications. This trend will continue to grow based on our observation therefore we conducted a review of the recent advances in medical imaging using the adversarial training scheme in the hope of benefiting researchers that are interested in this technique.", "title": "" }, { "docid": "f38ad855c66a43529d268b81c9ea4c69", "text": "In the recent years, countless security concerns related to automotive systems were revealed either by academic research or real life attacks. While current attention was largely focused on passenger cars, due to their ubiquity, the reported bus-related vulnerabilities are applicable to all industry sectors where the same bus technology is deployed, i.e., the CAN bus. The SAE J1939 specification extends and standardizes the use of CAN to commercial vehicles where security plays an even higher role. In contrast to empirical results that attest such vulnerabilities in commercial vehicles by practical experiments, here, we determine that existing shortcomings in the SAE J1939 specifications open road to several new attacks, e.g., impersonation, denial of service (DoS), distributed DoS, etc. Taking the advantage of an industry-standard CANoe based simulation, we demonstrate attacks with potential safety critical effects that are mounted while still conforming to the SAE J1939 standard specification. We discuss countermeasures and security enhancements by including message authentication mechanisms. Finally, we evaluate and discuss the impact of employing these mechanisms on the overall network communication.", "title": "" }, { "docid": "4c8eaddb55bda61bd92b1f474e0be8b6", "text": "This article discusses varied ideas on games, learning, and digital literacy for 21st-century education as theorized and practiced by the author and James Paul Gee, and their colleagues. With attention to games as means for learning, the author links Gee’s theories to the learning sciences tradition (particularly those of the MIT Constructionists) and extending game media literacy to encompass “writing” (producing) as well as “reading” (playing) games. If game-playing is like reading and game-making is like writing, then we must introduce learners to both from a young age. The imagining and writing of web-games fosters the development of many essential skill-sets needed for creativity and innovation, providing an appealing new way for a global computing education, STEM education, for closing achievement gaps. Gee and the author reveal a shared aim to encourage researchers and theorists, as well as policymakers, to investigate gaming with regard to epistemology and cognition. DOI: 10.4018/jgcms.2010010101 2 International Journal of Gaming and Computer-Mediated Simulations, 2(1), 1-16, January-March 2010 Copyright © 2010, IGI Global. 
ing tools; 2) videogames that teach educational content; 3) games and sims that involve modding and design as a learning environment; 4) game-making systems like GameStar Mechanics, Game Maker, Scratch; and 5) widely-used professional software programming tools like Java or Flash ActionScript. This AERA session was intended to be a field-building session—a step toward a much larger conversation about the meaning and value of various kinds of game practices and literacies. We sought to shed light on why today’s students should become game-literate, and to demonstrate a variety of possible routes that lead to game literacy. We also discussed the role of utilizing games and creating game-media in the learning and cognitive development of today’s generation of students and educators. Multiple Traditions for Initiating and Interpreting Gaming Practices for Learning: Game literacy is a multidimensional combination of varied practices (e.g., reading, writing, and calculating; textual, visual, and spatial cognition; interactive design, programming, and engineering; multitasking and system understanding; meaning making, storytelling, role playing, perspective taking, and exercising judgment; etc.). Different gaming practices form a whole that has roots in both traditional literacy theories and Constructionist digital literacy. Though seemingly disparate, both traditions attempt to develop methods for describing how players/learners learn and how they construct knowledge in gaming contexts. Both traditions focus on the processes of learning rather than the product (winning the game or the actual game created by a learner/designer). Both traditions struggle with the difficulties of capturing the process of learning (an intersection of individual, context and activity over time within a situated perspective) as a unit of analysis. Despite the challenges that persist in such a dynamic and distributed object of study, educators and researchers continue to explore and refine innovative methodological approaches that capture and track learning as it flourishes within the rich environments of various gaming practices so as to inform instructional practice and design (also known as design-based research, e.g., Brown, 1996; Dede, 2005). Research into Playing Videogames: The fascination with and research on the cognitive and learning processes that occurs during videogame play is becoming increasingly prominent—so much so, that a national conference dedicated entirely to this topic was launched by Dr. James Paul Gee in 2004 as a venue for scholarly discourse (Games, Learning and Society, GLS, www.glsconference.org). In this growing field of gaming research, scholars are addressing the nature of cognitive and emotional development, literacy practices, and thinking and learning during gameplay in a range of gaming environments and genres (Barab, 2009; Gee, 2003, 2007; Shaffer, 2006; Squire, 2002, 2006, 2009; Steinkuehler, 2007, 2009a, 2009b). This line of research focuses on assessing different kinds of learning while playing games released commercially for entertainment (e.g., World of Warcraft, Grand Theft Auto, Zelda, Quake, Dance Dance Revolution, Guitar Hero, Rock Band), or edutainment games (e.g., Civilization, Quest Atlantis) in various contexts (mostly out of school, in homes, clubs and afterschool programs).
These scholars claim that videogame players are learning—they do not just click the controller or mouse mindlessly or move around randomly. Indeed, players are found to engage in unlocking rich storylines, employing complex problem-solving strategies and mastering the underlying systems of any given game or level. Researchers offer solid evidence that children learn important content, perspectives, and vital 21st-century skills from playing digital games (e.g., Salen, 2007; Lenhart, Kahne, Mid", "title": "" }, { "docid": "ddf197aa8b545181ea409d0ee28b52a6", "text": "We address the problem of instance-level semantic segmentation, which aims at jointly detecting, segmenting and classifying every individual object in an image. In this context, existing methods typically propose candidate objects, usually as bounding boxes, and directly predict a binary mask within each such proposal. As a consequence, they cannot recover from errors in the object candidate generation process, such as too small or shifted boxes. In this paper, we introduce a novel object segment representation based on the distance transform of the object masks. We then design an object mask network (OMN) with a new residual-deconvolution architecture that infers such a representation and decodes it into the final binary object mask. This allows us to predict masks that go beyond the scope of the bounding boxes and are thus robust to inaccurate object candidates. We integrate our OMN into a Multitask Network Cascade framework, and learn the resulting boundary-aware instance segmentation (BAIS) network in an end-to-end manner. Our experiments on the PASCAL VOC 2012 and the Cityscapes datasets demonstrate the benefits of our approach, which outperforms the state-of-the-art in both object proposal generation and instance segmentation.", "title": "" }, { "docid": "7fc92ce3f51a0ad3e300474e23cf7401", "text": "Dependency parsers are critical components within many NLP systems. However, currently available dependency parsers each exhibit at least one of several weaknesses, including high running time, limited accuracy, vague dependency labels, and lack of nonprojectivity support. Furthermore, no commonly used parser provides additional shallow semantic interpretation, such as preposition sense disambiguation and noun compound interpretation. In this paper, we present a new dependency-tree conversion of the Penn Treebank along with its associated fine-grain dependency labels and a fast, accurate parser trained on it. We explain how a non-projective extension to shift-reduce parsing can be incorporated into non-directional easy-first parsing.
The parser performs well when evaluated on the standard test section of the Penn Treebank, outperforming several popular open source dependency parsers; it is, to the best of our knowledge, the first dependency parser capable of parsing more than 75 sentences per second at over 93% accuracy.", "title": "" }, { "docid": "e0f7f087a4d8a33c1260d4ed0558edc3", "text": "In this review paper, it is intended to summarize and compare the methods of automatic detection of microcalcifications in digitized mammograms used in various stages of the Computer Aided Detection systems (CAD). In particular, the pre processing and enhancement, bilateral subtraction techniques, segmentation algorithms, feature extraction, selection and classification, classifiers, Receiver Operating Characteristic (ROC); Free-response Receiver Operating Characteristic (FROC) analysis and their performances are studied and compared.", "title": "" }, { "docid": "36a9f1c016d0e2540460e28c4c846e9a", "text": "Nowadays PDF documents have become a dominating knowledge repository for both the academia and industry largely because they are very convenient to print and exchange. However, the methods of automated structure information extraction are yet to be fully explored and the lack of effective methods hinders the information reuse of the PDF documents. To enhance the usability for PDF-formatted electronic books, we propose a novel computational framework to analyze the underlying physical structure and logical structure. The analysis is conducted at both page level and document level, including global typographies, reading order, logical elements, chapter/section hierarchy and metadata. Moreover, two characteristics of PDF-based books, i.e., style consistency in the whole book document and natural rendering order of PDF files, are fully exploited in this paper to improve the conventional image-based structure extraction methods. This paper employs the bipartite graph as a common structure for modeling various tasks, including reading order recovery, figure and caption association, and metadata extraction. Based on the graph representation, the optimal matching (OM) method is utilized to find the global optima in those tasks. Extensive benchmarking using real-world data validates the high efficiency and discrimination ability of the proposed method.", "title": "" }, { "docid": "179c5bc5044d85c2597d41b1bd5658b3", "text": "Embedding models typically associate each word with a single real-valued vector, representing its different properties. Evaluation methods, therefore, need to analyze the accuracy and completeness of these properties in embeddings. This requires fine-grained analysis of embedding subspaces. Multi-label classification is an appropriate way to do so. We propose a new evaluation method for word embeddings based on multi-label classification given a word embedding. The task we use is finegrained name typing: given a large corpus, find all types that a name can refer to based on the name embedding. Given the scale of entities in knowledge bases, we can build datasets for this task that are complementary to the current embedding evaluation datasets in: they are very large, contain fine-grained classes, and allow the direct evaluation of embeddings without confounding factors like sentence context.", "title": "" }, { "docid": "2a8f2e8e4897f03c89d9e8a6bf8270f3", "text": "BACKGROUND\nThe aging of the population is an inexorable change that challenges governments and societies in every developed country. 
Based on clinical and empirical data, social isolation is found to be prevalent among elderly people, and it has negative consequences on the elderly's psychological and physical health. Targeting social isolation has become a focus area for policy and practice. Evidence indicates that contemporary information and communication technologies (ICT) have the potential to prevent or reduce the social isolation of elderly people via various mechanisms.\n\n\nOBJECTIVE\nThis systematic review explored the effects of ICT interventions on reducing social isolation of the elderly.\n\n\nMETHODS\nRelevant electronic databases (PsycINFO, PubMed, MEDLINE, EBSCO, SSCI, Communication Studies: a SAGE Full-Text Collection, Communication & Mass Media Complete, Association for Computing Machinery (ACM) Digital Library, and IEEE Xplore) were systematically searched using a unified strategy to identify quantitative and qualitative studies on the effectiveness of ICT-mediated social isolation interventions for elderly people published in English between 2002 and 2015. Narrative synthesis was performed to interpret the results of the identified studies, and their quality was also appraised.\n\n\nRESULTS\nTwenty-five publications were included in the review. Four of them were evaluated as rigorous research. Most studies measured the effectiveness of ICT by measuring specific dimensions rather than social isolation in general. ICT use was consistently found to affect social support, social connectedness, and social isolation in general positively. The results for loneliness were inconclusive. Even though most were positive, some studies found a nonsignificant or negative impact. More importantly, the positive effect of ICT use on social connectedness and social support seemed to be short-term and did not last for more than six months after the intervention. The results for self-esteem and control over one's life were consistent but generally nonsignificant. ICT was found to alleviate the elderly's social isolation through four mechanisms: connecting to the outside world, gaining social support, engaging in activities of interests, and boosting self-confidence.\n\n\nCONCLUSIONS\nMore well-designed studies that contain a minimum risk of research bias are needed to draw conclusions on the effectiveness of ICT interventions for elderly people in reducing their perceived social isolation as a multidimensional concept. The results of this review suggest that ICT could be an effective tool to tackle social isolation among the elderly. However, it is not suitable for every senior alike. Future research should identify who among elderly people can most benefit from ICT use in reducing social isolation. Research on other types of ICT (eg, mobile phone-based instant messaging apps) should be conducted to promote understanding and practice of ICT-based social-isolation interventions for elderly people.", "title": "" }, { "docid": "84cf1ce60ad3eda955abc5ca0ee4fe5b", "text": "Despite its great promise, neuroimaging has yet to substantially impact clinical practice and public health. However, a developing synergy between emerging analysis techniques and data-sharing initiatives has the potential to transform the role of neuroimaging in clinical applications. We review the state of translational neuroimaging and outline an approach to developing brain signatures that can be shared, tested in multiple contexts and applied in clinical settings. 
The approach rests on three pillars: (i) the use of multivariate pattern-recognition techniques to develop brain signatures for clinical outcomes and relevant mental processes; (ii) assessment and optimization of their diagnostic value; and (iii) a program of broad exploration followed by increasingly rigorous assessment of generalizability across samples, research contexts and populations. Increasingly sophisticated models based on these principles will help to overcome some of the obstacles on the road from basic neuroscience to better health and will ultimately serve both basic and applied goals.", "title": "" }, { "docid": "8d5de5dd51d5000184702d91afec5c18", "text": "Deep networks trained on large-scale data can learn transferable features to promote learning multiple tasks. As deep features eventually transition from general to specific along deep networks, a fundamental problem is how to exploit the relationship across different tasks and improve the feature transferability in the task-specific layers. In this paper, we propose Deep Relationship Networks (DRN) that discover the task relationship based on novel tensor normal priors over the parameter tensors of multiple task-specific layers in deep convolutional networks. By jointly learning transferable features and task relationships, DRN is able to alleviate the dilemma of negative-transfer in the feature layers and under-transfer in the classifier layer. Extensive experiments show that DRN yields state-of-the-art results on standard multi-task learning benchmarks.", "title": "" }, { "docid": "f8f00576f55e24a06b6c930c0cc39a85", "text": "An integrated navigation information system must know continuously the current position with a good precision. The required performance of the positioning module is achieved by using a cluster of heterogeneous sensors whose measurements are fused. The most popular data fusion method for positioning problems is the extended Kalman filter. The extended Kalman filter is a variation of the Kalman filter used to solve non-linear problems. Recently, an improvement to the extended Kalman filter has been proposed, the unscented Kalman filter. This paper describes an empirical analysis evaluating the performances of the unscented Kalman filter and comparing them with the extended Kalman filter's performances.", "title": "" }, { "docid": "0734e55ef60e9e1ef490c03a23f017e8", "text": "High-voltage (HV) pulses are used in pulsed electric field (PEF) applications to provide an effective electroporation process, a process in which harmful microorganisms are disinfected when subjected to a PEF. Depending on the PEF application, different HV pulse specifications are required such as the pulse-waveform shape, the voltage magnitude, the pulse duration, and the pulse repetition rate. In this paper, a generic pulse-waveform generator (GPG) is proposed, and the GPG topology is based on half-bridge modular multilevel converter (HB-MMC) cells. The GPG topology is formed of four identical arms of series-connected HB-MMC cells forming an H-bridge. Unlike the conventional HB-MMC-based converters in HVdc transmission, the GPG load power flow is not continuous which leads to smaller size cell capacitors utilization; hence, smaller footprint of the GPG is achieved. The GPG topology flexibility allows the controller software to generate a basic multilevel waveform which can be manipulated to generate the commonly used PEF pulse waveforms. Therefore, the proposed topology offers modularity, redundancy, and scalability. 
The viability of the proposed GPG converter is validated by MATLAB/Simulink simulation and experimentation.", "title": "" }, { "docid": "f87e8f9d733ed60cedfda1cbfe176cbf", "text": "Image set classification finds its applications in a number of real-life scenarios such as classification from surveillance videos, multi-view camera networks and personal albums. Compared with single image based classification, it offers more promises and has therefore attracted significant research attention in recent years. Unlike many existing methods which assume images of a set to lie on a certain geometric surface, this paper introduces a deep learning framework which makes no such prior assumptions and can automatically discover the underlying geometric structure. Specifically, a Template Deep Reconstruction Model (TDRM) is defined whose parameters are initialized by performing unsupervised pre-training in a layer-wise fashion using Gaussian Restricted Boltzmann Machines (GRBMs). The initialized TDRM is then separately trained for images of each class and class-specific DRMs are learnt. Based on the minimum reconstruction errors from the learnt class-specific models, three different voting strategies are devised for classification. Extensive experiments are performed to demonstrate the efficacy of the proposed framework for the tasks of face and object recognition from image sets. Experimental results show that the proposed method consistently outperforms the existing state of the art methods.", "title": "" }, { "docid": "78cd033a67f703b9e50c75e418a8c8e7", "text": "Volatility in stock markets has been extensively studied in the applied finance literature. In this paper, Artificial Neural Network models based on various back propagation algorithms have been constructed to predict volatility in the Indian stock market through volatility of NIFTY returns and volatility of gold returns. This model considers India VIX, CBOE VIX, volatility of crude oil returns (CRUDESDR), volatility of DJIA returns (DJIASDR), volatility of DAX returns (DAXSDR), volatility of Hang Seng returns (HANGSDR) and volatility of Nikkei returns (NIKKEISDR) as predictor variables. Three sets of experiments have been performed over three time periods to judge the effectiveness of the approach.", "title": "" }, { "docid": "d94c7ff18e4ff21d15af109002ab2932", "text": "As the proliferation of technology dramatically infiltrates all aspects of modern life, in many ways the world is becoming so dynamic and complex that technological capabilities are overwhelming human capabilities to optimally interact with and leverage those technologies. Fortunately, these technological advancements have also driven an explosion of neuroscience research over the past several decades, presenting engineers with a remarkable opportunity to design and develop flexible and adaptive brain-based neurotechnologies that integrate with and capitalize on human capabilities and limitations to improve human-system interactions. Major forerunners of this conception are brain-computer interfaces (BCIs), which to this point have been largely focused on improving the quality of life for particular clinical populations and include, for example, applications for advanced communications with paralyzed or “locked in” patients as well as the direct control of prostheses and wheelchairs. Near-term applications are envisioned that are primarily task oriented and are targeted to avoid the most difficult obstacles to development. 
In the farther term, a holistic approach to BCIs will enable a broad range of task-oriented and opportunistic applications by leveraging pervasive technologies and advanced analytical approaches to sense and merge critical brain, behavioral, task, and environmental information. Communications and other applications that are envisioned to be broadly impacted by BCIs are highlighted; however, these represent just a small sample of the potential of these technologies.", "title": "" }, { "docid": "85bda0726bf53015e535738711785f20", "text": "BACKGROUND AND AIM\nThere has recently been a growing interest towards patients' affective and emotional needs, especially in relational therapies, which are considered vital as to increase the understanding of those needs and patients' well-being. In particular, we paid attention to those patients who are forced to spend the last phase of their existence in residential facilities, namely elderly people in nursing homes, who often feel marginalized, useless, depressed, unstimulated or unable to communicate. The aim of this study is to verify the effectiveness of pet therapy in improving well-being in the elderly living in a nursing home.\n\n\nMETHODS\nThis is a longitudinal study with before and after intervention variables measurement in two groups of patients of a nursing home for elderly people. One group followed an AAI intervention (experimental group) the other one did not (control group). As to perform an assessment of well-being we measured the following dimensions in patients: anxiety (HAM-A), depression (GDS), apathy (AES), loneliness (UCLA), and quality of life (QUALID). Both groups filled the questionnaires as to measure the target variables (time 0). Once finished the scheduled meetings (time 1), all the participants, including the control group, filled the same questionnaires.\n\n\nRESULTS\nIn accordance with scientific evidence the results confirmed a significant reduction of the measured variables. Especially for the quality of life, which showed a greater reduction than the other.\n\n\nCONCLUSIONS\nThe implementation and success of the Pet Therapy could have a great emotional and social impact, bringing relief to patients and their family members, but also to health professionals.", "title": "" }, { "docid": "c5efce1facffb845b175018c29fef49a", "text": "0957-4174/$ see front matter 2013 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2013.02.007 ⇑ Corresponding author. Tel.: +3", "title": "" }, { "docid": "f2b3643ca7a9a1759f038f15847d7617", "text": "Despite significant advances in image segmentation techniques, evaluation of these techniques thus far has been largely subjective. Typically, the effectiveness of a new algorithm is demonstrated only by the presentation of a few segmented images and is otherwise left to subjective evaluation by the reader. Little effort has been spent on the design of perceptually correct measures to compare an automatic segmentation of an image to a set of hand-segmented examples of the same image. This paper demonstrates how a modification of the Rand index, the Normalized Probabilistic Rand (NPR) index, meets the requirements of largescale performance evaluation of image segmentation. We show that the measure has a clear probabilistic interpretation as the maximum likelihood estimator of an underlying Gibbs model, can be correctly normalized to account for the inherent similarity in a set of ground truth images, and can be computed efficiently for large datasets. 
Results are presented on images from the publicly available Berkeley Segmentation dataset.", "title": "" }, { "docid": "937bb3c066500ddffe8d3d78b3580c26", "text": "Multimodal semantic representation is an evolving area of research in natural language processing as well as computer vision. Combining or integrating perceptual information, such as visual features, with linguistic features is recently being actively studied. This paper presents a novel bimodal autoencoder model for multimodal representation learning: the autoencoder learns in order to enhance linguistic feature vectors by incorporating the corresponding visual features. During the runtime, owing to the trained neural network, visually enhanced multimodal representations can be achieved even for words for which direct visual-linguistic correspondences are not learned. The empirical results obtained with standard semantic relatedness tasks demonstrate that our approach is generally promising. We further investigate the potential efficacy of the enhanced word embeddings in discriminating antonyms and synonyms from vaguely related words.", "title": "" } ]
scidocsrr
0dc3aa5f48a2e06564af96b84f9de9d9
Motion planning for autonomous driving with a conformal spatiotemporal lattice
[ { "docid": "0ea6f71a52592e2fff6e428610554299", "text": "In this paper we describe a novel and simple to implement yet effective lattice design algorithm, which simultaneously produces input and state-space sampled lattice graphs. The presented method is an extension to the ideas suggested by Bicchi et al. on input lattices and is applicable to systems which can be brought into (2,n) chained form, such as kinematic models of unicycles, bicycles, differential-drive robots and car-like vehicles (pulling several trailers). We further show that a transformation from chained form to path coordinates allows the resulting lattice to be bent along any C1 continuous path. We exploit this fact by shaping it along the skeleton of arbitrary structured environments, such as the center of road lanes and corridors. In our experiments in both structured (i.e. on-road) and unstructured (i.e. parking lot) scenarios, we successfully demonstrate for the first time the applicability of lattice-based planning approaches to search queries in arbitrary environments.", "title": "" } ]
[ { "docid": "a8af37df01ad45139589e82bd81deb61", "text": "As technology use continues to rise, especially among young individuals, there are concerns that excessive use of technology may impact academic performance. Researchers have started to investigate the possible negative effects of technology use on college academic performance, but results have been mixed. The following study seeks to expand upon previous studies by exploring the relationship among the use of a wide variety of technology forms and an objective measure of academic performance (GPA) using a 7-day time diary data collection method. The current study also seeks to examine both underclassmen and upperclassmen to see if these groups differ in how they use technology. Upperclassmen spent significantly more time using technology for academic and workrelated purposes, whereas underclassmen spent significantly more time using cell phones, online chatting, and social networking sites. Significant negative correlations with GPA emerged for television, online gaming, adult site, and total technology use categories. Keyword: Technology use, academic performance, post-secondary education.", "title": "" }, { "docid": "a435814e2af70acf985068a17f23845b", "text": "Dropout is a simple yet effective algorithm for regularizing neural networks by randomly dropping out units through Bernoulli multiplicative noise, and for some restricted problem classes, such as linear or logistic regression, several theoretical studies have demonstrated the equivalence between dropout and a fully deterministic optimization problem with data-dependent Tikhonov regularization. This work presents a theoretical analysis of dropout for matrix factorization, where Bernoulli random variables are used to drop a factor, thereby attempting to control the size of the factorization. While recent work has demonstrated the empirical effectiveness of dropout for matrix factorization, a theoretical understanding of the regularization properties of dropout in this context remains elusive. This work demonstrates the equivalence between dropout and a fully deterministic model for matrix factorization in which the factors are regularized by the sum of the product of the norms of the columns. While the resulting regularizer is closely related to a variational form of the nuclear norm, suggesting that dropout may limit the size of the factorization, we show that it is possible to trivially lower the objective value by doubling the size of the factorization. We show that this problem is caused by the use of a fixed dropout rate, which motivates the use of a rate that increases with the size of the factorization. Synthetic experiments validate our theoretical findings.", "title": "" }, { "docid": "6f1b0c25402fb28f6cdba558751451ca", "text": "The problem of nonnegative blind source separation (NBSS) is addressed in this paper, where both the sources and the mixing matrix are nonnegative. Because many real-world signals are sparse, we deal with NBSS by sparse component analysis. First, a determinant-based sparseness measure, named D-measure, is introduced to gauge the temporal and spatial sparseness of signals. Based on this measure, a new NBSS model is derived, and an iterative sparseness maximization (ISM) approach is proposed to solve this model. In the ISM approach, the NBSS problem can be cast into row-to-row optimizations with respect to the unmixing matrix, and then the quadratic programming (QP) technique is used to optimize each row. 
Furthermore, we analyze the source identifiability and the computational complexity of the proposed ISM-QP method. The new method requires relatively weak conditions on the sources and the mixing matrix, has high computational efficiency, and is easy to implement. Simulation results demonstrate the effectiveness of our method.", "title": "" }, { "docid": "fb31665935c1a0964e70c864af8ff46f", "text": "In the context of object and scene recognition, state-of-the-art performances are obtained with visual Bag-of-Words (BoW) models of mid-level representations computed from dense sampled local descriptors (e.g., Scale-Invariant Feature Transform (SIFT)). Several methods to combine low-level features and to set mid-level parameters have been evaluated recently for image classification. In this chapter, we study in detail the different components of the BoW model in the context of image classification. Particularly, we focus on the coding and pooling steps and investigate the impact of the main parameters of the BoW pipeline. We show that an adequate combination of several low (sampling rate, multiscale) and mid-level (codebook size, normalization) parameters is decisive to reach good performances. Based on this analysis, we propose a merging scheme that exploits the specificities of edge-based descriptors. Low and high contrast regions are pooled separately and combined to provide a powerful representation of images. We study the impact on classification performance of the contrast threshold that determines whether a SIFT descriptor corresponds to a low contrast region or a high contrast region. Successful experiments are provided on the Caltech-101 and Scene-15 datasets. M. T. Law (B) · N. Thome · M. Cord LIP6, UPMC—Sorbonne University, Paris, France e-mail: Marc.Law@lip6.fr N. Thome e-mail: Nicolas.Thome@lip6.fr M. Cord e-mail: Matthieu.Cord@lip6.fr B. Ionescu et al. (eds.), Fusion in Computer Vision, Advances in Computer 29 Vision and Pattern Recognition, DOI: 10.1007/978-3-319-05696-8_2, © Springer International Publishing Switzerland 2014", "title": "" }, { "docid": "a1ed4c514380fb0d7b7083fb1cee520d", "text": "We show two important findings on the use of deep convolutional neural networks (CNN) in medical image analysis. First, we show that CNN models that are pre-trained using computer vision databases (e.g., Imagenet) are useful in medical image applications, despite the significant differences in image appearance. Second, we show that multiview classification is possible without the pre-registration of the input images. Rather, we use the high-level features produced by the CNNs trained in each view separately. Focusing on the classification of mammograms using craniocaudal (CC) and mediolateral oblique (MLO) views and their respective mass and micro-calcification segmentations of the same breast, we initially train a separate CNN model for each view and each segmentation map using an Imagenet pre-trained model. Then, using the features learned from each segmentation map and unregistered views, we train a final CNN classifier that estimates the patient’s risk of developing breast cancer using the Breast Imaging-Reporting and Data System (BI-RADS) score. We test our methodology in two publicly available datasets (InBreast and DDSM), containing hundreds of cases, and show that it produces a volume under ROC surface of over 0.9 and an area under ROC curve (for a 2-class problem benign and malignant) of over 0.9. 
In general, our approach shows state-of-the-art classification results and demonstrates a new comprehensive way of addressing this challenging classification problem.", "title": "" }, { "docid": "425cf4dceac465543820e2ff212e90df", "text": "Auto-enucleation is a sign of untreated psychosis. We describe two patients who presented with attempted auto-enucleation while being incarcerated. This is an observation two-case series of two young men who suffered untreated psychosis while being incarcerated. These young men showed severe self-inflicted ocular trauma during episodes of untreated psychosis. Injuries included orbital bone fracture and dehiscence of the lateral rectus in one patient and severe retinal hemorrhage and partial optic nerve avulsion in the second patient. Auto-enucleation is a severe symptom of untreated psychosis. This urgent finding can occur in a jail setting in which psychiatric care may be minimal.", "title": "" }, { "docid": "619a699d6e848ff692a581dc40a86a10", "text": "Intelligent Transportation System (ITS) is a significant part of smart city, and short-term traffic flow prediction plays an important role in intelligent transportation management and route guidance. A number of models and algorithms based on time series prediction and machine learning were applied to short-term traffic flow prediction and achieved good results. However, most of the models require the length of the input historical data to be predefined and static, which cannot automatically determine the optimal time lags. To overcome this shortage, a model called Long Short-Term Memory Recurrent Neural Network (LSTM RNN) is proposed in this paper, which takes advantages of the three multiplicative units in the memory block to determine the optimal time lags dynamically. The dataset from Caltrans Performance Measurement System (PeMS) is used for building the model and comparing LSTM RNN with several well-known models, such as random walk(RW), support vector machine(SVM), single layer feed forward neural network(FFNN) and stacked autoencoder(SAE). The results show that the proposed prediction model achieves higher accuracy and generalizes well.", "title": "" }, { "docid": "d3fd8c1ce41892f54aedff187f4872c2", "text": "In the first year of the TREC Micro Blog track, our participation has focused on building from scratch an IR system based on the Whoosh IR library. Though the design of our system (CipCipPy) is pretty standard it includes three ad-hoc solutions for the track: (i) a dedicated indexing function for hashtags that automatically recognizes the distinct words composing an hashtag, (ii) expansion of tweets based on the title of any referred Web page, and (iii) a tweet ranking function that ranks tweets in results by their content quality, which is compared against a reference corpus of Reuters news. In this preliminary paper we describe all the components of our system, and the efficacy scored by our runs. The CipCipPy system is available under a GPL license.", "title": "" }, { "docid": "a4f5fcd7aab7d1d48f462f680336c905", "text": "The authors experienced a case with ocular ischemia with hypotony following injection of a dermal filler for augmentation rhinoplasty. Immediately after injection, the patient demonstrated a permanent visual loss with typical fundus features of central retinal artery occlusion. Multiple crusted ulcerative patches around the nose and left periorbit developed, and the left eye became severely inflamed, ophthalmoplegic, and hypotonic. 
Signs of anterior and posterior segment ischemia were observed, including severe corneal edema, iris atrophy, and chorioretinal swelling. The retrograde arterial embolization of hyaluronic acid gel from vascular branches of the nasal tip to the central retinal artery and long posterior ciliary artery was highly suspected. After 6 months of follow-up, skin lesions and eyeball movement became normalized, but progressive exudative and tractional retinal detachment was causing phthisis bulbi.", "title": "" }, { "docid": "ff59e8662a2bde7d5b7bc76e6f310b16", "text": "This study examines the expectations that workers have regarding enterprise social media (ESM). Using interviews with 58 employees at an organization implementing an ESM platform, we compare workers’ views of the technology with those of existing workplace communication technologies and publicly available social media. We find individuals’ frames regarding expectations and assumptions of social media are established through activities outside work settings and influence employees’ views about the usefulness of ESM. Differences in technological frames regarding ESM were related to workers’ age and level of personal social media use, but in directions contrary to expectations expressed in the literature. Findings emphasize how interpretations of technology may shift over time and across contexts in unique ways for different individuals.", "title": "" }, { "docid": "59d194764511b1ad2ce0ca5d858fab21", "text": "Humanoid robot path finding is one of the core technologies in the robot research domain. This paper presents an approach to finding a path for robot motion by fusing images taken by the NAO's camera and proximity information delivered by sonar sensors. The NAO robot takes an image of its surroundings, uses the fuzzy color extractor to segment its potential path colors, and selects a fitting line as the path by the least squares method. Therefore, the NAO robot is able to perform automatic navigation according to the selected path. As a result, experiments are conducted to navigate the NAO robot to walk to a given destination and to grasp a box. In addition, the NAO robot uses its sonar sensors to detect a barrier and helps pick up the box with its hands.", "title": "" }, { "docid": "8ef7838ec34920af4e73f85c221d47b7", "text": "Cluster analysis is an important tool in many scientific disciplines, and many clustering methods are available (see e.g. Everitt (1974) or Jain and Dubes (1988)). A single clustering method or algorithm cannot solve all the possible clustering problems, hence the proliferation of many techniques. Most clustering methods are plagued with the problem of noisy data, i.e., characterization of good clusters amongst noisy data. In some cases, even a few noisy points or outliers affect the outcome of the method by severely biasing the algorithm. The noise that is just due to the statistical distribution of the measuring instrument is usually of no concern. On the other hand, the completely arbitrary noise points that just do not belong to the pattern or class being searched for are of real concern. A good example of that is in image processing, where one is searching for certain shapes, for instance, amongst all the edge elements detected. An approach that is frequently recommended (for example, Jain and Dubes (1988)) is where one tries to identify such data and removes it before application of the clustering algorithms. In many cases, however, that may not be possible or it may be extremely difficult. 
In this paper, a class of algorithms based on square-error clustering (a sub-class of partitional clustering) is considered. The performance of algorithms of this kind is highly susceptible to outliers or noisy points. The K-means type algorithm is one example where each point in the data-set must be assigned to one of the clusters. Because of this requirement, even the noise points have to be allotted to one of the good clusters, and that would deteriorate the performance of the algorithm. One approach to solve this problem is as proposed by Jolion and Rosenfeld (1989), where each data point is given a weight proportional to the density of data points in its vicinity, thus assigning higher weights to the points belonging to the clusters, while assigning lower weights to the noise or background points. Thus the approach results in preprocessing of the data in order to reduce the bias due to noise background. The", "title": "" }, { "docid": "a0b5183ad30c21b3085da64ee108ed06", "text": "This paper discusses design and control of a prismatic series elastic actuator with high mechanical power output in a small and lightweight form factor. We introduce a design that pushes the performance boundary of electric series elastic actuators by using high motor voltage coupled with an efficient drivetrain to enable large continuous actuator force while retaining speed. Compact size is achieved through the use of a novel piston-style ball screw support mechanism and a concentrically placed compliant element. We develop controllers for force and position tracking based on combinations of PID, model-based, and disturbance observer control structures. Finally, we demonstrate our actuator's performance with a series of experiments designed to operate the actuator at the limits of its mechanical and control capability.", "title": "" }, { "docid": "b0def34ea13c4b561a54bd71c8c9ec96", "text": "This paper describes an online gait trajectory generation algorithm, a controller for walking, a brief introduction of the humanoid robot platform KHR-3 (KAIST Humanoid Robot-3: HUBO), and experimental results. The gait trajectory has continuity, smoothness in varying walking period and stride, and a simple mathematical form which can be implemented easily. It is tested on the robot with some control algorithms. The gait trajectory algorithm is composed of two kinds of function trajectory. The first one is a cycloid function, which is used for the ankle position in Cartesian coordinate space. Because this profile is made by the superposition of linear and sinusoidal functions, it has the property of slow start, fast motion, and slow stop. This characteristic can reduce the burden on the actuator at instantaneous high-speed motion. The second one is a 3rd-order polynomial function. It is continuous in the defined time interval, easy to use when the boundary condition is well defined, and has standard values of coefficients when the time scale is normalized. Position and velocity values are used for its boundary condition. Controllers mainly use the F/T (Force/Torque) sensor at the ankle of the robot as sensor data, and modify the input position profiles (in joint angle space and Cartesian coordinate space). They are to reduce unexpected external forces such as landing shock, and vibration induced by compliances of the sensors and reduction gears, because these can seriously affect walking stability. 
This trajectory and control algorithm is now on the implementing stage for the free-walking realization of KHR-3. As a first stage of realization, we realized the marking time and forward walking algorithm with variable frequency and stride", "title": "" }, { "docid": "f31669e97fc655e74e8bb8324031060b", "text": "Being an emerging paradigm for display advertising, RealTime Bidding (RTB) drives the focus of the bidding strategy from context to users’ interest by computing a bid for each impression in real time. The data mining work and particularly the bidding strategy development becomes crucial in this performance-driven business. However, researchers in computational advertising area have been suffering from lack of publicly available benchmark datasets, which are essential to compare different algorithms and systems. Fortunately, a leading Chinese advertising technology company iPinYou decided to release the dataset used in its global RTB algorithm competition in 2013. The dataset includes logs of ad auctions, bids, impressions, clicks, and final conversions. These logs reflect the market environment as well as form a complete path of users’ responses from advertisers’ perspective. This dataset directly supports the experiments of some important research problems such as bid optimisation and CTR estimation. To the best of our knowledge, this is the first publicly available dataset on RTB display advertising. Thus, they are valuable for reproducible research and understanding the whole RTB ecosystem. In this paper, we first provide the detailed statistical analysis of this dataset. Then we introduce the research problem of bid optimisation in RTB and the simple yet comprehensive evaluation protocol. Besides, a series of benchmark experiments are also conducted, including both click-through rate (CTR) estimation and bid optimisation.", "title": "" }, { "docid": "45940a48b86645041726120fb066a1fa", "text": "For large state-space Markovian Decision Problems MonteCarlo planning is one of the few viable approaches to find near-optimal solutions. In this paper we introduce a new algorithm, UCT, that applies bandit ideas to guide Monte-Carlo planning. In finite-horizon or discounted MDPs the algorithm is shown to be consistent and finite sample bounds are derived on the estimation error due to sampling. Experimental results show that in several domains, UCT is significantly more efficient than its alternatives.", "title": "" }, { "docid": "5975b9bc4086561262d458e48b384172", "text": "Convolutional Neural Networks (CNNs) currently achieve state-of-the-art accuracy in image classification. With a growing number of classes, the accuracy usually drops as the possibilities of confusion increase. Interestingly, the class confusion patterns follow a hierarchical structure over the classes. We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation with CNN-internal data. We found that this hierarchy not only dictates the confusion patterns between the classes, it furthermore dictates the learning behavior of CNNs. In particular, the early layers in these networks develop feature detectors that can separate high-level groups of classes quite well, even after a few training epochs. In contrast, the latter layers require substantially more epochs to develop specialized feature detectors that can separate individual classes. 
We demonstrate how these insights are key to significant improvement in accuracy by designing hierarchy-aware CNNs that accelerate model convergence and alleviate overfitting. We further demonstrate how our methods help in identifying various quality issues in the training data.", "title": "" }, { "docid": "8e23ef656b501814fc44c609feebe823", "text": "This paper proposes an approach for segmentation and semantic labeling of RGBD data based on the joint usage of geometrical clues and deep learning techniques. An initial oversegmentation is performed using spectral clustering and a set of NURBS surfaces is then fitted on the extracted segments. The input data are then fed to a Convolutional Neural Network (CNN) together with surface fitting parameters. The network is made of nine convolutional stages followed by a softmax classifier and produces a per-pixel descriptor vector for each sample. An iterative merging procedure is then used to recombine the segments into the regions corresponding to the various objects and surfaces. The couples of adjacent segments with higher similarity according to the CNN features are considered for merging and the NURBS surface fitting accuracy is used in order to understand if the selected couples correspond to a single surface. By combining the obtained segmentation with the descriptors from the CNN a set of labeled segments is obtained. The comparison with state-of-the-art methods shows how the proposed method provides an accurate and reliable scene segmentation and labeling.", "title": "" }, { "docid": "bf305e88c6f2878c424eca1223a02a8d", "text": "The first plausible scheme of fully homomorphic encryption (FHE), introduced by Gentry in 2009, was considered a major breakthrough in the field of information security. FHE allows the evaluation of arbitrary functions directly on encrypted data on untrusted servers. However, previous implementations of FHE on general-purpose processors had very long latency, which makes it impractical for cloud computing. The most computationally intensive components in the Gentry-Halevi FHE primitives are the large-number modular multiplications and additions. In this paper, we attempt to use customized circuits to speedup the large number multiplication. Strassen's algorithm is employed in the design of an efficient, high-speed large-number multiplier. In particular, we propose an architecture design of an 768K-bit multiplier. As a key compoment, an 64K-point finite-field fast Fourier transform (FFT) processor is designed and prototyped on the Stratix-V FPGA. At 100 MHz, the FPGA implementation is about twice as fast as the same FFT algorithm executed on the NVIDA C2050 GPU which has 448 cores running at 1.15 GHz but at much lower power consumption.", "title": "" }, { "docid": "a41dfbce4138a8422bc7ddfac830e557", "text": "This paper is the second part in a series that provides a comprehensive survey of the problems and techniques of tracking maneuvering targets in the absence of the so-called measurement-origin uncertainty. It surveys motion models of ballistic targets used for target tracking. Models for all three phases (i.e., boost, coast, and reentry) of motion are covered.", "title": "" } ]
scidocsrr
a57d62a7e1eab77506440bedd7651e99
Generating Consistent Land Surface Temperature and Emissivity Products Between ASTER and MODIS Data for Earth Science Research
[ { "docid": "8085eb4cf8a5e9eb6f506c475b4500ba", "text": "The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) scanner on NASA’s Earth Observing System (EOS)-AM1 satellite (launch scheduled for 1998) will collect five bands of thermal infrared (TIR) data with a noise equivalent temperature difference ( NE T ) of 0.3 K to estimate surface temperatures and emissivity spectra, especially over land, where emissivities are not known in advance. Temperature/emissivity separation (TES) is difficult because there are five measurements but six unknowns. Various approaches have been used to constrain the extra degree of freedom. ASTER’s TES algorithm hybridizes three established algorithms, first estimating the normalized emissivities and then calculating emissivity band ratios. An empirical relationship predicts the minimum emissivity from the spectral contrast of the ratioed values, permitting recovery of the emissivity spectrum. TES uses an iterative approach to remove reflected sky irradiance. Based on numerical simulation, TES should be able to recover temperatures within about 1.5 K and emissivities within about 0.015. Validation using airborne simulator images taken over playas and ponds in central Nevada demonstrates that, with proper atmospheric compensation, it is possible to meet the theoretical expectations. The main sources of uncertainty in the output temperature and emissivity images are the empirical relationship between emissivity values and spectral contrast, compensation for reflected sky irradiance, and ASTER’s precision, calibration, and atmospheric compensation.", "title": "" } ]
[ { "docid": "9faec965b145160ee7f74b80a6c2d291", "text": "Several skin substitutes are available that can be used in the management of hand burns; some are intended as temporary covers to expedite healing of shallow burns and others are intended to be used in the surgical management of deep burns. An understanding of skin biology and the relative benefits of each product are needed to determine the optimal role of these products in hand burn management.", "title": "" }, { "docid": "2c9f7053d9bcd6bc421b133dd7e62d08", "text": "Recurrent neural networks (RNN) combined with attention mechanism has proved to be useful for various NLP tasks including machine translation, sequence labeling and syntactic parsing. The attention mechanism is usually applied by estimating the weights (or importance) of inputs and taking the weighted sum of inputs as derived features. Although such features have demonstrated their effectiveness, they may fail to capture the sequence information due to the simple weighted sum being used to produce them. The order of the words does matter to the meaning or the structure of the sentences, especially for syntactic parsing, which aims to recover the structure from a sequence of words. In this study, we propose an RNN-based attention to capture the relevant and sequence-preserved features from a sentence, and use the derived features to perform the dependency parsing. We evaluated the graph-based and transition-based parsing models enhanced with the RNN-based sequence-preserved attention on the both English PTB and Chinese CTB datasets. The experimental results show that the enhanced systems were improved with significant increase in parsing accuracy.", "title": "" }, { "docid": "6cba2e960c0c4f3999ce400d93e42bac", "text": "Phylodiversity measures summarise the phylogenetic diversity patterns of groups of organisms. By using branches of the tree of life, rather than its tips (e.g., species), phylodiversity measures provide important additional information about biodiversity that can improve conservation policy and outcomes. As a biodiverse nation with a strong legislative and policy framework, Australia provides an opportunity to use phylogenetic information to inform conservation decision-making. We explored the application of phylodiversity measures across Australia with a focus on two highly biodiverse regions, the south west of Western Australia (SWWA) and the South East Queensland bioregion (SEQ). We analysed seven diverse groups of organisms spanning five separate phyla on the evolutionary tree of life, the plant genera Acacia and Daviesia, mammals, hylid frogs, myobatrachid frogs, passerine birds, and camaenid land snails. We measured species richness, weighted species endemism (WE) and two phylodiversity measures, phylogenetic diversity (PD) and phylogenetic endemism (PE), as well as their respective complementarity scores (a measure of gains and losses) at 20 km resolution. Higher PD was identified within SEQ for all fauna groups, whereas more PD was found in SWWA for both plant groups. PD and PD complementarity were strongly correlated with species richness and species complementarity for most groups but less so for plants. PD and PE were found to complement traditional species-based measures for all groups studied: PD and PE follow similar spatial patterns to richness and WE, but highlighted different areas that would not be identified by conventional species-based biodiversity analyses alone. 
The application of phylodiversity measures, particularly the novel weighted complementary measures considered here, in conservation can enhance protection of the evolutionary history that contributes to present day biodiversity values of areas. Phylogenetic measures in conservation can include important elements of biodiversity in conservation planning, such as evolutionary potential and feature diversity that will improve decision-making and lead to better biodiversity conservation outcomes.", "title": "" }, { "docid": "061ac4487fba7837f44293a2d20b8dd9", "text": "This paper describes a model of cooperative behavior and describes how such a model can be applied in a natural language understanding system. We assume that agents attempt to recognize the plans of other agents and, then, use this plan when deciding what response to make. In particular, we show that, given a setting in which purposeful dialogues occur, this model can account for responses that provide more information that explicitly requested and for appropriate responses to both short sentence fragments and indirect speech acts.", "title": "" }, { "docid": "1f3e600ce5be2a55234c11e19e11cb67", "text": "In this paper, we propose a noise robust speech recognition system built using generalized distillation framework. It is assumed that during training, in addition to the training data, some kind of ”privileged” information is available and can be used to guide the training process. This allows to obtain a system which at test time outperforms those built on regular training data alone. In the case of noisy speech recognition task, the privileged information is obtained from a model, called ”teacher”, trained on clean speech only. The regular model, called ”student”, is trained on noisy utterances and uses teacher’s output for the corresponding clean utterances. Thus, for this framework a parallel clean/noisy speech data are required. We experimented on the Aurora2 database which provides such kind of data. Our system uses hybrid DNN-HMM acoustic model where neural networks provide HMM state probabilities during decoding. The teacher DNN is trained on the clean data, while the student DNN is trained using multi-condition (various SNRs) data. The student DNN loss function combines the targets obtained from forced alignment of the training data and the outputs of the teacher DNN when fed with the corresponding clean features. Experimental results clearly show that distillation framework is effective and allows to achieve significant reduction in the word error rate.", "title": "" }, { "docid": "79453a45e1376e1d4cd08002b5e61ac0", "text": "Appropriate selection of learning algorithms is essential for the success of data mining. Meta-learning is one approach to achieve this objective by identifying a mapping from data characteristics to algorithm performance. Appropriate data characterization is, thus, of vital importance for the meta-learning. To this effect, a variety of data characterization techniques, based on three strategies including simple measure, statistical measure and information theory based measure, have been developed, however, the quality of them is still needed to be improved. This paper presents new measures to characterise datasets for meta-learning based on the idea to capture the characteristics from the structural shape and size of the decision tree induced from the dataset. 
Their effectiveness is illustrated by comparison with the results obtained by classical data characterization techniques, including DCT, which is the most widely used technique in meta-learning, and Landmarking, which is the most recently developed method and produced better performance compared to DCT.", "title": "" }, { "docid": "8971e1e9bc14663c8ae50d2640140f33", "text": "Designing for reflection is becoming of increasing interest to HCI researchers, especially as digital technologies move to supporting broader professional and quality of life issues. However, the term 'reflection' is being used and designed for in diverse ways and often with little reference to the vast amount of literature on the topic outside of HCI. Here we synthesize this literature into a framework, consisting of aspects such as purposes of reflection, conditions for reflection and levels of reflection (where the levels capture the behaviours and activities associated with reflection). We then show how technologies can support these different aspects and conclude with open questions that can guide a more systematic approach to how we understand and design for support of reflection.", "title": "" }, { "docid": "94a59f1c20a6476035a00d86c222a08b", "text": "Lateral transshipments within an inventory system are stock movements between locations of the same echelon. These transshipments can be conducted periodically at predetermined points in time to proactively redistribute stock, or they can be used reactively as a method of meeting demand which cannot be satisfied from stock on hand. The elements of an inventory system considered, e.g. size, cost structures and service level definition, all influence the best method of transshipping. Models of many different systems have been considered. This paper provides a literature review which categorizes the research to date on lateral transshipments, so that these differences can be understood and gaps within the literature can be identified.", "title": "" }, { "docid": "ff705a36e71e2aa898e99fbcfc9ec9d2", "text": "This paper presents a design concept for a smart home automation system based on the idea of the internet of things (IoT) technology. The proposed system has two scenarios, where the first one is a wireless-based scenario and the second is a wire-line-based scenario. Each scenario has two operational modes for manual and automatic use. In the case of the wireless scenario, an Arduino-Uno single-board microcontroller is applied as a central controller for home appliances. A cellular phone with a Matlab-GUI platform for monitoring and controlling processes through Wi-Fi communication technology is addressed. For the wire-line scenario, a field-programmable gate array (FPGA) kit is used as the main controller. Simulation and hardware realization for the proposed system show its reliability and effectiveness.", "title": "" }, { "docid": "d86633f3add015ffc7de96cb4a6e3802", "text": "Summary • Animator and model checker for the B method • Model- and constraint-based checker • ProB finds correct values for operation arguments • ProB enables the user to uncover errors in specifications", "title": "" }, { "docid": "7190e8e6f6c061bed8589719b7d59e0d", "text": "Image-level feature descriptors obtained from convolutional neural networks have shown powerful representation capabilities for image retrieval. In this paper, we present an unsupervised method to aggregate deep convolutional features into compact yet discriminative image vectors by simulating the dynamics of heat diffusion. 
A distinctive problem in image retrieval is that repetitive or bursty features tend to dominate feature representations, leading to less than ideal matches. We show that by considering each deep feature as a heat source, our unsupervised aggregation method is able to avoiding over-representation of bursty features. We additionally provide a practical solution for the proposed aggregation method, and further show the efficiency of our method in experimental evaluation. Finally, we extensively evaluate the proposed approach with pre-trained and fine-tuned deep networks on common public benchmarks, and show superior performance compared to previous work. Image retrieval has always been an attractive research topic in the field of computer vision. By allowing users to search similar images from a large database of digital images, it provides a natural and flexible interface for image archiving and browsing. Convolutional Neural Networks (CNNs) have shown remarkable accuracy in tasks such as image classification, and object detection. Recent research has also shown positive results of using CNNs on image retrieval (Babenko and Lempitsky 2015; Kalantidis, Mellina, and Osindero 2016; Hoang et al. 2017). However, unlike image classification approaches which often use global feature vectors produced by fully connected layers, these methods extract local features depicting image patches from the outputs of convolutional layers and aggregate these features into compact (a few hundred dimensions) image-level descriptors. Once meaningful and representative image-level descriptors are defined, visually similar images are retrieved by computing similarities between pre-computed database feature representations and query representations. In this paper we devise a method to avoid overrepresenting bursty features. Inspired by an observation of similar phenomena in textual data, Jegou et al. (Jégou, Douze, and Schmid 2009) identified burstiness as the phenomenon by which overly repetitive features within an instance tend to dominate the instance feature representation. In order to alleviate this issue, we propose a feature aggregation approach that emulates the dynamics of heat diffusion. The idea is to model feature maps as a heat system where we weight highly the features leading to low system temperatures. This is because that these features are less connected to other features, and therefore they are more distinctive. The dynamics of the temperature in such system can be estimated using the partial differential equation induced by the heat equation. Heat diffusion, and more specifically anisotropic diffusion, has been used successfully in various image processing and computer vision tasks. Ranging from the classical work of Perona and Malik (Perona and Malik 1990) to further applications in image smoothing, image regularization, image co-segmentation, and optical flow estimation (Zhang, Zheng, and Cai 2010; Tschumperle and Deriche 2005; Kim et al. 2011; Bruhn, Weickert, and Schnörr 2005). However, to our knowledge, it has not been applied to weight features from the outputs of a deep convolutional neural network. We show that by combining this classical image processing technique with a deep learning model, we are able to obtain significant gains against previous work. 
Our contributions can be summarized as follows: • By greedily considering each deep feature as a heat source and enforcing the temperature of the system be a constant within each heat source, we propose a novel efficient feature weighting approach to reduce the undesirable influence of bursty features. • We provide a practical solution to computing weights for our feature weighting method. Additionally, we conduct extensive quantitative evaluations on commonly used image retrieval benchmarks, and demonstrate substantial performance improvement over existing unsupervised methods for feature aggregation.", "title": "" }, { "docid": "04c367bfe113af139c30e167f393acec", "text": "A novel planar magic-T using an E-plane substrate integrate waveguide (SIW) power divider and a SIW-slotline transition is proposed in this letter. Due to the metal ground between the two input/output ports, the E-plane SIW power divider has a 180° reverse phase characteristic. A SIW-slotline transition is utilized to realize the H-plane input/output port of the magic-T. Good agreement between the measured and simulated results indicate that the planar magic-T has a fractional bandwidth (FBW) of 18% (13.2-15.8 GHz), and the amplitude and phase imbalances are less than 0.24 dB and 1.5°, respectively.", "title": "" }, { "docid": "f5cb684cfff16812bafd83286a51b71f", "text": "OBJECTIVES\nTo assess the factors, motivations, and nonacademic influences that affected the choice of major among pharmacy and nonpharmacy undergraduate students.\n\n\nMETHODS\nA survey was administered to 618 pharmacy and nonpharmacy majors to assess background and motivational factors that may have influenced their choice of major. The sample consisted of freshman and sophomore students enrolled in a required speech course.\n\n\nRESULTS\nAfrican-American and Hispanic students were less likely to choose pharmacy as a major than Caucasians, whereas Asian-Americans were more likely to choose pharmacy as a major. Pharmacy students were more likely to be interested in science and math than nonpharmacy students.\n\n\nCONCLUSION\nStudents' self-reported racial/ethnic backgrounds influence their decision of whether to choose pharmacy as their academic major. Results of this survey provide further insight into developing effective recruiting strategies and enhancing the marketing efforts of academic institutions.", "title": "" }, { "docid": "b6c85badcc58249dffbbd3cebf2edd75", "text": "INTRODUCTION\nWith the continued expansion of robotically assisted procedures, general surgery residents continue to receive more exposure to this new technology as part of their training. There are currently no guidelines or standardized training requirements for robot-assisted procedures during general surgical residency. The aim of this study was to assess the effect of this new technology on general surgery training from the residents' perspective.\n\n\nMETHODS\nAn anonymous, national, web-based survey was conducted on residents enrolled in general surgery training in 2013. The survey was sent to 240 Accreditation Council for Graduate Medical Education-approved general surgery training programs.\n\n\nRESULTS\nOverall, 64% of the responding residents were men and had an average age of 29 years. Half of the responses were from postgraduate year 1 (PGY1) and PGY2 residents, and the remainder was from the PGY3 level and above. Overall, 50% of the responses were from university training programs, 32% from university-affiliated programs, and 18% from community-based programs. 
More than 96% of residents noted the availability of the surgical robot system at their training institution. Overall, 63% of residents indicated that they had participated in robotic surgical cases. Most responded that they had assisted in 10 or fewer robotic cases with the most frequent activities being assisting with robotic trocar placement and docking and undocking the robot. Only 18% reported experience with operating the robotic console. More senior residents (PGY3 and above) were involved in robotic cases compared with junior residents (78% vs 48%, p < 0.001). Overall, 60% of residents indicated that they received no prior education or training before their first robotic case. Approximately 64% of residents reported that formal training in robotic surgery was important in residency training and 46% of residents indicated that robotic-assisted cases interfered with resident learning. Only 11% felt that robotic-assisted cases would replace conventional laparoscopic surgery in the future.\n\n\nCONCLUSIONS\nThis study illustrates that although the most residents have a robot at their institution and have participated in robotic surgery cases, very few residents received formal training before participating in a robotic case.", "title": "" }, { "docid": "4b96679173c825db7bc334449b6c4b83", "text": "This article provides the first survey of computational models of emotion in reinforcement learning (RL) agents. The survey focuses on agent/robot emotions, and mostly ignores human user emotions. Emotions are recognized as functional in decision-making by influencing motivation and action selection. Therefore, computational emotion models are usually grounded in the agent’s decision making architecture, of which RL is an important subclass. Studying emotions in RL-based agents is useful for three research fields. For machine learning (ML) researchers, emotion models may improve learning efficiency. For the interactive ML and human–robot interaction community, emotions can communicate state and enhance user investment. Lastly, it allows affective modelling researchers to investigate their emotion theories in a successful AI agent class. This survey provides background on emotion theory and RL. It systematically addresses (1) from what underlying dimensions (e.g. homeostasis, appraisal) emotions can be derived and how these can be modelled in RL-agents, (2) what types of emotions have been derived from these dimensions, and (3) how these emotions may either influence the learning efficiency of the agent or be useful as social signals. We also systematically compare evaluation criteria, and draw connections to important RL sub-domains like (intrinsic) motivation and model-based RL. In short, this survey provides both a practical overview for engineers wanting to implement emotions in their RL agents, and identifies challenges and directions for future emotion-RL research.", "title": "" }, { "docid": "0e679dfd2ff8ced7c1391486d4329253", "text": "A significant portion of information needs in web search target entities. These may come in different forms or flavours, ranging from short keyword queries to more verbose requests, expressed in natural language. We address the task of automatically annotating queries with target types from an ontology. The identified types can subsequently be used, e.g., for creating semantically more informed query and retrieval models, filtering results, or directing the requests to specific verticals. Our study makes the following contributions. 
First, we formalise the task of hierarchical target type identification, argue that it is best viewed as a ranking problem, and propose multiple evaluation metrics. Second, we develop a purpose-built test collection by hand-annotating over 300 queries, from various recent entity search benchmarking campaigns, with target types from the DBpedia ontology. Finally, we introduce and examine two baseline models, inspired by federated search techniques. We show that these methods perform surprisingly well when target types are limited to a flat list of top level categories; finding the right level of granularity in the hierarchy, however, is particularly challenging and requires further investigation.", "title": "" }, { "docid": "2e0f71364c4733c90d463579916f122c", "text": "The history of HCI is briefly reviewed together with three HCI models and structures, including CSCW, CSCL and CSCR. It is shown that a number of authorities consider HCI to be a fragmented discipline with no agreed set of unifying design principles. An analysis of usability criteria based upon citation frequency of authors is performed in order to discover the eight most recognised HCI principles.", "title": "" }, { "docid": "8c6622b02eb7e4e11ec684d860456056", "text": "It is the purpose of this viewpoint article to delineate the regulatory network of growth hormone (GH), insulin, and insulin-like growth factor-1 (IGF-1) signalling during puberty, associated hormonal changes in adrenal and gonadal androgen metabolism, and the impact of dietary factors and smoking involved in the pathogenesis of acne. The key regulator IGF-1 rises during puberty by the action of increased GH secretion and correlates well with the clinical course of acne. In acne patients, associations between serum levels of IGF-1, dehydroepiandrosterone sulphate, dihydrotestosterone, acne lesion counts and facial sebum secretion rate have been reported. IGF-1 stimulates 5alpha-reductase, adrenal and gonadal androgen synthesis, androgen receptor signal transduction, sebocyte proliferation and lipogenesis. Milk consumption results in a significant increase in insulin and IGF-1 serum levels comparable with high glycaemic food. Insulin induces hepatic IGF-1 secretion, and both hormones amplify the stimulatory effect of GH on sebocytes and augment mitogenic downstream signalling pathways of insulin receptors, IGF-1 receptor and fibroblast growth factor receptor-2b. Acne is proposed to be an IGF-1-mediated disease, modified by diets and smoking increasing insulin/IGF1-signalling. Metformin treatment, and diets low in milk protein content and glycaemic index reduce increased IGF-1 signalling. Persistent acne in adulthood with high IGF-1 levels may be considered as an indicator for increased risk of cancer, which may require appropriate dietary intervention as well as treatment with insulin-sensitizing agents.", "title": "" }, { "docid": "8b3557219674c8441e63e9b0ab459c29", "text": "This paper is focused on the comparison of various decision tree classification algorithms using the WEKA tool. Data mining techniques such as classification, clustering, association and neural networks solve a large number of problems. These are all open source tools; we communicate with each tool directly or through Java code. In this paper we discuss the classification technique of data mining. In classification, various techniques are present, such as Bayes, functions, lazy, rules and tree classifiers. The decision tree is one of the most frequently used classification algorithms. 
Decision tree classification with the Waikato Environment for Knowledge Analysis (WEKA) is the simplest way to mine information from a huge database. This work shows the process of converting files for WEKA analysis, the step-by-step process of WEKA execution, the selection of attributes to be mined, and a comparison with Knowledge Extraction of Evolutionary Learning. I took the database [1] and executed it in the WEKA software. The conclusion of the paper shows the comparison among all types of decision tree algorithms using the WEKA tool.", "title": "" }, { "docid": "b76af76207fa3ef07e8f2fbe6436dca0", "text": "Face recognition applications for airport security and surveillance can benefit from the collaborative coupling of mobile and cloud computing as they become widely available today. This paper discusses our work with the design and implementation of face recognition applications using our mobile-cloudlet-cloud architecture named MOCHA and its initial performance results. The challenge lies with how to perform task partitioning from mobile devices to the cloud and distribute the compute load among cloud servers (cloudlet) to minimize the response time given diverse communication latencies and server compute powers. Our preliminary simulation results show that optimal task partitioning algorithms significantly affect response time with heterogeneous latencies and compute powers. Motivated by these results, we design, implement, and validate the basic functionalities of MOCHA as a proof-of-concept, and develop algorithms that minimize the overall response time for face recognition. Our experimental results demonstrate that high-powered cloudlets are technically feasible and indeed help reduce overall processing time when face recognition applications run on mobile devices using the cloud as the backend servers.", "title": "" } ]
scidocsrr
59cc199bb0ea8754535d4f11829e3c84
Progger: An Efficient, Tamper-Evident Kernel-Space Logger for Cloud Data Provenance Tracking
[ { "docid": "c6bc52a8fc4e9e99d1c3165934b82352", "text": "Audit logs are an important part of any secure system, and they need to be carefully designed in order to give a faithful representation of past system activity. This is especially true in the presence of adversaries who might want to tamper with the audit logs. While it is important that auditors can inspect audit logs to assess past system activity, the content of an audit log may contain sensitive information, and should therefore be protected from unauthorized", "title": "" } ]
[ { "docid": "d977a769528fc2ffd9b622a1a1e9f0d4", "text": "This chapter is to provide a tutorial and pointers to results and related work on timed automata with a focus on semantical and algorithmic aspects of verification tools. We present the concrete and abstract semantics of timed automata (based on transition rules, regions and zones), decision problems, and algorithms for verification. A detailed description on DBM (Difference Bound Matrices) is included, which is the central data structure behind several verification tools for timed systems. As an example, we give a brief introduction to the tool UPPAAL.", "title": "" }, { "docid": "aca08ddd20ac74311b24ae0e74019e46", "text": "This paper presents a system architecture for load management in smart buildings which enables autonomous demand side load management in the smart grid. Being of a layered structure composed of three main modules for admission control, load balancing, and demand response management, this architecture can encapsulate the system functionality, assure the interoperability between various components, allow the integration of different energy sources, and ease maintenance and upgrading. Hence it is capable of handling autonomous energy consumption management for systems with heterogeneous dynamics in multiple time-scales and allows seamless integration of diverse techniques for online operation control, optimal scheduling, and dynamic pricing. The design of a home energy manager based on this architecture is illustrated and the simulation results with Matlab/Simulink confirm the viability and efficiency of the proposed framework.", "title": "" }, { "docid": "9de7a85c931319e7ab9db5cfb0c5eee8", "text": "versus concrete thinking Whereas in individualistic cultures brands are made by adding values or abstract personality traits to products, members of collectivistic cultures are more interested in concrete product features than in abstract brands because they are less used to conceptual thinking. for members of collectivistic cultures where context and situation are important, the brand concept is too abstract to be discussed the way members of individualistic cultures do. The Reader’s Digest Trusted Brands survey in 2002 asked people in 18 different countries in europe about the probability of buying unknown brands. The responses ‘extremely/quite likely to consider buying a brand which I’ve heard of but haven’t tried before’ correlated significantly with individualism (r = 0.82***).2 Instead of adding abstract personal characteristics to the product, in collectivistic cultures the brand is linked to concrete persons, in Japan called talents (Praet 2001). Whereas American companies have developed product brands with unique characteristics, Japanese companies have generally emphasised the corporate brand. In essence, this means inspiring trust among consumers in a company and so persuading them to buy its products. As a result, Japanese and Korean companies, in their television advertisements, display corporate identity logos more frequently than do uS and German companies (Souiden et al. 2006). The unfamiliarity with abstract brand associations leads to variation when measuring brand equity of global brands across cultures. An important element of brand equity is consumer equity, which is measured in 2 for correlation analysis, the Pearson product-moment correlation coefficient is used. Correlation analysis is one-tailed. Significance levels are indicated by *p < 0.05, **p < 0.01 and ***p < 0.005. 
Regression analysis is stepwise. The coefficient of determination or R2 is the indicator of the percentage of variance explained.", "title": "" }, { "docid": "2e9f6ac770ddeb9bbc50d9c55b4131f9", "text": "The IEEE 802.15.4 standard for Low Power Wireless Personal Area Networks (LoWPANs) is emerging as a promising technology to bring the envisioned ubiquitous paradigm into realization. Considerable efforts are being carried out to integrate LoWPANs with other wired and wireless IP networks, in order to make use of the pervasive nature and existing infrastructure associated with IP technologies. Designing a security solution becomes a challenging task as this involves threats from the wireless domain of resource-constrained devices as well as from the extremely mature IP domain. In this paper we have i) identified security threats and requirements for LoWPANs, ii) analyzed current security solutions and identified their shortcomings, iii) proposed a generic security framework that can be modified according to application requirements to provide the desired level of security. We have also given an example implementation scenario of our proposed framework for resource- and security-critical applications.", "title": "" }, { "docid": "2a3f37db1663c926be1effd5c1061d0a", "text": "The Intrusion Detection System (IDS) generates huge amounts of alerts that are mostly false positives. The abundance of false positive alerts makes it difficult for the security analyst to identify successful attacks and to take remedial actions. Such alerts have not been classified in accordance with their degree of threat. They further need to be processed to ascertain the most serious alerts and the time of the reaction response. They may take a long time and considerable space to discuss thoroughly. Each IDS generates a huge amount of alerts where most of them are real while the others are not (i.e., false alerts) or are redundant alerts. The false alerts create a serious problem for intrusion detection systems. Alerts are defined based on source/destination IP and source/destination ports. However, one cannot know which of those IPs/ports bring a threat to the network. The IDSs’ alerts are not classified depending on their degree of threat. It is difficult for the security analyst to identify attacks and take remedial action for this threat. So it is necessary to assist in categorizing the degree of the threat by using data mining techniques. The proposed framework is IDS Alert Reduction and Assessment Based on Data Mining (ARADMF). The proposed framework contains three systems: the traffic data retrieval and collection mechanism system, the reduction IDS alert processes system, and the threat score process of IDS alert system. The traffic data retrieval and collection mechanism system develops a mechanism to save IDS alerts, extract the standard features as the intrusion detection message exchange format, and save them in a DB file (CSV-type). It contains the Intrusion Detection Message Exchange Format (IDMEF), which works as alert procurement, and field reduction is used as data standardization to make the format of the alert as standard as possible. As for the Feature Extraction (FE) system, it is designed to extract the features of the alert by using an information gain algorithm, which gives a rank for every feature to facilitate the selection of the feature with the highest rank. 
The main function of the reduction IDS alert processes system is to remove duplicate IDS alerts and reduce the amount of false alerts based on a new aggregation algorithm. It consists of three phases. The first phase removes redundant alerts. The second phase reduces false alerts based on a threshold time value, and the last phase reduces false alerts based on rules with a threshold common vulnerabilities and exposures value. The threat score process of IDS alert system is characterized by using a proposed adaptive Apriori algorithm, which has been modified to work with multiple features, i.e., items, and automated classification of alerts according to their threat scores. The expected result of this proposal will be a decrease in the number of false positive alerts at an expected rate of 90% and an increase in the level of accuracy compared with other approaches. The reasons behind using ARADMF are to reduce the false IDS alerts and to assess them to examine the threat score of IDS alerts, which will be an effort to increase the efficiency and accuracy of network security.", "title": "" }, { "docid": "5ce2a7346327e263afe3af2f28b4ba43", "text": "Currently there are no internationally accepted methodologies to evaluate and compare the performance of land administration systems. This is partly because land administration systems are in constant reform, and probably more importantly, they represent societies' different perceptions of land. This paper describes the development of a framework to measure and compare the performance of land administration systems. The research is of particular relevance since it develops a management model which links the operational aspects of land administration with land policy.", "title": "" }, { "docid": "9f58c2c2a9675d868abb4e0a5a299def", "text": "This paper presents the design of new high-frequency transformer-isolated bidirectional dc-dc converter modules connected in input-series-output-parallel (ISOP) for a 20-kVA solid-state transformer. The ISOP modular structure enables the use of low-voltage MOSFETs, featuring low on-state resistance and hence low conduction losses, to address medium-voltage input. A phase-shift dual-half-bridge (DHB) converter is employed to achieve high-frequency galvanic isolation, bidirectional power flow, and zero voltage switching (ZVS) of all switching devices, which leads to low switching losses even with high-frequency operation. Furthermore, an adaptive inductor is proposed as the main energy transfer element of a phase-shift DHB converter so that the circulating energy can be optimized to maintain ZVS at light load and minimize the conduction losses at heavy load as well. As a result, high efficiency over a wide load range and high power density can be achieved. In addition, the current stress of switching devices can be reduced. A planar transformer adopting printed-circuit-board windings arranged in an interleaved structure is designed to obtain low core and winding loss, solid isolation, and identical parameters in multiple modules. Moreover, the modular structure along with a distributed control provides plug-and-play capability and possible high-level fault tolerance. The experimental results on 1 kW DHB converter modules switching at 50 kHz are presented to validate the theoretical analysis.", "title": "" }, { "docid": "59ddabc255d07fe6b8fb13082c8dd62d", "text": "Mambo is a full-system simulator for modeling PowerPC-based systems. It provides building blocks for creating simulators that range from purely functional to timing-accurate. 
Functional versions support fast emulation of individual PowerPC instructions and the devices necessary for executing operating systems. Timing-accurate versions add the ability to account for device timing delays, and support the modeling of the PowerPC processor microarchitecture. We describe our experience in implementing the simulator and its uses within IBM to model future systems, support early software development, and design new system software.", "title": "" }, { "docid": "3d11b4b645a32ff0d269fc299e7cf646", "text": "The static one-to-one binding of hosts to IP addresses allows adversaries to conduct thorough reconnaissance in order to discover and enumerate network assets. Specifically, this fixed address mapping allows distributed network scanners to aggregate information gathered at multiple locations over different times in order to construct an accurate and persistent view of the network. The unvarying nature of this view enables adversaries to collaboratively share and reuse their collected reconnaissance information in various stages of attack planning and execution. This paper presents a novel moving target defense (MTD) technique which enables host-to-IP binding of each destination host to vary randomly across the network based on the source identity (spatial randomization) as well as time (temporal randomization). This spatio-temporal randomization will distort attackers' view of the network by causing the collected reconnaissance information to expire as adversaries transition from one host to another or if they stay long enough in one location. Consequently, adversaries are forced to re-scan the network frequently at each location or over different time intervals. These recurring probings significantly raises the bar for the adversaries by slowing down the attack progress, while improving its detectability. We introduce three novel metrics for quantifying the effectiveness of MTD defense techniques: deterrence, deception, and detectability. Using these metrics, we perform rigorous theoretical and experimental analysis to evaluate the efficacy of this approach. These analyses show that our approach is effective in countering a significant number of sophisticated threat models including collaborative reconnaissance, worm propagation, and advanced persistent threat (APT), in an evasion-free manner.", "title": "" }, { "docid": "4c82ba56d6532ddc57c2a2978de7fe5a", "text": "This paper presents a Model Reference Adaptive System (MRAS) based speed sensorless estimation of vector controlled Induction Motor Drive. MRAS based techniques are one of the best methods to estimate the rotor speed due to its performance and straightforward stability approach. Depending on the type of tuning signal driving the adaptation mechanism, MRAS estimators are classified into rotor flux based MRAS, back e.m.f based MRAS, reactive power based MRAS and artificial neural network based MRAS. In this paper, the performance of the rotor flux based MRAS for estimating the rotor speed was studied. Overview on the IM mathematical model is briefly summarized to establish a physical basis for the sensorless scheme used. Further, the theoretical basis of indirect field oriented vector control is explained in detail and it is implemented in MATLAB/SIMULINK.", "title": "" }, { "docid": "4a7a4db8497b0d13c8411100dab1b207", "text": "A novel and simple resolver-to-dc converter is presented. 
It is shown that by appropriate processing of the sine and cosine resolver signals, the proposed converter may produce an output voltage proportional to the shaft angle. A dedicated compensation method is applied to produce an almost perfectly linear output. This enables determination of the angle with reasonable accuracy without a processor and/or a look-up table. The tests carried out under various operating conditions are satisfactory and in good agreement with theory. This paper gives the theoretical analysis, the computer simulation, the full circuit details, and experimental results of the proposed scheme.", "title": "" }, { "docid": "37de72b0e9064d09fb6901b40d695c0a", "text": "BACKGROUND AND OBJECTIVES\nVery little is known about the use of probiotics among pregnant women with gestational diabetes mellitus (GDM) especially its effect on oxidative stress and inflammatory indices. The aim of present study was to measure the effect of a probiotic supplement capsule on inflammation and oxidative stress biomarkers in women with newly-diagnosed GDM.\n\n\nMETHODS AND STUDY DESIGN\n64 pregnant women with GDM were enrolled in a double-blind placebo controlled randomized clinical trial in the spring and summer of 2014. They were randomly assigned to receive either a probiotic containing four bacterial strains of Lactobacillus acidophilus LA-5, Bifidobacterium BB-12, Streptococcus Thermophilus STY-31 and Lactobacillus delbrueckii bulgaricus LBY-27 or placebo capsule for 8 consecutive weeks. Blood samples were taken pre- and post-treatment and serum indices of inflammation and oxidative stress were assayed. The measured mean response scales were then analyzed using mixed effects model. All statistical analysis was performed using Statistical Package for Social Sciences (SPSS) software (version 16).\n\n\nRESULTS\nSerum high-sensitivity C-reactive protein and tumor necrosis factor-α levels improved in the probiotic group to a statistically significant level over the placebo group. Serum interleukin-6 levels decreased in both groups after intervention; however, neither within group nor between group differences interleukin-6 serum levels was statistically significant. Malondialdehyde, glutathione reductase and erythrocyte glutathione peroxidase levels improved significantly with the use of probiotics when compared with the placebo.\n\n\nCONCLUSIONS\nThe probiotic supplement containing L.acidophilus LA- 5, Bifidobacterium BB- 12, S.thermophilus STY-31 and L.delbrueckii bulgaricus LBY-2 appears to improve several inflammation and oxidative stress biomarkers in women with GDM.", "title": "" }, { "docid": "73e0dce829387fdd5d601977ac530020", "text": "It is widely agreed that there is a need to excite more school students about computing. Considering teachers' views about student engagement is important to securing their support for any solution. We therefore present the results of a qualitative, questionnaire-based study on teachers' perceptions of the best ways to make the subject interesting. From 115 responses by UK computing teachers emerged a range of themes about the issues they felt were most important. We found that whilst their views reflected a range of approaches that are widely promoted in the literature and in national initiatives, there were also disconnects between teachers' views and wider discourses. Based on the results, we give specific recommendations for areas where more should be done to support teachers in making computing interesting to school students. 
Academics should do more to engage with teachers, especially if they wish to introduce deep computing principles in schools. Teachers expressed an interest in computing clubs in schools, but a strong support network for them is still needed. This may be an opportunity for businesses and universities to help support teachers.", "title": "" }, { "docid": "88e7cbdb4704320cd40b2e0b566c0e42", "text": "UNLABELLED\nSince 2009, catfish farming in the southeastern United States has been severely impacted by a highly virulent and clonal population of Aeromonas hydrophila causing motile Aeromonas septicemia (MAS) in catfish. The possible origin of this newly emerged highly virulent A. hydrophila strain is unknown. In this study, we show using whole-genome sequencing and comparative genomics that A. hydrophila isolates from diseased grass carp in China and catfish in the United States have highly similar genomes. Our phylogenomic analyses suggest that U.S. catfish isolates emerged from A. hydrophila populations of Asian origin. Furthermore, we identified an A. hydrophila strain isolated in 2004 from a diseased catfish in Mississippi, prior to the onset of the major epidemic outbreaks in Alabama starting in 2009, with genomic characteristics that are intermediate between those of the Asian and Alabama fish isolates. Investigation of A. hydrophila strain virulence demonstrated that the isolate from the U.S. catfish epidemic is significantly more virulent to both channel catfish and grass carp than is the Chinese carp isolate. This study implicates the importation of fish or fishery products into the United States as the source of highly virulent A. hydrophila that has caused severe epidemic outbreaks in United States-farmed catfish and further demonstrates the potential for invasive animal species to disseminate bacterial pathogens worldwide.\n\n\nIMPORTANCE\nCatfish aquaculture farming in the southeastern United States has been severely affected by the emergence of virulent Aeromonas hydrophila responsible for epidemic disease outbreaks, resulting in the death of over 10 million pounds of catfish. Because the origin of this newly emerged A. hydrophila strain is unknown, this study used a comparative genomics approach to conduct a phylogenomic analysis of A. hydrophila isolates obtained from the United States and Asia. Our results suggest that the virulent isolates from United States-farmed catfish have a recent common ancestor with A. hydrophila isolates from diseased Asian carp. We have also observed that an Asian carp isolate, like recent U.S. catfish isolates, is virulent in catfish. The results from this study suggest that the highly virulent U.S. epidemic isolates emerged from an Asian source and provide another example of the threat that invasive species pose in the dissemination of bacterial pathogens.", "title": "" }, { "docid": "7430fbf8020f31777337738d187e54a4", "text": "Data mining is process of identify the knowledge from large data set. Knowledge discovery from textual database is a process of extracting interested or non retrival pattern from unstructured text document. With rapid growing of information increasing trends in people to extract knowledge from large text document. A text mining frame work contain preprocess on text and techniques used to retrieve information like classification, clustering, summarization, information extraction, and visualization. . 
Several text classification techniques are reviewed in this paper, such as SVM, Naïve Bayes, KNN, association rules, and decision tree classifiers, which categorize text data into predefined classes. In this review paper we study different text mining techniques for extracting relevant information on demand. The goal of the paper is to review and understand different text classification techniques and to find the best one for different perspectives. From the reviews we propose a method that uses the best classification technique to improve the performance of the results and to improve indexing, and we show a comparison of the different classification techniques.", "title": "" },
{ "docid": "e878423a7583e8023fa6affe13a318d6", "text": "A digital clock and data recovery (CDR) is presented, which employs a low supply sensitivity scheme for a digitally controlled oscillator (DCO). A coupling network comprising capacitors, resistors, and coupling buffers enhances the supply variation immunity of the DCO and mitigates the jitter performance degradation. A supply variation-dependent bias generator produces the corresponding bias voltage to alleviate the supply variation with minimal area and power penalty. The proposed scheme improves 29.3 ps of peak-to-peak jitter and 11.5 dB of spur level, at 6 and 5 MHz 50 mVpp sinusoidal supply noise tone, respectively. Fabricated in a 65-nm CMOS process, the proposed CDR operates at 5-Gb/s data rate with BER < 10^-12 for PRBS 31 and consumes 15.4 mW. The CDR occupies an active die area of 0.075 mm2.", "title": "" },
{ "docid": "6dc1a6c032196a748e005ce49d735752", "text": "Network virtualization is a powerful way to run multiple architectures or experiments simultaneously on a shared infrastructure. However, making efficient use of the underlying resources requires effective techniques for virtual network embedding--mapping each virtual network to specific nodes and links in the substrate network. Since the general embedding problem is computationally intractable, past research restricted the problem space to allow efficient solutions, or focused on designing heuristic algorithms. In this paper, we advocate a different approach: rethinking the design of the substrate network to enable simpler embedding algorithms and more efficient use of resources, without restricting the problem space. In particular, we simplify virtual link embedding by: i) allowing the substrate network to split a virtual link over multiple substrate paths and ii) employing path migration to periodically re-optimize the utilization of the substrate network. We also explore node-mapping algorithms that are customized to common classes of virtual-network topologies. Our simulation experiments show that path splitting, path migration, and customized embedding algorithms enable a substrate network to satisfy a much larger mix of virtual networks", "title": "" },
{ "docid": "ee510bbe7c7be6e0fb86a32d9f527be1", "text": "Internet communications with paths that include a satellite link face some peculiar challenges, due to the presence of a long propagation wireless channel. In this paper, we propose a performance enhancing proxy (PEP) solution, called PEPsal, which is, to the best of the authors' knowledge, the first open source TCP splitting solution for the GNU/Linux operating systems. PEPsal improves the performance of a TCP connection over a satellite channel making use of the TCP Hybla, a TCP enhancement for satellite networks developed by the authors. 
The objective of the paper is to present and evaluate the PEPsal architecture, by comparing it with end to end TCP variants (NewReno, SACK, Hybla), considering both performance and reliability issues. Performance is evaluated by making use of a testbed set up at the University of Bologna, to study advanced transport protocols and architectures for Internet satellite communications", "title": "" }, { "docid": "c2e53358f9d78071fc5204624cf9d6ad", "text": "This paper explores how the adoption of mobile and social computing technologies has impacted upon the way in which we coordinate social group-activities. We present a diary study of 36 individuals that provides an overview of how group coordination is currently performed as well as the challenges people face. Our findings highlight that people primarily use open-channel communication tools (e.g., text messaging, phone calls, email) to coordinate because the alternatives are seen as either disrupting or curbing to the natural conversational processes. Yet the use of open-channel tools often results in conversational overload and a significant disparity of work between coordinating individuals. This in turn often leads to a sense of frustration and confusion about coordination details. We discuss how the findings argue for a significant shift in our thinking about the design of coordination support systems.", "title": "" }, { "docid": "488b0adfe43fc4dbd9412d57284fc856", "text": "We describe the results of an experiment in which several conventional programming languages, together with the functional language Haskell, were used to prototype a Naval Surface Warfare Center (NSWC) requirement for a Geometric Region Server. The resulting programs and development metrics were reviewed by a committee chosen by the Navy. The results indicate that the Haskell prototype took significantly less time to develop and was considerably more concise and easier to understand than the corresponding prototypes written in several different imperative languages, including Ada and C++. ∗This work was supported by the Advanced Research Project Agency and the Office of Naval Research under Arpa Order 8888, Contract N00014-92-C-0153.", "title": "" } ]
scidocsrr
d4306bb0059d1418f0cb09241742f867
Enterprise Architecture Management Patterns for Enterprise Architecture Visioning
[ { "docid": "73fdbdbff06b57195cde51ab5135ccbe", "text": "1 Abstract This paper describes five widely-applicable business strategy patterns. The initiate patterns where inspired Michael Porter's work on competitive strategy (1980). By applying the pattern form we are able to explore the strategies and consequences in a fresh light. The patterns form part of a larger endeavour to apply pattern thinking to the business domain. This endeavour seeks to map the business domain in patterns, this involves develop patterns, possibly based on existing literature, and mapping existing patterns into a coherent model of the business domain. If you find the paper interesting you might be interested in some more patterns that are currently (May 2005) in development. These describe in more detail how these strategies can be implemented: This paper is one of the most downloaded pieces on my website. I'd be interested to know more about who is downloading the paper, what use your making of it and any comments you have on it-allan@allankelly.net. Cost Leadership Build an organization that can produce your chosen product more cheaply than anyone else. You can then choose to undercut the opposition (and sell more) or sell at the same price (and make more profit per unit.) Differentiated Product Build a product that fulfils the same functions as your competitors but is clearly different, e.g. it is better quality, novel design, or carries a brand name. Customer will be prepared to pay more for your product than the competition. Market Focus You can't compete directly on cost or differentiation with the market leader; so, focus on a niche in the market. The niche will be smaller than the overall market (so sales will be lower) but the customer requirements will be different, serve these customers requirements better then the mass market and they will buy from you again and again. Sweet Spot Customers don't always want the best or the cheapest, so, produce a product that combines elements of differentiation with reasonable cost so you offer superior value. However, be careful, customer tastes", "title": "" } ]
[ { "docid": "3129b636e3739281ba59721765eeccb9", "text": "Despite the rapid adoption of Facebook as a means of photo sharing, minimal research has been conducted to understand user gratification behind this activity. In order to address this gap, the current study examines users’ gratifications in sharing photos on Facebook by applying Uses and Gratification (U&G) theory. An online survey completed by 368 respondents identified six different gratifications, namely, affection, attention seeking, disclosure, habit, information sharing, and social influence, behind sharing digital photos on Facebook. Some of the study’s prominent findings were: age was in positive correlation with disclosure and social influence gratifications; gender differences were identified among habit and disclosure gratifications; number of photos shared was negatively correlated with habit and information sharing gratifications. The study’s implications can be utilized to refine existing and develop new features and services bridging digital photos and social networking services.", "title": "" }, { "docid": "ae73f7c35c34050b87d8bf2bee81b620", "text": "D esigning a complex Web site so that it readily yields its information is a difficult task. The designer must anticipate the users' needs and structure the site accordingly. Yet users may have vastly differing views of the site's information, their needs may change over time, and their usage patterns may violate the designer's initial expectations. As a result, Web sites are all too often fossils cast in HTML, while user navigation is idiosyncratic and evolving. Understanding user needs requires understanding how users view the data available and how they actually use the site. For a complex site this can be difficult since user tests are expensive and time-consuming, and the site's server logs contain massive amounts of data. We propose a Web management assistant: a system that can process massive amounts of data about site usage Examining the potential use of automated adaptation to improve Web sites for visitors.", "title": "" }, { "docid": "251a47eb1a5307c5eba7372ce09ea641", "text": "A new class of target link flooding attacks (LFA) can cut off the Internet connections of a target area without being detected because they employ legitimate flows to congest selected links. Although new mechanisms for defending against LFA have been proposed, the deployment issues limit their usages since they require modifying routers. In this paper, we propose LinkScope, a novel system that employs both the end-to-end and the hopby-hop network measurement techniques to capture abnormal path performance degradation for detecting LFA and then correlate the performance data and traceroute data to infer the target links or areas. Although the idea is simple, we tackle a number of challenging issues, such as conducting large-scale Internet measurement through noncooperative measurement, assessing the performance on asymmetric Internet paths, and detecting LFA. We have implemented LinkScope with 7174 lines of C codes and the extensive evaluation in a testbed and the Internet show that LinkScope can quickly detect LFA with high accuracy and low false positive rate.", "title": "" }, { "docid": "33cab03ab9773efe22ba07dd461811ef", "text": "This paper describes a real-time feature-based stereo SLAM system that is robust and accurate in a wide variety of conditions –indoors, outdoors, with dynamic objects, changing light conditions, fast robot motions and large-scale loops. 
Our system follows a parallel-tracking-and-mapping strategy: a tracking thread estimates the camera pose at frame rate; and a mapping thread updates a keyframe-based map at a lower frequency. The stereo constraints of our system allow a robust initialization –avoiding the well-known bootstrapping problem in monocular systems– and the recovery of the real scale. Both aspects are essential for its practical use in real robotic systems that interact with the physical world. In this paper we provide the implementation details, an exhaustive evaluation of the system in public datasets and a comparison of most state-of-the-art feature detectors and descriptors on the presented system. For the benefit of the community, its code for ROS (Robot Operating System) has been released.", "title": "" }, { "docid": "815fe60934f0313c56e631d73b998c95", "text": "The scientific credibility of findings from clinical trials can be undermined by a range of problems including missing data, endpoint switching, data dredging, and selective publication. Together, these issues have contributed to systematically distorted perceptions regarding the benefits and risks of treatments. While these issues have been well documented and widely discussed within the profession, legislative intervention has seen limited success. Recently, a method was described for using a blockchain to prove the existence of documents describing pre-specified endpoints in clinical trials. Here, we extend the idea by using smart contracts - code, and data, that resides at a specific address in a blockchain, and whose execution is cryptographically validated by the network - to demonstrate how trust in clinical trials can be enforced and data manipulation eliminated. We show that blockchain smart contracts provide a novel technological solution to the data manipulation problem, by acting as trusted administrators and providing an immutable record of trial history.", "title": "" }, { "docid": "0a340a2dc4d9a6acd90d3bedad07f84a", "text": "BACKGROUND\nKhat (Catha edulis) contains a psychoactive substance, cathinone, which produces central nervous system stimulation analogous to amphetamine. It is believed that khat chewing has a negative impact on the physical and mental health of individuals as well as the socioeconomic condition of the family and the society at large. There is lack of community based studies regarding the link between khat use and poor mental health. The objective of this study was to evaluate the association between khat use and mental distress and to determine the prevalence of mental distress and khat use in Jimma City.\n\n\nMETHODS\nA cross-sectional community-based study was conducted in Jimma City from October 15 to November 15, 2009. The study used a structured questionnaire and Self Reporting Questionnaire-20 designed by WHO and which has been translated into Amharic and validated in Ethiopia. By multi stage sampling, 1200 individuals were included in the study. Data analysis was done using SPSS for window version 13.\n\n\nRESULTS\nThe Khat use prevalence was found to be 37.8% during the study period. Majority of the khat users were males (73.5%), age group 18-24 (41.1%), Muslims (46.6%), Oromo Ethnic group (47.2%), single (51.4%), high school students (46.8%) and employed (80%). Using cut-off point 7 out of 20 on the Self Reporting Questionnaire-20, 25.8% of the study population was found to have mental distress. 
Males (26.6%), persons older than 55 years (36.4%), Orthodox Christians (28.4%), Kefficho Ethnic groups (36.4%), widowed (44.8%), illiterates (43.8%) and farmers (40.0%) had higher rates of mental distress. We found that mental distress and khat use have significant association (34.7% Vs 20.5%, P<0.001). There was also significant association between mental distress and frequency of khat use (41% Vs 31.1%, P<0.001)\n\n\nCONCLUSION\nThe high rate of khat use among the young persons calls for public intervention to prevent more serious forms of substance use disorders. Our findings suggest that persons who use khat suffer from higher rates of mental distress. However, causal association could not be established due to cross-sectional study design.", "title": "" }, { "docid": "e914a66fc4c5b35e3fd24427ffdcbd96", "text": "This paper proposes two control algorithms for a sensorless speed control of a PMSM. One is a new low pass filter. This filter is designed to have the variable cutoff frequency according to the rotor speed. And the phase delay angle is so small as to be ignored not only in the low speed region but also in the high speed region including the field weakening region. Sensorless control of a PMSM can be guaranteed without any delay angle by using the proposed low pass filter. The other is a new iterative sliding mode observer (I-SMO). Generally the sliding mode observer (SMO) has the attractive features of the robustness to disturbances, and parameter variations. In the high speed region the switching gain of SMO must be large enough to operate the sliding mode stably. But the estimated currents and back EMF can not help having much ripple or chattering components especially in the high speed region including the flux weakening region. Using I-SMO can reduce chattering components of the estimated currents and back EMF in all speed regions without any help of the expensive hardware such as the high performance DSP and A/D converter. Experimental results show the usefulness of the proposed two algorithms for the sensorless drive system of a PMSM.", "title": "" }, { "docid": "70a94ef8bf6750cdb4603b34f0f1f005", "text": "What does this paper demonstrate. We show that a very simple 2D architecture (in the sense that it does not make any assumption or reasoning about the 3D information of the object) generally used for object classification, if properly adapted to the specific task, can provide top performance also for pose estimation. More specifically, we demonstrate how a 1-vs-all classification framework based on a Fisher Vector (FV) [1] pyramid or convolutional neural network (CNN) based features [2] can be used for pose estimation. In addition, suppressing neighboring viewpoints during training seems key to get good results.", "title": "" }, { "docid": "1bb694f68643eaf70e09ce086a77ea34", "text": "If you get the printed book in on-line book store, you may also find the same problem. So, you must move store to store and search for the available there. But, it will not happen here. The book that we will offer right here is the soft file concept. This is what make you can easily find and get this information security principles and practice by reading this site. We offer you the best product, always and always.", "title": "" }, { "docid": "d4a96cc393a3f1ca3bca94a57e07941e", "text": "With the increasing number of scientific publications, research paper recommendation has become increasingly important for scientists. 
Most researchers rely on keyword-based search or following citations in other papers, in order to find relevant research articles. And usually they spend a lot of time without getting satisfactory results. This study aims to propose a personalized research paper recommendation system, that facilitate this task by recommending papers based on users' explicit and implicit feedback. The users will be allowed to explicitly specify the papers of interest. In addition, user activities (e.g., viewing abstracts or full-texts) will be analyzed in order to enhance users' profiles. Most of the current research paper recommendation and information retrieval systems use the classical bag-of-words methods, which don't consider the context of the words and the semantic similarity between the articles. This study will use Recurrent Neural Networks (RNNs) to discover continuous and latent semantic features of the papers, in order to improve the recommendation quality. The proposed approach utilizes PubMed so far, since it is frequently used by physicians and scientists, but it can easily incorporate other datasets in the future.", "title": "" }, { "docid": "619165e7f74baf2a09271da789e724df", "text": "MOST verbal communication occurs in contexts where the listener can see the speaker as well as hear him. However, speech perception is normally regarded as a purely auditory process. The study reported here demonstrates a previously unrecognised influence of vision upon speech perception. It stems from an observation that, on being shown a film of a young woman's talking head, in which repeated utterances of the syllable [ba] had been dubbed on to lip movements for [ga], normal adults reported hearing [da]. With the reverse dubbing process, a majority reported hearing [bagba] or [gaba]. When these subjects listened to the soundtrack from the film, without visual input, or when they watched untreated film, they reported the syllables accurately as repetitions of [ba] or [ga]. Subsequent replications confirm the reliability of these findings; they have important implications for the understanding of speech perception.", "title": "" }, { "docid": "2e3f05ee44b276b51c1b449e4a62af94", "text": "We make some simple extensions to the Active Shape Model of Cootes et al. [4], and use it to locate features in frontal views of upright faces. We show on independent test data that with the extensions the Active Shape Model compares favorably with more sophisticated methods. The extensions are (i) fitting more landmarks than are actually needed (ii) selectively using twoinstead of one-dimensional landmark templates (iii) adding noise to the training set (iv) relaxing the shape model where advantageous (v) trimming covariance matrices by setting most entries to zero, and (vi) stacking two Active Shape Models in series.", "title": "" }, { "docid": "04384b62c17f9ff323db4d51bea86fe9", "text": "Imbalanced data widely exist in many high-impact applications. An example is in air traffic control, where among all three types of accident causes, historical accident reports with ‘personnel issues’ are much more than the other two types (‘aircraft issues’ and ‘environmental issues’) combined. Thus, the resulting data set of accident reports is highly imbalanced. On the other hand, this data set can be naturally modeled as a network, with each node representing an accident report, and each edge indicating the similarity of a pair of accident reports. 
Up until now, most existing work on imbalanced data analysis focused on the classification setting, and very little is devoted to learning the node representations for imbalanced networks. To bridge this gap, in this paper, we first propose Vertex-Diminished Random Walk (VDRW) for imbalanced network analysis. It is significantly different from the existing Vertex Reinforced Random Walk by discouraging the random particle to return to the nodes that have already been visited. This design is particularly suitable for imbalanced networks as the random particle is more likely to visit the nodes from the same class, which is a desired property for learning node representations. Furthermore, based on VDRW, we propose a semi-supervised network representation learning framework named ImVerde for imbalanced networks, where context sampling uses VDRW and the limited label information to create node-context pairs, and balanced-batch sampling adopts a simple under-sampling method to balance these pairs from different classes. Experimental results demonstrate that ImVerde based on VDRW outperforms stateof-the-art algorithms for learning network representations from imbalanced data.", "title": "" }, { "docid": "8c658d7663f9849a0759160886fc5690", "text": "The design and fabrication of a 76.5 GHz, planar, three beam antenna is presented. This antenna has greater than 31 dB of gain and sidelobes that are less than -29 dB below the main beam. This antenna demonstrates the ability to achieve very low sidelobes in a simple, compact, and planar structure. This is accomplished uniquely by feeding waveguide slots that are coupled to microstrip radiating elements. This illumination technique allows for a very low loss and highly efficient structure. Also, a novel beam-scanning concept is introduced. To orient a beam from bore sight it requires phase differences between the excitations of the successive elements. This is achieved by varying the width of the W-band waveguide. This simple, beam steering two-dimensional structure offers the advantage of easy manufacturing compared to present lens and alternative technologies.", "title": "" }, { "docid": "eb861eed8718e227fc2615bb6fcf0841", "text": "Immediate effects of verb-specific syntactic (subcategorization) information were found in a cross-modal naming experiment, a self-paced reading experiment, and an experiment in which eye movements were monitored. In the reading studies, syntactic misanalysis effects in sentence complements (e.g., \"The student forgot the solution was...\") occurred at the verb in the complement (e.g., was) for matrix verbs typically used with noun phrase complements but not for verbs typically used with sentence complements. In addition, a complementizer effect for sentence-complement-biased verbs was not due to syntactic misanalysis but was correlated with how strongly a particular verb prefers to be followed by the complementizer that. The results support models that make immediate use of lexically specific constraints, especially constraint-based models, but are problematic for lexical filtering models.", "title": "" }, { "docid": "16932e01fdea801f28ec6c4194f70352", "text": "Plum pox virus (PPV) causes the most economically-devastating viral disease in Prunus species. Unfortunately, few natural resistance genes are available for the control of PPV. Recessive resistance to some potyviruses is associated with mutations of eukaryotic translation initiation factor 4E (eIF4E) or its isoform eIF(iso)4E. 
In this study, we used an RNA silencing approach to manipulate the expression of eIF4E and eIF(iso)4E towards the development of PPV resistance in Prunus species. The eIF4E and eIF(iso)4E genes were cloned from plum (Prunus domestica L.). The sequence identity between plum eIF4E and eIF(iso)4E coding sequences is 60.4% at the nucleotide level and 52.1% at the amino acid level. Quantitative real-time RT-PCR analysis showed that these two genes have a similar expression pattern in different tissues. Transgenes allowing the production of hairpin RNAs of plum eIF4E or eIF(iso)4E were introduced into plum via Agrobacterium-mediated transformation. Gene expression analysis confirmed specific reduced expression of eIF4E or eIF(iso)4E in the transgenic lines and this was associated with the accumulation of siRNAs. Transgenic plants were challenged with PPV-D strain and resistance was evaluated by measuring the concentration of viral RNA. Eighty-two percent of the eIF(iso)4E silenced transgenic plants were resistant to PPV, while eIF4E silenced transgenic plants did not show PPV resistance. Physical interaction between PPV-VPg and plum eIF(iso)4E was confirmed. In contrast, no PPV-VPg/eIF4E interaction was observed. These results indicate that eIF(iso)4E is involved in PPV infection in plum, and that silencing of eIF(iso)4E expression can lead to PPV resistance in Prunus species.", "title": "" }, { "docid": "faea3dad1f13b8c4be3d4d5ffa88dcf1", "text": "Describing the latest advances in the field, Quantitative Risk Management covers the methods for market, credit and operational risk modelling. It places standard industry approaches on a more formal footing and explores key concepts such as loss distributions, risk measures and risk aggregation and allocation principles. The book’s methodology draws on diverse quantitative disciplines, from mathematical finance and statistics to econometrics and actuarial mathematics. A primary theme throughout is the need to satisfactorily address extreme outcomes and the dependence of key risk drivers. Proven in the classroom, the book also covers advanced topics like credit derivatives.", "title": "" }, { "docid": "ae28bc02e9f0891d8338980cd169ada4", "text": "We investigated the possibility of using a machine-learning scheme in conjunction with commercial wearable EEG-devices for translating listener's subjective experience of music into scores that can be used in popular on-demand music streaming services. Our study resulted into two variants, differing in terms of performance and execution time, and hence, subserving distinct applications in online streaming music platforms. The first method, NeuroPicks, is extremely accurate but slower. It is based on the well-established neuroscientific concepts of brainwave frequency bands, activation asymmetry index and cross frequency coupling (CFC). The second method, NeuroPicksVQ, offers prompt predictions of lower credibility and relies on a custom-built version of vector quantization procedure that facilitates a novel parameterization of the music-modulated brainwaves. Beyond the feature engineering step, both methods exploit the inherent efficiency of extreme learning machines (ELMs) so as to translate, in a personalized fashion, the derived patterns into a listener's score. NeuroPicks method may find applications as an integral part of contemporary music recommendation systems, while NeuroPicksVQ can control the selection of music tracks. 
Encouraging experimental results, from a pragmatic use of the systems, are presented.", "title": "" }, { "docid": "4ed47f48df37717148d985ad927b813f", "text": "Given an incorrect value produced during a failed program run (e.g., a wrong output value or a value that causes the program to crash), the backward dynamic slice of the value very frequently captures the faulty code responsible for producing the incorrect value. Although the dynamic slice often contains only a small percentage of the statements executed during the failed program run, the dynamic slice can still be large and thus considerable effort may be required by the programmer to locate the faulty code.In this paper we develop a strategy for pruning the dynamic slice to identify a subset of statements in the dynamic slice that are likely responsible for producing the incorrect value. We observe that some of the statements used in computing the incorrect value may also have been involved in computing correct values (e.g., a value produced by a statement in the dynamic slice of the incorrect value may also have been used in computing a correct output value prior to the incorrect value). For each such executed statement in the dynamic slice, using the value profiles of the executed statements, we compute a confidence value ranging from 0 to 1 - a higher confidence value corresponds to greater likelihood that the execution of the statement produced a correct value. Given a failed run involving execution of a single error, we demonstrate that the pruning of a dynamic slice by excluding only the statements with the confidence value of 1 is highly effective in reducing the size of the dynamic slice while retaining the faulty code in the slice. Our experiments show that the number of distinct statements in a pruned dynamic slice are 1.79 to 190.57 times less than the full dynamic slice. Confidence values also prioritize the statements in the dynamic slice according to the likelihood of them being faulty. We show that examining the statements in the order of increasing confidence values is an effective strategy for reducing the effort of fault location.", "title": "" }, { "docid": "e76b94af2a322cb90114ab51fde86919", "text": "In this paper, we introduce a new 2D modulation scheme referred to as OTFS (Orthogonal Time Frequency & Space) that multiplexes information QAM symbols over new class of carrier waveforms that correspond to localized pulses in a signal representation called the delay-Doppler representation. OTFS constitutes a far reaching generalization of conventional time and frequency modulations such as TDM and FDM and, from a broader perspective, it establishes a conceptual link between Radar and communication. The OTFS waveforms couple with the wireless channel in a way that directly captures the underlying physics, yielding a high-resolution delay-Doppler Radar image of the constituent reflectors. As a result, the time-frequency selective channel is converted into an invariant, separable and orthogonal interaction, where all received QAM symbols experience the same localized impairment and all the delay-Doppler diversity branches are coherently combined. The high resolution delay-Doppler separation of the reflectors enables OTFS to approach channel capacity with optimal performance-complexity tradeoff through linear scaling of spectral efficiency with the MIMO order and robustness to Doppler and multipath channel conditions. 
OTFS is an enabler for realizing the full promise of MUMIMO gains even in challenging 5G deployment settings where adaptation is unrealistic. 1. OTFS – A NEXT GENERATION MODULATION History teaches us that every transition to a new generation of wireless network involves a disruption in the underlying air interface: beginning with the transition from 2G networks based on single carrier GSM to 3G networks based on code division multiplexing (CDMA), then followed by the transition to contemporary 4G networks based on orthogonal frequency division multiplexing (OFDM). The decision to introduce a new air interface is made when the demands of a new generation of use cases cannot be met by legacy technology – in terms of performance, capabilities, or cost. As an example, the demands for higher capacity data services drove the transition from legacy interference-limited CDMA network (that have limited flexibility for adaptation and inferior achievable throughput) to a network based on an orthogonal narrowband OFDM that is optimally fit for opportunistic scheduling and achieves higher spectral efficiency. Emerging 5G networks are required to support diverse usage scenarios, as described for example in [1]. A fundamental requirement is multi-user MIMO, which holds the promise of massive increases in mobile broadband spectral efficiency using large numbers of antenna elements at the base-station in combination with advanced precoding techniques. This promise comes at the cost of very complex architectures that cannot practically achieve capacity using traditional OFDM techniques and suffers performance degradation in the presence of time and frequency selectivity ( [2] and [3]). Other important use cases include operation under non-trivial dynamic channel conditions (for example vehicle-to-vehicle and high-speed rail) where adaptation becomes unrealistic, rendering OFDM narrowband waveforms strictly suboptimal. As a result, one is once again faced with the dilemma of finding a better suited air interface where the new guiding philosophy is: When adaptation is not a possibility one should look for ways to eliminate the need to adapt. The challenge is to do that without sacrificing performance. To meet this challenge one should fuse together two contradictory principles – (1) the principle of spreading (as used in CDMA) to obtain resilience to narrowband interference and to exploit channel diversity gain for increased reliability under unpredictable channel conditions and (2) the principle of orthogonality (as used in OFDM) to simplify the channel coupling for achieving higher spectral densities with a superior performance-complexity tradeoff. OTFS is a modulation scheme that carries information QAM symbols over a new class of waveforms which are spread over both time and frequency while remaining roughly orthogonal to each other under general delay-Doppler channel impairments. The key characteristic of the OTFS waveforms is related to their optimal manner of interaction with the wireless reflectors. This interaction induces a simple and symmetric coupling", "title": "" } ]
scidocsrr
45546f22ce436ca94e598bd1cbae98eb
Noise2Noise: Learning Image Restoration without Clean Data
[ { "docid": "d8a7ab2abff4c2e5bad845a334420fe6", "text": "Tone-mapping operators (TMOs) are designed to generate perceptually similar low-dynamic-range images from high-dynamic-range ones. We studied the performance of 15 TMOs in two psychophysical experiments where observers compared the digitally generated tone-mapped images to their corresponding physical scenes. All experiments were performed in a controlled environment, and the setups were designed to emphasize different image properties: in the first experiment we evaluated the local relationships among intensity levels, and in the second one we evaluated global visual appearance among physical scenes and tone-mapped images, which were presented side by side. We ranked the TMOs according to how well they reproduced the results obtained in the physical scene. Our results show that ranking position clearly depends on the adopted evaluation criteria, which implies that, in general, these tone-mapping algorithms consider either local or global image attributes but rarely both. Regarding the question of which TMO is the best, KimKautz [\"Consistent tone reproduction,\" in Proceedings of Computer Graphics and Imaging (2008)] and Krawczyk [\"Lightness perception in tone reproduction for high dynamic range images,\" in Proceedings of Eurographics (2005), p. 3] obtained the better results across the different experiments. We conclude that more thorough and standardized evaluation criteria are needed to study all the characteristics of TMOs, as there is ample room for improvement in future developments.", "title": "" }, { "docid": "d5e2d1f3662d66f6d4cfc1c98e4de610", "text": "Compressed sensing (CS) enables significant reduction of MR acquisition time with performance guarantee. However, computational complexity of CS is usually expensive. To address this, here we propose a novel deep residual learning algorithm to reconstruct MR images from sparsely sampled k-space data. In particular, based on the observation that coherent aliasing artifacts from downsampled data has topologically simpler structure than the original image data, we formulate a CS problem as a residual regression problem and propose a deep convolutional neural network (CNN) to learn the aliasing artifacts. Experimental results using single channel and multi channel MR data demonstrate that the proposed deep residual learning outperforms the existing CS and parallel imaging algorithms. Moreover, the computational time is faster in several orders of magnitude.", "title": "" }, { "docid": "a5b0bf255205527c699c0cf3f7ee5270", "text": "This paper proposes a deep learning approach for accelerating magnetic resonance imaging (MRI) using a large number of existing high quality MR images as the training datasets. An off-line convolutional neural network is designed and trained to identify the mapping relationship between the MR images obtained from zero-filled and fully-sampled k-space data. The network is not only capable of restoring fine structures and details but is also compatible with online constrained reconstruction methods. Experimental results on real MR data have shown encouraging performance of the proposed method for efficient and accurate imaging.", "title": "" }, { "docid": "0771cd99e6ad19deb30b5c70b5c98183", "text": "We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. 
The enhancement of the sparsity is achieved by grouping similar 2D image fragments (e.g., blocks) into 3D data arrays which we call \"groups.\" Collaborative filtering is a special procedure developed to deal with these 3D groups. We realize it using the three successive steps: 3D transformation of a group, shrinkage of the transform spectrum, and inverse 3D transformation. The result is a 3D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.", "title": "" } ]
[ { "docid": "f2db57e59a2e7a91a0dff36487be3aa4", "text": "In this paper we attempt to answer two questions: (1) Why should we be interested in the security of control systems? And (2) What are the new and fundamentally different requirements and problems for the security of control systems? We also propose a new mathematical framework to analyze attacks against control systems. Within this framework we formulate specific research problems to (1) detect attacks, and (2) survive attacks.", "title": "" }, { "docid": "7abdb102a876d669bdf254f7d91121c1", "text": "OBJECTIVE\nRegular physical activity (PA) is important for maintaining long-term physical, cognitive, and emotional health. However, few older adults engage in routine PA, and even fewer take advantage of programs designed to enhance PA participation. Though most managed Medicare members have free access to the Silver Sneakers and EnhanceFitness PA programs, the vast majority of eligible seniors do not utilize these programs. The goal of this qualitative study was to better understand the barriers to and facilitators of PA and participation in PA programs among older adults.\n\n\nDESIGN\nThis was a qualitative study using focus group interviews.\n\n\nSETTING\nFocus groups took place at three Group Health clinics in King County, Washington.\n\n\nPARTICIPANTS\nFifty-two randomly selected Group Health Medicare members between the ages of 66 to 78 participated.\n\n\nMETHODS\nWe conducted four focus groups with 13 participants each. Focus group discussions were audio-recorded, transcribed, and analyzed using an inductive thematic approach and a social-ecological framework.\n\n\nRESULTS\nMen and women were nearly equally represented among the participants, and the sample was largely white (77%), well-educated (69% college graduates), and relatively physically active. Prominent barriers to PA and PA program participation were physical limitations due to health conditions or aging, lack of professional guidance, and inadequate distribution of information on available and appropriate PA options and programs. Facilitators included the motivation to maintain physical and mental health and access to affordable, convenient, and stimulating PA options.\n\n\nCONCLUSION\nOlder adult populations may benefit from greater support and information from their providers and health care systems on how to safely and successfully improve or maintain PA levels through later adulthood. Efforts among health care systems to boost PA among older adults may need to consider patient-centered adjustments to current PA programs, as well as alternative methods for promoting overall active lifestyle choices.", "title": "" }, { "docid": "8722ef3b845b4d44529bb13673edb5ce", "text": "Graph data mining is highly versatile, as it applies not only to graph data but to relational data, as long as it can be represented as pairs of relationships. However, modeling RDBs as graphs using existing methods is limited in describing semantics of the relational data. In this paper, we propose a two-phased graph-modeling framework that converts any RDB to a directed graph with richer semantics than previously allowed. We implemented the framework and used it for analyzing medical records of diabetes patients.", "title": "" }, { "docid": "cec04871de4a9f209dc7dbdd38a7eabc", "text": "Cascaded regression has been recently applied to reconstruct 3D faces from single 2D images directly in shape space, and has achieved state-of-the-art performance. 
We investigate thoroughly such cascaded regression based 3D face reconstruction approaches from four perspectives that are not well been studied: (1) the impact of the number of 2D landmarks; (2) the impact of the number of 3D vertices; (3) the way of using standalone automated landmark detection methods; (4) the convergence property. To answer these questions, a simplified cascaded regression based 3D face reconstruction method is devised. This can be integrated with standalone automated landmark detection methods and reconstruct 3D face shapes that have the same pose and expression as the input face images, rather than normalized pose and expression. An effective training method is also proposed by disturbing the automatically detected landmarks. Comprehensive evaluation experiments have been carried out to compare to other 3D face reconstruction methods. The results not only deepen the understanding of cascaded regression based 3D face reconstruction approaches, but also prove the effectiveness of the proposed method.", "title": "" }, { "docid": "7cd4efb34472aa2e7f8019c14137bf4e", "text": "In theory, the pose of a calibrated camera can be uniquely determined from a minimum of four coplanar but noncollinear points. In practice, there are many applications of camera pose tracking from planar targets and there is also a number of recent pose estimation algorithms which perform this task in real-time, but all of these algorithms suffer from pose ambiguities. This paper investigates the pose ambiguity for planar targets viewed by a perspective camera. We show that pose ambiguities - two distinct local minima of the according error function - exist even for cases with wide angle lenses and close range targets. We give a comprehensive interpretation of the two minima and derive an analytical solution that locates the second minimum. Based on this solution, we develop a new algorithm for unique and robust pose estimation from a planar target. In the experimental evaluation, this algorithm outperforms four state-of-the-art pose estimation algorithms", "title": "" }, { "docid": "31c0dc8f0a839da9260bb9876f635702", "text": "The application of a recently developed broadband beamformer to distinguish audio signals received from different directions is experimentally tested. The beamformer combines spatial and temporal subsampling using a nested array and multirate techniques which leads to the same region of support in the frequency domain for all subbands. This allows using the same beamformer for all subbands. The experimental set-up is presented and the recorded signals are analyzed. Results indicate that the proposed approach can be used to distinguish plane waves propagating with different direction of arrivals.", "title": "" }, { "docid": "0aa9cf3df59827add0cd5ec7c81515af", "text": "As more and more applications produce streaming data, clustering data streams has become an important technique for data and knowledge engineering. A typical approach is to summarize the data stream in real-time with an online process into a large number of so called micro-clusters. Micro-clusters represent local density estimates by aggregating the information of many data points in a defined area. On demand, a (modified) conventional clustering algorithm is used in a second offline step to recluster the micro-clusters into larger final clusters. For reclustering, the centers of the micro-clusters are used as pseudo points with the density estimates used as their weights. 
However, information about density in the area between micro-clusters is not preserved in the online process and reclustering is based on possibly inaccurate assumptions about the distribution of data within and between micro-clusters (e.g., uniform or Gaussian). This paper describes DBSTREAM, the first micro-cluster-based online clustering component that explicitly captures the density between micro-clusters via a shared density graph. The density information in this graph is then exploited for reclustering based on actual density between adjacent micro-clusters. We discuss the space and time complexity of maintaining the shared density graph. Experiments on a wide range of synthetic and real data sets highlight that using shared density improves clustering quality over other popular data stream clustering methods which require the creation of a larger number of smaller micro-clusters to achieve comparable results.", "title": "" }, { "docid": "acf86ba9f98825a032cebb0a98db4360", "text": "Malware is the root cause of many security threats on the Internet. To cope with the thousands of new malware samples that are discovered every day, security companies and analysts rely on automated tools to extract the runtime behavior of malicious programs. Of course, malware authors are aware of these tools and increasingly try to thwart their analysis techniques. To this end, malware code is often equipped with checks that look for evidence of emulated or virtualized analysis environments. When such evidence is found, the malware program behaves differently or crashes, thus showing a different “personality” than on a real system. Recent work has introduced transparent analysis platforms (such as Ether or Cobra) that make it significantly more difficult for malware programs to detect their presence. Others have proposed techniques to identify and bypass checks introduced by malware authors. Both approaches are often successful in exposing the runtime behavior of malware even when the malicious code attempts to thwart analysis efforts. However, these techniques induce significant performance overhead, especially for fine-grained analysis. Unfortunately, this makes them unsuitable for the analysis of current highvolume malware feeds. In this paper, we present a technique that efficiently detects when a malware program behaves differently in an emulated analysis environment and on an uninstrumented reference host. The basic idea is simple: we just compare the runtime behavior of a sample in our analysis system and on a reference machine. However, obtaining a robust and efficient comparison is very difficult. In particular, our approach consists of recording the interactions of the malware with the operating system in one run and using this information to deterministically replay the program in our analysis environment. Our experiments demonstrate that, by using our approach, one can efficiently detect malware samples that use a variety of techniques to identify emulated analysis environments.", "title": "" }, { "docid": "28037e911859b3cc0221452e82cac3fe", "text": "This paper proposes a real-time DSP- and FPGA-based implementation method of a space vector modulation (SVM) algorithm for an indirect matrix converter (IMC). Therefore, low-cost and compact control platform is built using a 32-bit fixed-point DSP (TMS320F2812) operating at 150 MHz and a SPARTAN 3E FPGA operating at 50 MHz. 
The method consists in using the event-manager modules of the DSP to build specified pulses at its PWM output peripherals, which are fed to the digital input ports of a FPGA. Moreover, a simple logical processing and delay times are thereafter implemented in the FPGA so as to synthesize the suitable gate pulse patterns for the semiconductor-controlled devices. It is shown that the proposed implementation method enables high switching frequency operation with high pulse resolution as well as a negligible propagation time for the generation of the gating pulses. Experimental results from an IMC prototype confirm the practical feasibility of the proposed technique.", "title": "" }, { "docid": "e640e1831db44482e04005657f1a0f43", "text": "Increasing emphasis has been placed on the use of effect size reporting in the analysis of social science data. Nonetheless, the use of effect size reporting remains inconsistent, and interpretation of effect size estimates continues to be confused. Researchers are presented with numerous effect sizes estimate options, not all of which are appropriate for every research question. Clinicians also may have little guidance in the interpretation of effect sizes relevant for clinical practice. The current article provides a primer of effect size estimates for the social sciences. Common effect sizes estimates, their use, and interpretations are presented as a guide for researchers.", "title": "" }, { "docid": "f9d2bfa400dc473e25586316d7c06536", "text": "Novel cloud computing algorithms and techniques are initially evaluated via testbeds, simulators and mathematical models of datacenter infrastructure. However, it can be difficult to perform cross validation of these platforms against realistic scale infrastructures due to the prohibitive costs involved. This paper describes an approach to evaluating a cloud simulator through an empirical study involving a micro datacenter of commodity Raspberry Pi devices. To demonstrate the methodology, we compare performance of real-world workloads on this physical infrastructure against corresponding models of the workloads and infrastructure on the CloudSim simulator. After modelling a Raspberry Pi micro datacenter in CloudSim, we claim that the simulator lacks sufficient accuracy for cloud infrastructure experiments.", "title": "" }, { "docid": "812f7807a3d05aa2a65acff1dd5d87d3", "text": "In this paper we present a novel framework for geolocalizing Unmanned Aerial Vehicles (UAVs) using only their onboard camera. The framework exploits the abundance of satellite imagery, along with established computer vision and deep learning methods, to locate the UAV in a satellite imagery map. It utilizes the contextual information extracted from the scene to attain increased geolocalization accuracy and enable navigation without the use of a Global Positioning System (GPS), which is advantageous in GPS-denied environments and provides additional enhancement to existing GPS-based systems. The framework inputs two images at a time, one captured using a UAV-mounted downlooking camera, and the other synthetically generated from the satellite map based on the UAV location within the map. Local features are extracted and used to register both images, a process that is performed recurrently to relate UAV motion to its actual map position, hence performing preliminary localization. 
A semantic shape matching algorithm is subsequently applied to extract and match meaningful shape information from both images, and use this information to improve localization accuracy. The framework is evaluated on two different datasets representing different geographical regions. Obtained results demonstrate the viability of proposed method and that the utilization of visual information can offer a promising approach for unconstrained UAV navigation and enable the aerial platform to be self-aware of its surroundings thus opening up new application domains or enhancing existing ones.", "title": "" }, { "docid": "815355c0a4322fa15af3a1112e56fc50", "text": "People believe that depth plays an important role in success of deep neural networks (DNN). However, this belief lacks solid theoretical justifications as far as we know. We investigate role of depth from perspective of margin bound. In margin bound, expected error is upper bounded by empirical margin error plus Rademacher Average (RA) based capacity term. First, we derive an upper bound for RA of DNN, and show that it increases with increasing depth. This indicates negative impact of depth on test performance. Second, we show that deeper networks tend to have larger representation power (measured by Betti numbers based complexity) than shallower networks in multi-class setting, and thus can lead to smaller empirical margin error. This implies positive impact of depth. The combination of these two results shows that for DNN with restricted number of hidden units, increasing depth is not always good since there is a tradeoff between positive and negative impacts. These results inspire us to seek alternative ways to achieve positive impact of depth, e.g., imposing margin-based penalty terms to cross entropy loss so as to reduce empirical margin error without increasing depth. Our experiments show that in this way, we achieve significantly better test performance.", "title": "" }, { "docid": "d5e5d79b8a06d4944ee0c3ddcd84ce4c", "text": "Recent years have observed a significant progress in information retrieval and natural language processing with deep learning technologies being successfully applied into almost all of their major tasks. The key to the success of deep learning is its capability of accurately learning distributed representations (vector representations or structured arrangement of them) of natural language expressions such as sentences, and effectively utilizing the representations in the tasks. This tutorial aims at summarizing and introducing the results of recent research on deep learning for information retrieval, in order to stimulate and foster more significant research and development work on the topic in the future.\n The tutorial mainly consists of three parts. In the first part, we introduce the fundamental techniques of deep learning for natural language processing and information retrieval, such as word embedding, recurrent neural networks, and convolutional neural networks. In the second part, we explain how deep learning, particularly representation learning techniques, can be utilized in fundamental NLP and IR problems, including matching, translation, classification, and structured prediction. In the third part, we describe how deep learning can be used in specific application tasks in details. 
The tasks are search, question answering (from either documents, database, or knowledge base), and image retrieval.", "title": "" }, { "docid": "72f307e6209f685442b7b194a28797e1", "text": "It has been argued that creativity evolved, at least in part, through sexual selection to attract mates. Recent research lends support to this view and has also demonstrated a link between certain dimensions of schizotypy, creativity, and short-term mating. The current study delves deeper into these relationships by focusing on engagement in creative activity and employing an expansive set of personality and mental health measures (Five Factor Model, schizotypy, anxiety, and depression). A general tendency to engage in everyday forms of creative activity was related to number of sexual partners within the past year in males only. Furthermore, schizotypy, anxiety, and Neuroticism were all indirectly related to short-term mating success, again for males only. The study provides additional support for predictions made by sexual selection theory that men have a higher drive for creative display, and that creativity is linked with higher short-term mating success. The study also provides support for the contention that certain forms of mental illness may still exist in the gene pool because particular personality traits associated with milder forms of mental illness (i.e., Neuroticism & schizotypy) are also associated directly with creativity and indirectly with short-term mating success.", "title": "" }, { "docid": "ec124d86a68ce9deb9035c059a6b32ed", "text": "Geographic Information System (GIS) is the processes of managing, manipulating, analyzing, updating and presenting metadata according to its geographic location, to be effectively used in different aspects of life [1]. Cloud Computing allowed the utilization of all computing resources and software as required through the web in a virtual computing environment [2], [3]. Different Application software and data are provided at the (virtual) server side to be used. GIS Cloud is the future of Web GIS, with capabilities of easy and fast: collecting, processing, analyzing, updating, rectifying, publishing geospatial data through the internet. It is a web-based- GIS Application to enable all users fast, easy informed decision-making at fair price. It provides the power of desktop GIS on a web-based platform at fair cost anywhere. It offers a JavaScript Application Programming Interface (API) and a REST API, to provide GIS functionality into an application or website that can be hosted by GIS Cloud or by a third-party. In this work we are presenting a GIS-Cloud application in AL-Kamaliah region, small town of Amman suburbs to demonstrate its practicality and functionality.", "title": "" }, { "docid": "77af12d87cd5827f35d92968d1888162", "text": "Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. 
We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.", "title": "" }, { "docid": "0ae5df7af64f0069d691922d391f3c60", "text": "With the realization that more research is needed to explore external factors (e.g., pedagogy, parental involvement in the context of K-12 learning) and internal factors (e.g., prior knowledge, motivation) underlying student-centered mobile learning, the present study conceptually and empirically explores how the theories and methodologies of self-regulated learning (SRL) can help us analyze and understand the processes of mobile learning. The empirical data collected from two elementary science classes in Singapore indicates that the analytical SRL model of mobile learning proposed in this study can illuminate the relationships between three aspects of mobile learning: students’ self-reports of psychological processes, patterns of online learning behavior in the mobile learning environment (MLE), and learning achievement. Statistical analyses produce three main findings. First, student motivation in this case can account for whether and to what degree the students can actively engage in mobile learning activities metacognitively, motivationally, and behaviorally. Second, the effect of students’ self-reported motivation on their learning achievement is mediated by their behavioral engagement in a pre-designed activity in the MLE. Third, students’ perception of parental autonomy support is not only associated with their motivation in school learning, but also associated with their actual behaviors in self-regulating their learning. ! 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "5392e45840929b05b549a64a250774e5", "text": "Faces in natural images are often occluded by a variety of objects. We propose a fully automated, probabilistic and occlusion-aware 3D morphable face model adaptation framework following an analysis-by-synthesis setup. The key idea is to segment the image into regions explained by separate models. Our framework includes a 3D morphable face model, a prototype-based beard model and a simple model for occlusions and background regions. The segmentation and all the model parameters have to be inferred from the single target image. Face model adaptation and segmentation are solved jointly using an expectation–maximization-like procedure. During the E-step, we update the segmentation and in the M-step the face model parameters are updated. For face model adaptation we apply a stochastic sampling strategy based on the Metropolis–Hastings algorithm. For segmentation, we apply loopy belief propagation for inference in a Markov random field. Illumination estimation is critical for occlusion handling. Our combined segmentation and model adaptation needs a proper initialization of the illumination parameters. We propose a RANSAC-based robust illumination estimation technique. By applying this method to a large face image database we obtain a first empirical distribution of real-world illumination conditions. 
The obtained empirical distribution is made publicly available and can be used as prior in probabilistic frameworks, for regularization or to synthesize data for deep learning methods.", "title": "" }, { "docid": "a8f9314a7426df51206a542c9d81896e", "text": "A fast and optimized dehazing algorithm for hazy images and videos is proposed in this work. Based on the observation that a hazy image exhibits low contrast in general, we restore the hazy image by enhancing its contrast. However, the overcompensation of the degraded contrast may truncate pixel values and cause information loss. Therefore, we formulate a cost function that consists of the contrast term and the information loss term. By minimizing the cost function, the proposed algorithm enhances the contrast and preserves the information optimally. Moreover, we extend the static image dehazing algorithm to real-time video dehazing. We reduce flickering artifacts in a dehazed video sequence by making transmission values temporally coherent. Experimental results show that the proposed algorithm effectively removes haze and is sufficiently fast for real-time dehazing applications. 2013 Elsevier Inc. All rights reserved.", "title": "" } ]
scidocsrr
15a898a8d9df0467ca2ea8fc9063a030
How Many Workers to Ask?: Adaptive Exploration for Collecting High Quality Labels
[ { "docid": "904278b251c258d1dac9b652dcd7ee82", "text": "This paper addresses the repeated acquisition of labels for data items when the labeling is imperfect. We examine the improvement (or lack thereof) in data quality via repeated labeling, and focus especially on the improvement of training labels for supervised induction. With the outsourcing of small tasks becoming easier, for example via Rent-A-Coder or Amazon's Mechanical Turk, it often is possible to obtain less-than-expert labeling at low cost. With low-cost labeling, preparing the unlabeled part of the data can become considerably more expensive than labeling. We present repeated-labeling strategies of increasing complexity, and show several main results. (i) Repeated-labeling can improve label quality and model quality, but not always. (ii) When labels are noisy, repeated labeling can be preferable to single labeling even in the traditional setting where labels are not particularly cheap. (iii) As soon as the cost of processing the unlabeled data is not free, even the simple strategy of labeling everything multiple times can give considerable advantage. (iv) Repeatedly labeling a carefully chosen set of points is generally preferable, and we present a robust technique that combines different notions of uncertainty to select data points for which quality should be improved. The bottom line: the results show clearly that when labeling is not perfect, selective acquisition of multiple labels is a strategy that data miners should have in their repertoire; for certain label-quality/cost regimes, the benefit is substantial.", "title": "" }, { "docid": "a009fc320c5a61d8d8df33c19cd6037f", "text": "Over the past decade, crowdsourcing has emerged as a cheap and efficient method of obtaining solutions to simple tasks that are difficult for computers to solve but possible for humans. The popularity and promise of crowdsourcing markets has led to both empirical and theoretical research on the design of algorithms to optimize various aspects of these markets, such as the pricing and assignment of tasks. Much of the existing theoretical work on crowdsourcing markets has focused on problems that fall into the broad category of online decision making; task requesters or the crowdsourcing platform itself make repeated decisions about prices to set, workers to filter out, problems to assign to specific workers, or other things. Often these decisions are complex, requiring algorithms that learn about the distribution of available tasks or workers over time and take into account the strategic (or sometimes irrational) behavior of workers.\n As human computation grows into its own field, the time is ripe to address these challenges in a principled way. However, it appears very difficult to capture all pertinent aspects of crowdsourcing markets in a single coherent model. In this paper, we reflect on the modeling issues that inhibit theoretical research on online decision making for crowdsourcing, and identify some steps forward. This paper grew out of the authors' own frustration with these issues, and we hope it will encourage the community to attempt to understand, debate, and ultimately address them.", "title": "" } ]
[ { "docid": "d974b1ffafd9ad738303514f28a770b9", "text": "We introduce a new algorithm for reinforcement learning called Maximum aposteriori Policy Optimisation (MPO) based on coordinate ascent on a relativeentropy objective. We show that several existing methods can directly be related to our derivation. We develop two off-policy algorithms and demonstrate that they are competitive with the state-of-the-art in deep reinforcement learning. In particular, for continuous control, our method outperforms existing methods with respect to sample efficiency, premature convergence and robustness to hyperparameter settings.", "title": "" }, { "docid": "9afc04ce0ddde03789f4eaa4eab39e09", "text": "In this paper we propose a novel method for recognizing human actions by exploiting a multi-layer representation based on a deep learning based architecture. A first level feature vector is extracted and then a high level representation is obtained by taking advantage of a Deep Belief Network trained using a Restricted Boltzmann Machine. The classification is finally performed by a feed-forward neural network. The main advantage behind the proposed approach lies in the fact that the high level representation is automatically built by the system exploiting the regularities in the dataset; given a suitably large dataset, it can be expected that such a representation can outperform a hand-design description scheme. The proposed approach has been tested on two standard datasets and the achieved results, compared with state of the art algorithms, confirm its effectiveness.", "title": "" }, { "docid": "bdd69c3aabbe9f794d3ea732479b9c64", "text": "Modern imaging methods like computed tomography (CT) generate 3-D volumes of image data. How do radiologists search through such images? Are certain strategies more efficient? Although there is a large literature devoted to understanding search in 2-D, relatively little is known about search in volumetric space. In recent years, with the ever-increasing popularity of volumetric medical imaging, this question has taken on increased importance as we try to understand, and ultimately reduce, errors in diagnostic radiology. In the current study, we asked 24 radiologists to search chest CTs for lung nodules that could indicate lung cancer. To search, radiologists scrolled up and down through a \"stack\" of 2-D chest CT \"slices.\" At each moment, we tracked eye movements in the 2-D image plane and coregistered eye position with the current slice. We used these data to create a 3-D representation of the eye movements through the image volume. Radiologists tended to follow one of two dominant search strategies: \"drilling\" and \"scanning.\" Drillers restrict eye movements to a small region of the lung while quickly scrolling through depth. Scanners move more slowly through depth and search an entire level of the lung before moving on to the next level in depth. Driller performance was superior to the scanners on a variety of metrics, including lung nodule detection rate, percentage of the lung covered, and the percentage of search errors where a nodule was never fixated.", "title": "" }, { "docid": "2bf619a1af1bab48b4b6f57df8f29598", "text": "Alcoholism and drug addiction have marked impacts on the ability of families to function. Much of the literature has been focused on adult members of a family who present with substance dependency. 
There is limited research into the effects of adolescent substance dependence on parenting and family functioning; little attention has been paid to the parents' experience. This qualitative study looks at the parental perspective as they attempted to adapt and cope with substance dependency in their teenage children. The research looks into family life and adds to family functioning knowledge when the identified client is a youth as opposed to an adult family member. Thirty-one adult caregivers of 21 teenagers were interviewed, resulting in eight significant themes: (1) finding out about the substance dependence problem; (2) experiences as the problems escalated; (3) looking for explanations other than substance dependence; (4) connecting to the parent's own history; (5) trying to cope; (6) challenges of getting help; (7) impact on siblings; and (8) choosing long-term rehabilitation. Implications of this research for clinical practice are discussed.", "title": "" }, { "docid": "9003737b3f3e2ac6a64d3a3fe1dd358b", "text": "Cultural influence has recently received significant attention from academics due to its vital role in the success or failure of a project. In the construction industry, several empirical investigations have examined the influence of culture on project management. The aim of this study is to determine the impact of project organizational culture on the performance of construction projects. A total of 199 completed construction projects in Vietnam with specific data gathering through questionnaires were analyzed. The findings reveal that contractor commitment to contract agreements is the most significant cultural factor affecting project performance. Goal alignment and reliance, contractor commitment, and worker orientation (i.e., commitment to workers) contribute to improved overall performance and participant satisfaction. Contractor commitment and cooperative orientation enhance labor productivity, whereas goal alignment and trust and contractor commitment ensure learning performance (i.e., learning from experience). The findings of this study may assist construction professionals in implementing practices that can contribute to the sustainability and success of construction projects.", "title": "" }, { "docid": "3be99b1ef554fde94742021e4782a2aa", "text": "This is the second part of a two-part paper that has arisen from the work of the IEEE Power Engineering Society's Multi-Agent Systems (MAS) Working Group. Part I of this paper examined the potential value of MAS technology to the power industry, described fundamental concepts and approaches within the field of multi-agent systems that are appropriate to power engineering applications, and presented a comprehensive review of the power engineering applications for which MAS are being investigated. It also defined the technical issues which must be addressed in order to accelerate and facilitate the uptake of the technology within the power and energy sector. Part II of this paper explores the decisions inherent in engineering multi-agent systems for applications in the power and energy sector and offers guidance and recommendations on how MAS can be designed and implemented. Given the significant and growing interest in this field, it is imperative that the power engineering community considers the standards, tools, supporting technologies, and design methodologies available to those wishing to implement a MAS solution for a power engineering problem. 
This paper describes the various options available and makes recommendations on best practice. It also describes the problem of interoperability between different multi-agent systems and proposes how this may be tackled.", "title": "" }, { "docid": "500a9d141bc6bbd0972703413abef637", "text": "It is found that some “important” twitter users’ words can influence the stock prices of certain stocks. The stock price of Tesla – a famous electric automobile company – for example, recently seen a huge rise after Elon Musk, the CEO of Tesla, updated his twitter about the self-driving motors. Besides, the Dow Jones and S&P 500 indexes dropped by about one percent after the Twitter account of Associated Press falsely posted the message about an explosion in the White House.", "title": "" }, { "docid": "e3cb1c3dbed312688e75baa4ee047ff8", "text": "Aggregation of amyloid-β (Aβ) by self-assembly into oligomers or amyloids is a central event in Alzheimer's disease. Coordination of transition-metal ions, mainly copper and zinc, to Aβ occurs in vivo and modulates the aggregation process. A survey of the impact of Cu(II) and Zn(II) on the aggregation of Aβ reveals some general trends: (i) Zn(II) and Cu(II) at high micromolar concentrations and/or in a large superstoichiometric ratio compared to Aβ have a tendency to promote amorphous aggregations (precipitation) over the ordered formation of fibrillar amyloids by self-assembly; (ii) metal ions affect the kinetics of Aβ aggregations, with the most significant impact on the nucleation phase; (iii) the impact is metal-specific; (iv) Cu(II) and Zn(II) affect the concentrations and/or the types of aggregation intermediates formed; (v) the binding of metal ions changes both the structure and the charge of Aβ. The decrease in the overall charge at physiological pH increases the overall driving force for aggregation but may favor more precipitation over fibrillation, whereas the induced structural changes seem more relevant for the amyloid formation.", "title": "" }, { "docid": "771ee12eec90c042b5c2320680ddb290", "text": "1. SUMMARY In the past decade educators have developed a myriad of tools to help novices learn to program. Different tools emerge as new features or combinations of features are employed. In this panel we consider the features of recent tools that have garnered significant interest in the computer science education community. These including narrative tools which support programming to tell a story (e.g., Alice [6], Jeroo [8]), visual programming tools which support the construction of programs through a drag-and-drop interface (e.g., JPie [3], Alice [6], Karel Universe), flow-model tools (e.g., Raptor [1], Iconic Programmer [2], VisualLogic) which construct programs through connecting program elements to represent order of computation, specialized output realizations (e.g., Lego Mindstorms [5], JES [7]) that provide execution feedback in nontextual ways, like multimedia or kinesthetic robotics, and tiered language tools (e.g., ProfessorJ [4], RoboLab) in which novices can use more sophisticated versions of a language as their expertise develops.", "title": "" }, { "docid": "07c5758f83352c87d6a4d1ade91e0aaf", "text": "There is a significant need for a realistic dataset on which to evaluate layout analysis methods and examine their performance in detail. This paper presents a new dataset (and the methodology used to create it) based on a wide range of contemporary documents. 
Strong emphasis is placed on comprehensive and detailed representation of both complex and simple layouts, and on colour originals. In-depth information is recorded both at the page and region level. Ground truth is efficiently created using a new semi-automated tool and stored in a new comprehensive XML representation, the PAGE format. The dataset can be browsed and searched via a web-based front end to the underlying database and suitable subsets (relevant to specific evaluation goals) can be selected and downloaded.", "title": "" }, { "docid": "275ab39cc1f72691beb17936632e7307", "text": "Web searchers sometimes struggle to find relevant information. Struggling leads to frustrating and dissatisfying search experiences, even if searchers ultimately meet their search objectives. Better understanding of search tasks where people struggle is important in improving search systems. We address this important issue using a mixed methods study using large-scale logs, crowd-sourced labeling, and predictive modeling. We analyze anonymized search logs from the Microsoft Bing Web search engine to characterize aspects of struggling searches and better explain the relationship between struggling and search success. To broaden our understanding of the struggling process beyond the behavioral signals in log data, we develop and utilize a crowd-sourced labeling methodology. We collect third-party judgments about why searchers appear to struggle and, if appropriate, where in the search task it became clear to the judges that searches would succeed (i.e., the pivotal query). We use our findings to propose ways in which systems can help searchers reduce struggling. Key components of such support are algorithms that accurately predict the nature of future actions and their anticipated impact on search outcomes. Our findings have implications for the design of search systems that help searchers struggle less and succeed more.", "title": "" }, { "docid": "d1852cf0f4a03f56104861d3985071da", "text": "Running economy (RE) is typically defined as the energy demand for a given velocity of submaximal running, and is determined by measuring the steady-state consumption of oxygen (VO2) and the respiratory exchange ratio. Taking body mass (BM) into consideration, runners with good RE use less energy and therefore less oxygen than runners with poor RE at the same velocity. There is a strong association between RE and distance running performance, with RE being a better predictor of performance than maximal oxygen uptake (VO2max) in elite runners who have a similar VO2max). RE is traditionally measured by running on a treadmill in standard laboratory conditions, and, although this is not the same as overground running, it gives a good indication of how economical a runner is and how RE changes over time. In order to determine whether changes in RE are real or not, careful standardisation of footwear, time of test and nutritional status are required to limit typical error of measurement. Under controlled conditions, RE is a stable test capable of detecting relatively small changes elicited by training or other interventions. When tracking RE between or within groups it is important to account for BM. As VO2 during submaximal exercise does not, in general, increase linearly with BM, reporting RE with respect to the 0.75 power of BM has been recommended. A number of physiological and biomechanical factors appear to influence RE in highly trained or elite runners. 
These include metabolic adaptations within the muscle such as increased mitochondria and oxidative enzymes, the ability of the muscles to store and release elastic energy by increasing the stiffness of the muscles, and more efficient mechanics leading to less energy wasted on braking forces and excessive vertical oscillation. Interventions to improve RE are constantly sought after by athletes, coaches and sport scientists. Two interventions that have received recent widespread attention are strength training and altitude training. Strength training allows the muscles to utilise more elastic energy and reduce the amount of energy wasted in braking forces. Altitude exposure enhances discrete metabolic aspects of skeletal muscle, which facilitate more efficient use of oxygen. The importance of RE to successful distance running is well established, and future research should focus on identifying methods to improve RE. Interventions that are easily incorporated into an athlete's training are desirable.", "title": "" }, { "docid": "48019a3106c6d74e4cfcc5ac596d4617", "text": "Despite a variety of new communication technologies, loneliness is prevalent in Western countries. Boosting emotional communication through intimate connections has the potential to reduce loneliness. New technologies might exploit biosignals as intimate emotional cues because of their strong relationship to emotions. Through two studies, we investigate the possibilities of heartbeat communication as an intimate cue. In the first study (N = 32), we demonstrate, using self-report and behavioral tracking in an immersive virtual environment, that heartbeat perception influences social behavior in a similar manner as traditional intimate signals such as gaze and interpersonal distance. In the second study (N = 34), we demonstrate that a sound of the heartbeat is not sufficient to cause the effect; the stimulus must be attributed to the conversational partner in order to have influence. Together, these results show that heartbeat communication is a promising way to increase intimacy. Implications and possibilities for applications are discussed.", "title": "" }, { "docid": "0c06c0e4fec9a2cc34c38161e142032d", "text": "We introduce a novel high-level security metrics objective taxonomization model for software-intensive systems. The model systematizes and organizes security metrics development activities. It focuses on the security level and security performance of technical systems while taking into account the alignment of metrics objectives with different business and other management goals. The model emphasizes the roles of security-enforcing mechanisms, the overall security quality of the system under investigation, and secure system lifecycle, project and business management. Security correctness, effectiveness and efficiency are seen as the fundamental measurement objectives, determining the directions for more detailed security metrics development. Integration of the proposed model with riskdriven security metrics development approaches is also discussed.", "title": "" }, { "docid": "c72e0e79f83b59af58e5d8bc7d9244d5", "text": "A novel deep learning architecture (XmasNet) based on convolutional neural networks was developed for the classification of prostate cancer lesions, using the 3D multiparametric MRI data provided by the PROSTATEx challenge. End-to-end training was performed for XmasNet, with data augmentation done through 3D rotation and slicing, in order to incorporate the 3D information of the lesion. 
XmasNet outperformed traditional machine learning models based on engineered features, for both train and test data. For the test data, XmasNet outperformed 69 methods from 33 participating groups and achieved the second highest AUC (0.84) in the PROSTATEx challenge. This study shows the great potential of deep learning for cancer imaging.", "title": "" }, { "docid": "4b54cf876d3ab7c7277605125055c6c3", "text": "We propose a practical method for L0 norm regularization for neural networks: pruning the network during training by encouraging weights to become exactly zero. Such regularization is interesting since (1) it can greatly speed up training and inference, and (2) it can improve generalization. AIC and BIC, well-known model selection criteria, are special cases of L0 regularization. However, since the L0 norm of weights is non-differentiable, we cannot incorporate it directly as a regularization term in the objective function. We propose a solution through the inclusion of a collection of non-negative stochastic gates, which collectively determine which weights to set to zero. We show that, somewhat surprisingly, for certain distributions over the gates, the expected L0 regularized objective is differentiable with respect to the distribution parameters. We further propose the hard concrete distribution for the gates, which is obtained by “stretching” a binary concrete distribution and then transforming its samples with a hard-sigmoid. The parameters of the distribution over the gates can then be jointly optimized with the original network parameters. As a result our method allows for straightforward and efficient learning of model structures with stochastic gradient descent and allows for conditional computation in a principled way. We perform various experiments to demonstrate the effectiveness of the resulting approach and regularizer.", "title": "" }, { "docid": "e26dcac5bd568b70f41d17925593e7ef", "text": "Autoregressive generative models achieve the best results in density estimation tasks involving high dimensional data, such as images or audio. They pose density estimation as a sequence modeling task, where a recurrent neural network (RNN) models the conditional distribution over the next element conditioned on all previous elements. In this paradigm, the bottleneck is the extent to which the RNN can model long-range dependencies, and the most successful approaches rely on causal convolutions. Taking inspiration from recent work in meta reinforcement learning, where dealing with long-range dependencies is also essential, we introduce a new generative model architecture that combines causal convolutions with self attention. In this paper, we describe the resulting model and present state-of-the-art log-likelihood results on heavily benchmarked datasets: CIFAR-10 (2.85 bits per dim), 32× 32 ImageNet (3.80 bits per dim) and 64 × 64 ImageNet (3.52 bits per dim). Our implementation will be made available at anonymized.", "title": "" }, { "docid": "519e8ee14d170ce92eecc760e810ade4", "text": "Transcript-based annotation and pedigree analysis are two basic steps in the computational analysis of whole-exome sequencing experiments in genetic diagnostics and disease-gene discovery projects. Here, we present Jannovar, a stand-alone Java application as well as a Java library designed to be used in larger software frameworks for exome and genome analysis. 
Jannovar uses an interval tree to identify all transcripts affected by a given variant, and provides Human Genome Variation Society-compliant annotations both for variants affecting coding sequences and splice junctions as well as untranslated regions and noncoding RNA transcripts. Jannovar can also perform family-based pedigree analysis with Variant Call Format (VCF) files with data from members of a family segregating a Mendelian disorder. Using a desktop computer, Jannovar requires a few seconds to annotate a typical VCF file with exome data. Jannovar is freely available under the BSD2 license. Source code as well as the Java application and library file can be downloaded from http://compbio.charite.de (with tutorial) and https://github.com/charite/jannovar.", "title": "" }, { "docid": "d1cf416860dc8191bf2af370ae16a6bc", "text": "Cas1 integrase is the key enzyme of the clustered regularly interspaced short palindromic repeat (CRISPR)-Cas adaptation module that mediates acquisition of spacers derived from foreign DNA by CRISPR arrays. In diverse bacteria, the cas1 gene is fused (or adjacent) to a gene encoding a reverse transcriptase (RT) related to group II intron RTs. An RT-Cas1 fusion protein has been recently shown to enable acquisition of CRISPR spacers from RNA. Phylogenetic analysis of the CRISPR-associated RTs demonstrates monophyly of the RT-Cas1 fusion, and coevolution of the RT and Cas1 domains. Nearly all such RTs are present within type III CRISPR-Cas loci, but their phylogeny does not parallel the CRISPR-Cas type classification, indicating that RT-Cas1 is an autonomous functional module that is disseminated by horizontal gene transfer and can function with diverse type III systems. To compare the sequence pools sampled by RT-Cas1-associated and RT-lacking CRISPR-Cas systems, we obtained samples of a commercially grown cyanobacterium-Arthrospira platensis Sequencing of the CRISPR arrays uncovered a highly diverse population of spacers. Spacer diversity was particularly striking for the RT-Cas1-containing type III-B system, where no saturation was evident even with millions of sequences analyzed. In contrast, analysis of the RT-lacking type III-D system yielded a highly diverse pool but reached a point where fewer novel spacers were recovered as sequencing depth was increased. Matches could be identified for a small fraction of the non-RT-Cas1-associated spacers, and for only a single RT-Cas1-associated spacer. Thus, the principal source(s) of the spacers, particularly the hypervariable spacer repertoire of the RT-associated arrays, remains unknown.IMPORTANCE While the majority of CRISPR-Cas immune systems adapt to foreign genetic elements by capturing segments of invasive DNA, some systems carry reverse transcriptases (RTs) that enable adaptation to RNA molecules. From analysis of available bacterial sequence data, we find evidence that RT-based RNA adaptation machinery has been able to join with CRISPR-Cas immune systems in many, diverse bacterial species. To investigate whether the abilities to adapt to DNA and RNA molecules are utilized for defense against distinct classes of invaders in nature, we sequenced CRISPR arrays from samples of commercial-scale open-air cultures of Arthrospira platensis, a cyanobacterium that contains both RT-lacking and RT-containing CRISPR-Cas systems. 
We uncovered a diverse pool of naturally occurring immune memories, with the RT-lacking locus acquiring a number of segments matching known viral or bacterial genes, while the RT-containing locus has acquired spacers from a distinct sequence pool for which the source remains enigmatic.", "title": "" }, { "docid": "ec6e955f3f79ef1706fc6b9b16326370", "text": "Automating the navigation of unmanned aerial vehicles (UAVs) in diverse scenarios has gained much attention in the recent years. However, teaching UAVs to fly in challenging environments remains an unsolved problem, mainly due to the lack of data for training. In this paper, we develop a photo-realistic simulator that can afford the generation of large amounts of training data (both images rendered from the UAV camera and its controls) to teach a UAV to autonomously race through challenging tracks. We train a deep neural network to predict UAV controls from raw image data for the task of autonomous UAV racing. Training is done through imitation learning enabled by data augmentation to allow for the correction of navigation mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots.", "title": "" } ]
scidocsrr
05a2b7b14c432f1a5d2c15002fedeb5b
A non-IID Framework for Collaborative Filtering with Restricted Boltzmann Machines
[ { "docid": "065c24bc712f7740b95e0d1a994bfe19", "text": "David Haussler Computer and Information Sciences University of California Santa Cruz Santa Cruz , CA 95064 We study a particular type of Boltzmann machine with a bipartite graph structure called a harmonium. Our interest is in using such a machine to model a probability distribution on binary input vectors . We analyze the class of probability distributions that can be modeled by such machines. showing that for each n ~ 1 this class includes arbitrarily good appwximations to any distribution on the set of all n-vectors of binary inputs. We then present two learning algorithms for these machines .. The first learning algorithm is the standard gradient ascent heuristic for computing maximum likelihood estimates for the parameters (i.e. weights and thresholds) of the modeL Here we give a closed form for this gradient that is significantly easier to compute than the corresponding gradient for the general Boltzmann machine . The second learning algorithm is a greedy method that creates the hidden units and computes their weights one at a time. This method is a variant of the standard method for projection pursuit density estimation . We give experimental results for these learning methods on synthetic data and natural data from the domain of handwritten digits.", "title": "" }, { "docid": "21756eeb425854184ba2ea722a935928", "text": "Collaborative filtering aims at learning predictive models of user preferences, interests or behavior from community data, that is, a database of available user preferences. In this article, we describe a new family of model-based algorithms designed for this task. These algorithms rely on a statistical modelling technique that introduces latent class variables in a mixture model setting to discover user communities and prototypical interest profiles. We investigate several variations to deal with discrete and continuous response variables as well as with different objective functions. The main advantages of this technique over standard memory-based methods are higher accuracy, constant time prediction, and an explicit and compact model representation. The latter can also be used to mine for user communitites. The experimental evaluation shows that substantial improvements in accucracy over existing methods and published results can be obtained.", "title": "" } ]
[ { "docid": "f80a07ad046587f7a303c7177e04bca5", "text": "In order to determine the impact of nitrogen deficiency in medium, growth rate and carotenoids contents were followed during 15 days in two strain Dunaliella spp. (DUN2 and DUN3), isolated respectively from Azla and Idao Iaaza saltworks in the Essaouira region (Morocco). These microalgae were incubated at 25 ± 1 °C with a salinity of 35‰ and continuous light in four growth media with different concentrations of sodium nitrate (NaNO3): 18.75 g/L, 2.5 g/L, 37.5 g/L and 75 g/L. Maximum of cell density was observed under high sodium nitrate concentration during logarithmic phase of growth. The highest specific growth rate was 0.450 × 10 ± 0.006 cells/mL and 2.680 × 10 ± 0.216 cells/mL, respectively for DUN2 and DUN3. Carotenoids production mean were not stimulated under nitrogen deficiency, and the highest content was showed in DUN2 at high nitrogen concentration (3.210 ± 0.261 μg·mL) compared with DUN3 strain.", "title": "" }, { "docid": "c10d33abc6ed1d47c11bf54ed38e5800", "text": "The past decade has seen a steady growth of interest in statistical language models for information retrieval, and much research work has been conducted on this subject. This book by ChengXiang Zhai summarizes most of this research. It opens with an introduction covering the basic concepts of information retrieval and statistical languagemodels, presenting the intuitions behind these concepts. This introduction is then followed by a chapter providing an overview of:", "title": "" }, { "docid": "1ec8f7bb8de36b625cb8fee335557acf", "text": "Airborne laser scanner technique is broadly the most appropriate way to acquire rapidly and with high density 3D data over a city. Once the 3D Lidar data are available, the next task is the automatic data processing, with major aim to construct 3D building models. Among the numerous automatic reconstruction methods, the techniques allowing the detection of 3D building roof planes are of crucial importance. Three main methods arise from the literature: region growing, Hough-transform and Random Sample Consensus (RANSAC) paradigm. Since region growing algorithms are sometimes not very transparent and not homogenously applied, this paper focuses only on the Hough-transform and the RANSAC algorithm. Their principles, their pseudocode rarely detailed in the related literature as well as their complete analyses are presented in this paper. An analytic comparison of both algorithms, in terms of processing time and sensitivity to cloud characteristics, shows that despite the limitation encountered in both methods, RANSAC algorithm is still more efficient than the first one. Under other advantages, its processing time is negligible even when the input data size is very large. On the other hand, Hough-transform is very sensitive to the segmentation parameters values. Therefore, RANSAC algorithm has been chosen and extended to exceed its limitations. Its major limitation is that it searches to detect the best mathematical plane among 3D building point cloud even if this plane does not always represent a roof plane. So the proposed extension allows harmonizing the mathematical aspect of the algorithm with the geometry of a roof. At last, it is shown that the extended approach provides very satisfying results, even in the case of very weak point density and for different levels of building complexity. 
Therefore, once the roof planes are successfully detected, the automatic building modelling can be carried out.", "title": "" }, { "docid": "842a1d2da67d614ecbc8470987ae85e9", "text": "The task of recovering three-dimensional (3-D) geometry from two-dimensional views of a scene is called 3-D reconstruction. It is an extremely active research area in computer vision. There is a large body of 3-D reconstruction algorithms available in the literature. These algorithms are often designed to provide different tradeoffs between speed, accuracy, and practicality. In addition, even the output of various algorithms can be quite different. For example, some algorithms only produce a sparse 3-D reconstruction while others are able to output a dense reconstruction. The selection of the appropriate 3-D reconstruction algorithm relies heavily on the intended application as well as the available resources. The goal of this paper is to review some of the commonly used motion-parallax-based 3-D reconstruction techniques and make clear the assumptions under which they are designed. To do so efficiently, we classify the reviewed reconstruction algorithms into two large categories depending on whether a prior calibration of the camera is required. Under each category, related algorithms are further grouped according to the common properties they share.", "title": "" }, { "docid": "a4267e0cd6300dc128bfe9de62322ac7", "text": "According to the most common definition, idioms are linguistic expressions whose overall meaning cannot be predicted from the meanings of the constituent parts. Although we agree with the traditional view that there is no complete predictability, we suggest that there is a great deal of systematic conceptual motivation for the meaning of most idioms. Since most idioms are based on conceptual metaphors and metonymies, systematic motivation arises from sets of 'conceptual mappings or correspondences' that obtain between a source and a target domain in the sense of Lakoff and Kövecses (1987). We distinguish among three aspects of idiomatic meaning. First, the general meaning of idioms appears to be determined by the particular 'source domains' that apply to a particular target domain. Second, more specific aspects of idiomatic meaning are provided by the 'ontological mapping' that applies to a given idiomatic expression. Third, connotative aspects of idiomatic meaning can be accounted for by 'epistemic correspondences'. Finally, we also present an informal experimental study, the results of which show that the cognitive semantic view can facilitate the learning of idioms for non-native speakers.", "title": "" }, { "docid": "6b1a3fbdb384afded3f48dbe2978e171", "text": "This article provides a brief overview of the current development of software-defined mobile networks (SDMNs). Software-defined networking is seen as a promising technology to manage the complexity in communication networks. The need for SDMN comes from the complexity of network management in 5G mobile networks and beyond, driven by increasing mobile traffic demand, heterogeneous wireless environments, and diverse service requirements. There is a strong need to introduce a new radio network architecture by taking advantage of software oriented design, the separation of the data and control planes, and network virtualization to manage complexity and offer flexibility in 5G networks.
Clearly, software oriented design in mobile networks will be fundamentally different from SDN for the Internet, because mobile networks deal with the wireless access problem in complex radio environments, while the Internet mainly addresses the packet forwarding problem. Specific requirements in mobile networks shape the development of SDMN. In this article we present the needs and requirements of SDMN, with particular focus on the software-defined design for radio access networks. We analyze the fundamental problems in radio access networks that call for SDN design and present an SDMN concept. We give a brief overview on current solutions for SDMN and standardization activities. We argue that although SDN design is currently focusing on mobile core networks, extending SDN to radio access networks would naturally be the next step. We identify several research directions on SDN for radio access networks and expect more fundamental studies to release the full potential of software-defined 5G networks.", "title": "" }, { "docid": "ee9e24f38d7674e601ab13b73f3d37db", "text": "This paper presents the design of an application specific hardware for accelerating High Frequency Trading applications. It is optimized to achieve the lowest possible latency for interpreting market data feeds and hence enable minimal round-trip times for executing electronic stock trades. The implementation described in this work enables hardware decoding of Ethernet, IP and UDP as well as of the FAST protocol which is a common protocol to transmit market feeds. For this purpose, we developed a microcode engine with a corresponding instruction set as well as a compiler which enables the flexibility to support a wide range of applied trading protocols. The complete system has been implemented in RTL code and evaluated on an FPGA. Our approach shows a 4x latency reduction in comparison to the conventional Software based approach.", "title": "" }, { "docid": "22cd6bb300489d94a4b88f81de8b0cae", "text": "Closed circuit television systems (CCTV) are becoming more and more popular and are being deployed in many offices, housing estates and in most public spaces. Monitoring systems have been implemented in many European and American cities. This makes for an enormous load for the CCTV operators, as the number of camera views a single operator can monitor is limited by human factors. In this paper, we focus on the task of automated detection and recognition of dangerous situations for CCTV systems. We propose algorithms that are able to alert the human operator when a firearm or knife is visible in the image. We have focused on limiting the number of false alarms in order to allow for a real-life application of the system. The specificity and sensitivity of the knife detection are significantly better than others published recently. We have also managed to propose a version of a firearm detection algorithm that offers a near-zero rate of false alarms. We have shown that it is possible to create a system that is capable of an early warning in a dangerous situation, which may lead to faster and more effective response times and a reduction in the number of potential victims.", "title": "" }, { "docid": "525c6aa72a83e3261e4ffeab508c15cd", "text": "One of the major differences between markets that follow a \" sharing economy \" paradigm and traditional two-sided markets is that, in the sharing economy, the supply side includes individual nonprofessional decision makers, in contrast to firms and professional agents. 
Using a data set of prices and availability of listings on Airbnb, we find that there exist substantial differences in the operational and financial performances between professional and nonprofessional hosts. In particular, properties managed by professional hosts earn 16.9% more in daily revenue, have 15.5% higher occupancy rates, and are 13.6% less likely to exit the market compared with properties owned by nonprofessional hosts, even after controlling for property and market characteristics. We demonstrate that these performance discrepancies between professionals and nonprofessionals can be partly explained by pricing inefficiencies. Specifically, we provide empirical evidence that nonprofessional hosts are less likely to offer different rates across stay dates based on the underlying demand, such as major holidays and conventions. We develop a parsimonious model to analyze the implications of having two such different host groups for a profit-maximizing platform operator and for a social planner. While a profit-maximizing platform operator should charge lower prices to nonprofessional hosts, a social planner would charge the same prices to professionals and nonprofessionals.", "title": "" }, { "docid": "4eabc161187126a726a6b65f6fc6c685", "text": "In this paper, we propose a new method to estimate synthetic aperture radar interferometry (InSAR) interferometric phase in the presence of large coregistration errors. The method takes advantage of the coherence information of neighboring pixel pairs to automatically coregister the SAR images and employs the projection of the joint signal subspace onto the corresponding joint noise subspace to estimate the terrain interferometric phase. The method can automatically coregister the SAR images and reduce the interferometric phase noise simultaneously. Theoretical analysis and computer simulation results show that the method can provide accurate estimate of the terrain interferometric phase (interferogram) as the coregistration error reaches one pixel. The effectiveness of the method is also verified with the real data from the Spaceborne Imaging Radar-C/X Band SAR and the European Remote Sensing 1 and 2 satellites.", "title": "" }, { "docid": "69cca12d008d18e8516460c211beca50", "text": "This paper discusses the effective coding of Rijndael algorithm, Advanced Encryption Standard (AES) in Hardware Description Language, Verilog. In this work we analyze the structure and design of new AES, following three criteria: a) resistance against all known attacks; b) speed and code compactness on a wide range of platforms; and c) design simplicity; as well as its similarities and dissimilarities with other symmetric ciphers. On the other side, the principal advantages of new AES with respect to DES, as well as its limitations, are investigated. Thus, for example, the fact that the new cipher and its inverse use different components, which practically eliminates the possibility for weak and semi-weak keys, as existing for DES, and the non-linearity of the key expansion, which practically eliminates the possibility of equivalent keys, are two of the principal advantages of new cipher. Finally, the implementation aspects of Rijndael cipher and its inverse are treated. 
Thus, although Rijndael is well suited to be implemented efficiently on a wide range of processors and in dedicated hardware, we have concentrated our study on 8-bit processors, typical for current Smart Cards and on 32-bit processors, typical for PCs.", "title": "" }, { "docid": "8f360c907e197beb5e6fc82b081c908f", "text": "This paper describes a 3D object-space paint program. This program allows the user to directly manipulate the parameters used to shade the surface of the 3D shape by applying pigment to its surface. The pigment has all the properties normally associated with material shading models. This includes, but is not limited to, the diffuse color, the specular color, and the surface roughness. The pigment also can have thickness, which is modeled by simultaneously creating a bump map attached to the shape. The output of the paint program is a 3D model with associated texture maps. This information can be used with any rendering program with texture mapping capabilities. Almost all traditional techniques of 2D computer image painting have analogues in 3D object painting, but there are also many new techniques unique to 3D. One example is the use of solid textures to pattern the surface.", "title": "" }, { "docid": "e81f3e4ba0e7d1f1bd0205a4ff9c0aaf", "text": "Marlowe-Crowne Social Desirability Scale (MC) (Crowne & Marlowe, 1960) scores were collected on 1096 individuals involved in forensic evaluations. No prior publication of forensic norms was found for this instrument, which provides a measure of biased self-presentation (dissimulation). MC mean score was 19.42 for the sample. Also calculated was the score on Form C (MC-C) (Reynolds, 1982), and the mean for this 13-item scale was 7.61. The scores for the current sample generally are higher than those published for non-forensic groups, and statistical analysis indicated the difference was significant for both the MC and MC-C (d =.75 and.70, respectively, p <.001). Neither gender nor educational level proved to be significant factors in accounting for variance, and age did not appear to be correlated with scores. Group membership of subjects based on referral reason (family violence, abuse, neglect, competency, disability) was significant for both the MC and MC-C scores. Results suggest the MC or MC-C can be useful as part of a forensic-assessment battery to measure biased self-presentation.", "title": "" }, { "docid": "aded7e5301d40faf52942cd61a1b54ba", "text": "In this paper, a lower limb rehabilitation robot in sitting position is developed for patients with muscle weakness. The robot is a stationary based type which is able to perform various types of therapeutic exercises. For safe operation, the robot's joint is driven by two-stage cable transmission while the balance mechanism is used to reduce actuator size and transmission ratio. Control algorithms for passive, assistive and resistive exercises are designed to match characteristics of each therapeutic exercises and patients with different muscle strength. Preliminary experiments conducted with a healthy subject have demonstrated that the robot and the control algorithms are promising for lower limb rehabilitation task.", "title": "" }, { "docid": "9de7af8824594b5de7d510c81585c61b", "text": "The adoption of business process improvement strategies is a challenge to organizations trying to improve the quality and productivity of their services. 
The quest for the benefits of this improvement on resource optimization and the responsiveness of the organizations has raised several proposals for business process improvement approaches. However, proposals and results of scientific research on process improvement in higher education institutions, extremely complex and unique organizations, are still scarce. This paper presents a method that provides guidance about how practices and knowledge are gathered to contribute for business process improvement based on the communication between different stakeholders.", "title": "" }, { "docid": "081b15c3dda7da72487f5a6e96e98862", "text": "The CEDAR real-time address block location system, which determines candidates for the location of the destination address from a scanned mail piece image, is described. For each candidate destination address block (DAB), the address block location (ABL) system determines the line segmentation, global orientation, block skew, an indication of whether the address appears to be handwritten or machine printed, and a value indicating the degree of confidence that the block actually contains the destination address. With 20-MHz Sparc processors, the average time per mail piece for the combined hardware and software system components is 0.210 seconds. The system located 89.0% of the addresses as the top choice. Recent developments in the system include the use of a top-down segmentation tool, address syntax analysis using only connected component data, and improvements to the segmentation refinement routines. This has increased top choice performance to 91.4%.<<ETX>>", "title": "" }, { "docid": "4a08c16c5e091e1c6212fc606ccd854a", "text": "The problem of predicting the position of a freely foraging rat based on the ensemble firing patterns of place cells recorded from the CA1 region of its hippocampus is used to develop a two-stage statistical paradigm for neural spike train decoding. In the first, or encoding stage, place cell spiking activity is modeled as an inhomogeneous Poisson process whose instantaneous rate is a function of the animal's position in space and phase of its theta rhythm. The animal's path is modeled as a Gaussian random walk. In the second, or decoding stage, a Bayesian statistical paradigm is used to derive a nonlinear recursive causal filter algorithm for predicting the position of the animal from the place cell ensemble firing patterns. The algebra of the decoding algorithm defines an explicit map of the discrete spike trains into the position prediction. The confidence regions for the position predictions quantify spike train information in terms of the most probable locations of the animal given the ensemble firing pattern. Under our inhomogeneous Poisson model position was a three to five times stronger modulator of the place cell spiking activity than theta phase in an open circular environment. For animal 1 (2) the median decoding error based on 34 (33) place cells recorded during 10 min of foraging was 8.0 (7.7) cm. Our statistical paradigm provides a reliable approach for quantifying the spatial information in the ensemble place cell firing patterns and defines a generally applicable framework for studying information encoding in neural systems.", "title": "" }, { "docid": "d775cdc31c84d94d95dc132b88a37fae", "text": "Image guided filtering has been widely used in many image processing applications. However, it is a local filtering method and has limited propagation ability. 
In this paper, we propose a new image filtering method: nonlocal image guided averaging (NLGA). Derived from a nonlocal linear model, the proposed method can utilize the nonlocal similarity of the guidance image, so that it can propagate nonlocal information reliably. Consequently, NLGA can obtain sharper filtering results in the edge regions and smoother results in the smooth regions. It shows superiority over image guided filtering in different applications, such as image dehazing, depth map super-resolution and image denoising.", "title": "" }, { "docid": "e07198de4fe8ea55f2c04ba5b6e9423a", "text": "Query expansion (QE) is a well-known technique to improve retrieval effectiveness, which expands original queries with extra terms that are predicted to be relevant. A recent trend in the literature is Supervised Query Expansion (SQE), where supervised learning is introduced to better select expansion terms. However, an important but neglected issue for SQE is its efficiency, as applying SQE in retrieval can be much more time-consuming than applying Unsupervised Query Expansion (UQE) algorithms. In this paper, we point out that the cost of SQE mainly comes from term feature extraction, and propose a Two-stage Feature Selection framework (TFS) to address this problem. The first stage is adaptive expansion decision, which determines if a query is suitable for SQE or not. For unsuitable queries, SQE is skipped and no term features are extracted at all, which reduces the most time cost. For those suitable queries, the second stage is cost constrained feature selection, which chooses a subset of effective yet inexpensive features for supervised learning. Extensive experiments on four corpora (including three academic and one industry corpus) show that our TFS framework can substantially reduce the time cost for SQE, while maintaining its effectiveness.", "title": "" }, { "docid": "c2a2e9903859a6a9f9b3db5696cb37ff", "text": "Depth estimation from a single image is a fundamental problem in computer vision. In this paper, we propose a simple yet effective convolutional spatial propagation network (CSPN) to learn the affinity matrix for depth prediction. Specifically, we adopt an efficient linear propagation model, where the propagation is performed with a manner of recurrent convolutional operation, and the affinity among neighboring pixels is learned through a deep convolutional neural network (CNN). We apply the designed CSPN to two depth estimation tasks given a single image: (1) Refine the depth output from existing state-of-the-art (SOTA) methods; (2) Convert sparse depth samples to a dense depth map by embedding the depth samples within the propagation procedure. The second task is inspired by the availability of LiDAR that provides sparse but accurate depth measurements. We experimented the proposed CSPN over the popular NYU v2 [1] and KITTI [2] datasets, where we show that our proposed approach improves not only quality (e.g., 30% more reduction in depth error), but also speed (e.g., 2 to 5× faster) of depth maps than previous SOTA methods.", "title": "" } ]
scidocsrr
2611b587d31078d109c9407e274b3b78
Multi-view Sentence Representation Learning
[ { "docid": "a4bb8b5b749fb8a95c06a9afab9a17bb", "text": "Many Natural Language Processing applications nowadays rely on pre-trained word representations estimated from large text corpora such as news collections, Wikipedia and Web Crawl. In this paper, we show how to train high-quality word vector representations by using a combination of known tricks that are however rarely used together. The main result of our work is the new set of publicly available pre-trained models that outperform the current state of the art by a large margin on a number of tasks.", "title": "" }, { "docid": "5664ca8d7f0f2f069d5483d4a334c670", "text": "In Semantic Textual Similarity, systems rate the degree of semantic equivalence between two text snippets. This year, the participants were challenged with new data sets for English, as well as the introduction of Spanish, as a new language in which to assess semantic similarity. For the English subtask, we exposed the systems to a diversity of testing scenarios, by preparing additional OntoNotesWordNet sense mappings and news headlines, as well as introducing new genres, including image descriptions, DEFT discussion forums, DEFT newswire, and tweet-newswire headline mappings. For Spanish, since, to our knowledge, this is the first time that official evaluations are conducted, we used well-formed text, by featuring sentences extracted from encyclopedic content and newswire. The annotations for both tasks leveraged crowdsourcing. The Spanish subtask engaged 9 teams participating with 22 system runs, and the English subtask attracted 15 teams with 38 system runs.", "title": "" }, { "docid": "ccbb7e753b974951bb658b63e91431bb", "text": "In Semantic Textual Similarity (STS), systems rate the degree of semantic equivalence, on a graded scale from 0 to 5, with 5 being the most similar. This year we set up two tasks: (i) a core task (CORE), and (ii) a typed-similarity task (TYPED). CORE is similar in set up to SemEval STS 2012 task with pairs of sentences from sources related to those of 2012, yet different in genre from the 2012 set, namely, this year we included newswire headlines, machine translation evaluation datasets and multiple lexical resource glossed sets. TYPED, on the other hand, is novel and tries to characterize why two items are deemed similar, using cultural heritage items which are described with metadata such as title, author or description. Several types of similarity have been defined, including similar author, similar time period or similar location. The annotation for both tasks leverages crowdsourcing, with relative high interannotator correlation, ranging from 62% to 87%. The CORE task attracted 34 participants with 89 runs, and the TYPED task attracted 6 teams with 14 runs.", "title": "" } ]
[ { "docid": "741efb8046bb888b944768784b87d70a", "text": "Entropy Search (ES) and Predictive Entropy Search (PES) are popular and empirically successful Bayesian Optimization techniques. Both rely on a compelling information-theoretic motivation, and maximize the information gained about the arg max of the unknown function; yet, both are plagued by the expensive computation for estimating entropies. We propose a new criterion, Max-value Entropy Search (MES), that instead uses the information about the maximum function value. We show relations of MES to other Bayesian optimization methods, and establish a regret bound. We observe that MES maintains or improves the good empirical performance of ES/PES, while tremendously lightening the computational burden. In particular, MES is much more robust to the number of samples used for computing the entropy, and hence more efficient for higher dimensional problems.", "title": "" }, { "docid": "c10bf551bdb3cb6ae25f0f8803ba6fe7", "text": "The purpose of this study is to propose a theoretical model to examine the antecedents of repurchase intention in online group-buying by integrating the perspective of DeLone & McLean IS success model and the literature of trust. The model was tested using the data collected from 253 customers of a group-buying website in Taiwan. The results show that satisfaction with website, satisfaction with sellers, and perceived quality of website have positive influences on repurchase intention, while perceived quality of website and perceived quality of sellers have significant impacts on satisfaction with website and satisfaction with sellers, respectively. The results also show that trust in website has positive influences on perceived quality of website and satisfaction with website, whereas trust in sellers influence perceived quality of sellers and satisfaction with sellers significantly. Finally, the results show that perceived size of website has positive influence on trust in website, while reputation of website and reputation of sellers significantly affect trust in website and trust in sellers, respectively. The implications for theory and practice and suggestions for future research are also discussed. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6ddfb4631928eec4247adf2ac033129e", "text": "Facial micro-expression recognition is an upcoming area in computer vision research. Up until the recent emergence of the extensive CASMEII spontaneous micro-expression database, there were numerous obstacles faced in the elicitation and labeling of data involving facial micro-expressions. In this paper, we propose the Local Binary Patterns with Six Intersection Points (LBP-SIP) volumetric descriptor based on the three intersecting lines crossing over the center point. The proposed LBP-SIP reduces the redundancy in LBP-TOP patterns, providing a more compact and lightweight representation; leading to more efficient computational complexity. Furthermore, we also incorporated a Gaussian multi-resolution pyramid to our proposed approach by concatenating the patterns across all pyramid levels. Using an SVM classifier with leave-one-sample-out cross validation, we achieve the best recognition accuracy of 67.21%, surpassing the baseline performance with further computational efficiency.", "title": "" }, { "docid": "305f877227516eded75819bdf48ab26d", "text": "Deep generative models have been successfully applied to many applications. 
However, existing works experience limitations when generating large images (the literature usually generates small images, e.g. 32× 32 or 128× 128). In this paper, we propose a novel scheme, called deep tensor adversarial generative nets (TGAN), that generates large high-quality images by exploring tensor structures. Essentially, the adversarial process of TGAN takes place in a tensor space. First, we impose tensor structures for concise image representation, which is superior in capturing the pixel proximity information and the spatial patterns of elementary objects in images, over the vectorization preprocess in existing works. Secondly, we propose TGAN that integrates deep convolutional generative adversarial networks and tensor super-resolution in a cascading manner, to generate high-quality images from random distributions. More specifically, we design a tensor super-resolution process that consists of tensor dictionary learning and tensor coefficients learning. Finally, on three datasets, the proposed TGAN generates images with more realistic textures, compared with state-of-the-art adversarial autoencoders. The size of the generated images is increased by over 8.5 times, namely 374× 374 in PASCAL2.", "title": "" }, { "docid": "1dbaa72cd95c32d1894750357e300529", "text": "In recognizing the importance of educating aspiring scientists in the responsible conduct of research (RCR), the Office of Research Integrity (ORI) began sponsoring the creation of instructional resources to address this pressing need in 2002. The present guide on avoiding plagiarism and other inappropriate writing practices was created to help students, as well as professionals, identify and prevent such malpractices and to develop an awareness of ethical writing and authorship. This guide is one of the many products stemming from ORI’s effort to promote the RCR.", "title": "" }, { "docid": "8bd44a21a890e7c44fec4e56ddd39af2", "text": "This paper focuses on the problem of discovering users' topics of interest on Twitter. While previous efforts in modeling users' topics of interest on Twitter have focused on building a \"bag-of-words\" profile for each user based on his tweets, they overlooked the fact that Twitter users usually publish noisy posts about their lives or create conversation with their friends, which do not relate to their topics of interest. In this paper, we propose a novel framework to address this problem by introducing a modified author-topic model named twitter-user model. For each single tweet, our model uses a latent variable to indicate whether it is related to its author's interest. Experiments on a large dataset we crawled using Twitter API demonstrate that our model outperforms traditional methods in discovering user interest on Twitter.", "title": "" }, { "docid": "99f616b614d11993c387bb1b0ed1b7c6", "text": "Accurate assessment of nutrition information is an important part in the prevention and treatment of a multitude of diseases, but remains a challenging task. We present a novel mobile augmented reality application, which assists users in the nutrition assessment of their meals. Using the realtime camera image as a guide, the user overlays a 3D form of the food. Additionally the user selects the food type. The corresponding nutrition information is automatically computed. Thus accurate volume estimation is required for accurate nutrition information assessment. 
This work presents an evaluation of our mobile augmented reality approaches for portion estimation and offers a comparison to conventional portion estimation approaches. The comparison is performed on the basis of a user study (n=28). The quality of nutrition assessment is measured based on the error in energy units. In the results of the evaluation one of our mobile augmented reality approaches significantly outperforms all other methods. Additionally we present results on the efficiency and effectiveness of the approaches.", "title": "" }, { "docid": "be9fc2798c145abe70e652b7967c3760", "text": "Given semantic descriptions of object classes, zero-shot learning aims to accurately recognize objects of the unseen classes, from which no examples are available at the training stage, by associating them to the seen classes, from which labeled examples are provided. We propose to tackle this problem from the perspective of manifold learning. Our main idea is to align the semantic space that is derived from external information to the model space that concerns itself with recognizing visual features. To this end, we introduce a set of \"phantom\" object classes whose coordinates live in both the semantic space and the model space. Serving as bases in a dictionary, they can be optimized from labeled data such that the synthesized real object classifiers achieve optimal discriminative performance. We demonstrate superior accuracy of our approach over the state of the art on four benchmark datasets for zero-shot learning, including the full ImageNet Fall 2011 dataset with more than 20,000 unseen classes.", "title": "" }, { "docid": "1cdcb24b61926f37037fbb43e6d379b7", "text": "The Internet has undergone dramatic changes in the past 2 decades and now forms a global communication platform that billions of users rely on for their daily activities. While this transformation has brought tremendous benefits to society, it has also created new threats to online privacy, such as omnipotent governmental surveillance. As a result, public interest in systems for anonymous communication has drastically increased. In this work, we survey previous research on designing, developing, and deploying systems for anonymous communication. Our taxonomy and comparative assessment provide important insights about the differences between the existing classes of anonymous communication protocols.", "title": "" }, { "docid": "732eb96d39d250e6b1355f7f4d53feed", "text": "Determine blood type is essential before administering a blood transfusion, including in emergency situation. Currently, these tests are performed manually by technicians, which can lead to human errors. Various systems have been developed to automate these tests, but none is able to perform the analysis in time for emergency situations. This work aims to develop an automatic system to perform these tests in a short period of time, adapting to emergency situations. To do so, it uses the slide test and image processing techniques using the IMAQ Vision from National Instruments. The image captured after the slide test is processed and detects the occurrence of agglutination. Next the classification algorithm determines the blood type in analysis. Finally, all the information is stored in a database. 
Thus, the system allows determining the blood type in an emergency, eliminating transfusions based on the principle of universal donor and reducing transfusion reaction risks.", "title": "" }, { "docid": "cd31be485b4b914508a5a9e7c5445459", "text": "Deep learning has become increasingly popular in both academic and industrial areas in the past years. Various domains including pattern recognition, computer vision, and natural language processing have witnessed the great power of deep networks. However, current studies on deep learning mainly focus on data sets with balanced class labels, while its performance on imbalanced data is not well examined. Imbalanced data sets exist widely in the real world and they have been providing great challenges for classification tasks. In this paper, we focus on the problem of classification using deep networks on imbalanced data sets. Specifically, a novel loss function called mean false error together with its improved version mean squared false error are proposed for the training of deep networks on imbalanced data sets. The proposed method can effectively capture classification errors from both majority class and minority class equally. Experiments and comparisons demonstrate the superiority of the proposed approach compared with conventional methods in classifying imbalanced data sets on deep neural networks.", "title": "" }, { "docid": "d75b9005a0a861e29977fda36780b947", "text": "Classifying traffic signs is an indispensable part of Advanced Driver Assistant Systems. This strictly requires that the traffic sign classification model accurately classifies the images and consumes as few CPU cycles as possible to immediately release the CPU for other tasks. In this paper, we first propose a new ConvNet architecture. Then, we propose a new method for creating an optimal ensemble of ConvNets with highest possible accuracy and lowest number of ConvNets. Our experiments show that the ensemble of our proposed ConvNets (the ensemble is also constructed using our method) reduces the number of arithmetic operations by 88% and 73% compared with two state-of-the-art ensembles of ConvNets. In addition, our ensemble is 0.1% more accurate than one of the state-of-the-art ensembles and it is only 0.04% less accurate than the other state-of-the-art ensemble when tested on the same dataset. Moreover, the ensemble of our compact ConvNets reduces the number of multiplications by 95% and 88%, yet the classification accuracy drops only 0.2% and 0.4% compared with these two ensembles. Besides, we also evaluate the cross-dataset performance of our ConvNet and analyze its transferability power in different layers. We show that our network is easily scalable to new datasets with a much larger number of traffic sign classes and it only needs to fine-tune the weights starting from the last convolution layer. We also assess our ConvNet through different visualization techniques. Besides, we propose a new method for finding the minimum additive noise which causes the network to incorrectly classify the image by minimum difference compared with the highest score in the loss vector.", "title": "" }, { "docid": "e30ae0b5cd90d091223ab38596de3109", "text": "We describe a consistent hashing algorithm which performs multiple lookups per key in a hash table of nodes.
It requires no additional storage beyond the hash table, and achieves a peak-to-average load ratio of 1 + ε with just 1 + 1/ε lookups per key.", "title": "" }, { "docid": "845398d098de3ae423f02ad43f255cbb", "text": "This document describes COBRA-ONT, an ontology for supporting pervasive context-aware systems. COBRA-ONT, expressed in the Web Ontology Language OWL, is a collection of ontologies for describing places, agents and events and their associated properties in an intelligent meeting-room domain. This ontology is developed as a part of the Context Broker Architecture (CoBrA), a broker-centric agent architecture that provides knowledge sharing, context reasoning and privacy protection supports for pervasive context-aware systems. We also describe an inference engine for reasoning with information expressed using the COBRA-ONT ontology and the ongoing research in using the DAML-Time ontology for context reasoning.", "title": "" }, { "docid": "f9eff7a4652f6242911f41ba180f75ed", "text": "The last ten years have seen a significant increase in computationally relevant research seeking to build models of narrative and its use. These efforts have focused on and/or drawn from a range of disciplines, including narrative theory. Many of these research efforts have been informed by a focus on the development of an explicit model of narrative and its function. Computational approaches from artificial intelligence (AI) are particularly well-suited to such modeling tasks, as they typically involve precise definitions of aspects of some domain of discourse and well-defined algorithms for reasoning over those definitions. In the case of narrative modeling, there is a natural fit with AI techniques. AI approaches often concern themselves with representing and reasoning about some real world domain of discourse – a microworld where inferences must be made in order to draw conclusions about some higher order property of the world or to explain, predict, control or communicate about the microworld's dynamic state. In this regard, the fictional worlds created by storytellers and the ways that we communicate about them suggest promising and immediate analogs for application of existing AI methods. One of the most immediate analogs between AI research and narrative models lies in the area of reasoning about actions and plans. The goals and plans that characters form and act upon within a story are the primary elements of the story's plot. At first glance, story plans have many of the same features as knowledge representations developed by AI researchers to characterize the plans formed by industrial robots operating to assemble automobile parts on a factory floor or by autonomous vehicles traversing unknown physical landscapes. As we will discuss below, planning representations have offered significant promise in modeling plot structure. Equally as significantly, however, is their ability to be used by intelligent algorithms in the automatic creation of plot lines.
Just as AI planning systems can produce new plans to achieve an agent's goals in the face of an unanticipated execution context, so too may planning systems work to produce the plans of a collection of characters as they scheme to obtain, thwart, overcome or succeed.", "title": "" }, { "docid": "cf413b8e64aabbf7f3c1714759eb2ec7", "text": "Many significant real-world classification tasks involve a large number of categories which are arranged in a hierarchical structure; for example, classifying documents into subject categories under the Library of Congress scheme, or classifying world-wide-web documents into topic hierarchies. We investigate the potential benefits of using a given hierarchy over base classes to learn accurate multi-category classifiers for these domains. First, we consider the possibility of exploiting a class hierarchy as prior knowledge that can help one learn a more accurate classifier. We explore the benefits of learning category discriminants in a “hard” top-down fashion and compare this to a “soft” approach which shares training data among sibling categories. In doing so, we verify that hierarchies have the potential to improve prediction accuracy. But we argue that the reasons for this can be subtle. Sometimes, the improvement is only because using a hierarchy happens to constrain the expressiveness of a hypothesis class in an appropriate manner. However, various controlled experiments show that in other cases the performance advantage associated with using a hierarchy really does seem to be due to the “prior knowledge” it encodes.", "title": "" }, { "docid": "2759e52ca38436b7f07bd64e6092884f", "text": "This paper proposes a method of eye-model-based gaze estimation using an RGB-D camera, the Kinect sensor. Different from other methods, our method sets up a model to calibrate the eyeball center by gazing at a target in 3D space, not predefined. Then, by detecting the pupil center, we can estimate the gaze direction. To achieve this algorithm, we first build a head model relying on the Kinect sensor, then obtain the 3D position of the pupil center. As we need to know the eyeball center position in the head model, we perform a calibration by designing a target to gaze at. Because the ray from the eyeball center to the target and the ray from the eyeball center to the pupil center must satisfy a known relationship, we can solve an equation for the real eyeball center position. After calibration, gaze estimation can be performed automatically at any time. Our method allows free head motion, only needs a simple device, and runs automatically in real time. Experiments show that our method performs well and still has room for improvement.", "title": "" }, { "docid": "4fb76fb4daa5490dca902c9177c9b465", "text": "An improved faster region-based convolutional neural network (R-CNN) [same object retrieval (SOR) faster R-CNN] is proposed to retrieve the same object in different scenes with few training samples. By concatenating the feature maps of shallow and deep convolutional layers, the ability of Regions of Interest (RoI) pooling to extract more detailed features is improved. In the training process, a pretrained CNN model is fine-tuned using a query image data set, so that the confidence score can identify an object proposal to the object level rather than the classification level. In the query process, we first select the ten images for which the object proposals have the closest confidence scores to the query object proposal.
Then, the image for which the detected object proposal has the minimum cosine distance to the query object proposal is considered as the query result. The proposed SOR faster R-CNN is applied to our Coke cans data set and three public image data sets, i.e., Oxford Buildings 5k, Paris Buildings 6k, and INS 13. The experimental results confirm that SOR faster R-CNN has better identification performance than fine-tuned faster R-CNN. Moreover, SOR faster R-CNN achieves much higher accuracy for detecting low-resolution images than the fine-tuned faster R-CNN on the Coke cans (0.094 mAP higher), Oxford Buildings (0.043 mAP higher), Paris Buildings (0.078 mAP higher), and INS 13 (0.013 mAP higher) data sets.", "title": "" }, { "docid": "1ff5526e4a18c1e59b63a3de17101b11", "text": "Plug-in electric vehicles (PEVs) are equipped with onboard level-1 or level-2 chargers for home overnight or office daytime charging. In addition, off-board chargers can provide fast charging for traveling long distances. However, off-board high-power chargers are bulky, expensive, and require comprehensive evolution of charging infrastructures. An integrated onboard charger capable of fast charging of PEVs will combine the benefits of both the conventional onboard and off-board chargers, without additional weight, volume, and cost. In this paper, an innovative single-phase integrated charger, using the PEV propulsion machine and its traction converter, is introduced. The charger topology is capable of power factor correction and battery voltage/current regulation without any bulky add-on components. Ac machine windings are utilized as mutually coupled inductors, to construct a two-channel interleaved boost converter. The circuit analyses of the proposed technology, based on a permanent magnet synchronous machine (PMSM), are discussed in detail. Experimental results of a 3-kW proof-of-concept prototype are carried out using a 220-Vrms, 3-phase, 8-pole PMSM. A nearly unity power factor and 3.96% total harmonic distortion of input ac current are acquired with a maximum efficiency of 93.1%.", "title": "" }, { "docid": "6821d4c1114e007453578dd90600db15", "text": "Our goal is to assess the strategic and operational benefits of electronic integration for industrial procurement. We conduct a field study with an industrial supplier and examine the drivers of performance of the procurement process. Our research quantifies both the operational and strategic impacts of electronic integration in a B2B procurement environment for a supplier. Additionally, we show that the customer also obtains substantial benefits from efficient procurement transaction processing. We isolate the performance impact of technology choice and ordering processes on both the trading partners. A significant finding is that the supplier derives large strategic benefits when the customer initiates the system and the supplier enhances the system’s capabilities. With respect to operational benefits, we find that when suppliers have advanced electronic linkages, the order-processing system significantly increases benefits to both parties. (Business Value of IT; Empirical Assessment; Electronic Integration; Electronic Procurement; B2B; Strategic IT Impact; Operational IT Impact)", "title": "" } ]
scidocsrr
5482469ec3f304c0e5052cf269e6e52e
Velocity and Acceleration Cones for Kinematic and Dynamic Constraints on Omni-Directional Mobile Robots
[ { "docid": "b09dd4fee4d7cdce61c153a822eadb65", "text": "A dynamic model is presented for omnidirectional wheeled mobile robots, including wheel/motion surface slip. We derive the dynamics model, experimentally measure friction coefficients, and measure the force to cause slip (to validate our friction model). Dynamic simulation examples are presented to demonstrate omnidirectional motion with slip. After developing an improved friction model, compared to our initial model, the simulation results agree well with experimentally-measured trajectory data with slip. Initially, we thought that only high robot velocity and acceleration governed the resulting slipping motion. However, we learned that the rigid material existing in the discontinuities between omnidirectional wheel rollers plays an equally important role in determining omnidirectional mobile robot dynamic slip motion, even at low rates and accelerations.", "title": "" } ]
[ { "docid": "62fa4f8712a4fcc1a3a2b6148bd3589b", "text": "In this paper we discuss the development and application of a large formal ontology to the semantic web. The Suggested Upper Merged Ontology (SUMO) (Niles & Pease, 2001) (SUMO, 2002) is a “starter document” in the IEEE Standard Upper Ontology effort. This upper ontology is extremely broad in scope and can serve as a semantic foundation for search, interoperation, and communication on the semantic web.", "title": "" }, { "docid": "c8a2ba8f47266d0a63281a5abb5fa47f", "text": "Hair plays an important role in human appearance. However, hair segmentation is still a challenging problem partially due to the lack of an effective model to handle its arbitrary shape variations. In this paper, we present a part-based model robust to hair shape and environment variations. The key idea of our method is to identify local parts by promoting the effectiveness of the part-based model. To this end, we propose a measurable statistic, called Subspace Clustering Dependency (SC-Dependency), to estimate the co-occurrence probabilities between local shapes. SC-Dependency guarantees output reasonability and allows us to evaluate the effectiveness of part-wise constraints in an information-theoretic way. Then we formulate the part identification problem as an MRF that aims to optimize the effectiveness of the potential functions. Experiments are performed on a set of consumer images and show our algorithm's capability and robustness to handle hair shape variations and extreme environment conditions.", "title": "" }, { "docid": "bfd834ddda77706264fa458302549325", "text": "Deep learning has emerged as a new methodology with continuous interests in artificial intelligence, and it can be applied in various business fields for better performance. In fashion business, deep learning, especially Convolutional Neural Network (CNN), is used in classification of apparel image. However, apparel classification can be difficult due to various apparel categories and lack of labeled image data for each category. Therefore, we propose to pre-train the GoogLeNet architecture on ImageNet dataset and fine-tune on our fine-grained fashion dataset based on design attributes. This will complement the small size of dataset and reduce the training time. After 10-fold experiments, the average final test accuracy results 62%.", "title": "" }, { "docid": "317b7998eb27384c1655dd9f4dca1787", "text": "Composite rhytidectomy added the repositioning of the orbicularis oculi muscle to the deep plane face lift to achieve a more harmonious appearance of the face by adding periorbital rejuvenation. By not separating the orbicularis oculi from the zygomaticus minor and by extending the dissection under medial portions of the zygomaticus major and minor muscles, a more significant improvement in composite rhytidectomy can now be achieved. A thin nonrestrictive mesentery between the deep plane face lift dissection and the zygorbicular dissection still allows vertical movement of the composite face lift flap without interrupting the intimate relationship between the platysma, cheek fat, and orbicularis oculi muscle. This modification eliminates the occasional prolonged edema and occasional temporary dystonia previously observed. It allows the continuation of the use of the arcus marginalis release, which has also been modified by resetting the septum orbitale over the orbital rim. These two modifications allow a more predictable and impressive result. 
They reinforce the concept of periorbital rejuvenation as an integral part of facial rejuvenation, which not only produces a more harmonious immediate result but prevents the possible unfavorable sequelae of conventional rhytidectomy and lower blepharoplasty.", "title": "" }, { "docid": "9b37cc1d96d9a24e500c572fa2cb339a", "text": "Site-based or topic-specific search engines work with mixed success because of the general difficulty of the information retrieval task, and the lack of good link information to allow authorities to be identified. We are advocating an open source approach to the problem due to its scope and need for software components. We have adopted a topic-based search engine because it represents the next generation of capability. This paper outlines our scalable system for site-based or topic-specific search, and demonstrates the developing system on a small 250,000 document collection of EU and UN web pages.", "title": "" }, { "docid": "02d8c55750904b7f4794139bcfa51693", "text": "BACKGROUND\nMore than one-third of deaths during the first five years of life are attributed to undernutrition, which are mostly preventable through economic development and public health measures. To alleviate this problem, it is necessary to determine the nature, magnitude and determinants of undernutrition. However, there is lack of evidence in agro-pastoralist communities like Bule Hora district. Therefore, this study assessed magnitude and factors associated with undernutrition in children who are 6-59 months of age in agro-pastoral community of Bule Hora District, South Ethiopia.\n\n\nMETHODS\nA community based cross-sectional study design was used to assess the magnitude and factors associated with undernutrition in children between 6-59 months. A structured questionnaire was used to collect data from 796 children paired with their mothers. Anthropometric measurements and determinant factors were collected. SPSS version 16.0 statistical software was used for analysis. Bivariate and multivariate logistic regression analyses were conducted to identify factors associated to nutritional status of the children Statistical association was declared significant if p-value was less than 0.05.\n\n\nRESULTS\nAmong study participants, 47.6%, 29.2% and 13.4% of them were stunted, underweight, and wasted respectively. Presence of diarrhea in the past two weeks, male sex, uneducated fathers and > 4 children ever born to a mother were significantly associated with being underweight. Presence of diarrhea in the past two weeks, male sex and pre-lacteal feeding were significantly associated with stunting. Similarly, presence of diarrhea in the past two weeks, age at complementary feed was started and not using family planning methods were associated to wasting.\n\n\nCONCLUSION\nUndernutrition is very common in under-five children of Bule Hora district. Factors associated to nutritional status of children in agro-pastoralist are similar to the agrarian community. Diarrheal morbidity was associated with all forms of Protein energy malnutrition. Family planning utilization decreases the risk of stunting and underweight. Feeding practices (pre-lacteal feeding and complementary feeding practice) were also related to undernutrition. Thus, nutritional intervention program in Bule Hora district in Ethiopia should focus on these factors.", "title": "" }, { "docid": "dd92ee7d7f38cda187bfb26e9d4d258b", "text": "Crowdsourcing” is a relatively recent concept that encompasses many practices. 
This diversity leads to the blurring of the limits of crowdsourcing that may be identified virtually with any type of Internet-based collaborative activity, such as co-creation or user innovation. Varying definitions of crowdsourcing exist and therefore, some authors present certain specific examples of crowdsourcing as paradigmatic, while others present the same examples as the opposite. In this paper, existing definitions of crowdsourcing are analyzed to extract common elements and to establish the basic characteristics of any crowdsourcing initiative. Based on these existing definitions, an exhaustive and consistent definition for crowdsourcing is presented and contrasted in eleven cases.", "title": "" }, { "docid": "03a55678d5f25f710274323abf71f48c", "text": "Ontologies are an explicit specification of a conceptualization, that is understood to be an abstract and simplified version of the world to be represented. In recent years, ontologies have been used in Ubiquitous Computing, especially for the development of context-aware applications. In this paper, we offer a taxonomy for classifying ontologies used in Ubiquitous Computing, in which two main categories are distinguished: Domain ontologies, created to represent and communicate agreed knowledge within some sub-domain of Ubiquitous Computing; and Ontologies as software artifacts, when ontologies play the role of an additional type of artifact in ubiquitous computing applications. The latter category is subdivided according with the moment in that ontologies are used: at development time or at run time. Also, we analyze and classify (based on this taxonomy) some recently published works.", "title": "" }, { "docid": "72f3800a072c2844f6ec145788c0749e", "text": "In Augmented Reality (AR), interfaces consist of a blend of both real and virtual content. In this paper we examine existing gaming styles played in the real world or on computers. We discuss the strengths and weaknesses of these mediums within an informal model of gaming experience split into four aspects; physical, mental, social and emotional. We find that their strengths are mostly complementary, and argue that games built in AR can blend them to enhance existing game styles and open up new ones. To illustrate these ideas, we present our work on AR Worms, a re-implementation of the classic computer game Worms using Augmented Reality. We discuss how AR has enabled us to start exploring interfaces for gaming, and present informal observations of players at several demonstrations. Finally, we present some ideas for AR games in the area of strategy and role playing games.", "title": "" }, { "docid": "98b603ed5be37165cc22da7650023d7d", "text": "One reason that word learning presents a challenge for children is because pairings between word forms and meanings are arbitrary conventions that children must learn via observation - e.g., the fact that \"shovel\" labels shovels. The present studies explore cases in which children might bypass observational learning and spontaneously infer new word meanings: By exploiting the fact that many words are flexible and systematically encode multiple, related meanings. For example, words like shovel and hammer are nouns for instruments, and verbs for activities involving those instruments. 
The present studies explored whether 3- to 5-year-old children possess semantic generalizations about lexical flexibility, and can use these generalizations to infer new word meanings: Upon learning that dax labels an activity involving an instrument, do children spontaneously infer that dax can also label the instrument itself? Across four studies, we show that at least by age four, children spontaneously generalize instrument-activity flexibility to new words. Together, our findings point to a powerful way in which children may build their vocabulary, by leveraging the fact that words are linked to multiple meanings in systematic ways.", "title": "" }, { "docid": "d71040311b8753299377b02023ba5b4c", "text": "Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Exploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.", "title": "" }, { "docid": "cc8b634daad1088aa9f4c43222fab279", "text": "In this paper, a comparison between the conventional LSTM network and the one-dimensional grid LSTM network applied to single-word speech recognition is conducted. The performance of the networks is measured in terms of accuracy and training time. The conventional LSTM model is the current state-of-the-art method for modeling speech recognition. However, the grid LSTM architecture has proven to be successful in solving other empirical tasks such as translation and handwriting recognition. When implementing the two networks in the same training framework with the same training data of single-word audio files, the conventional LSTM network yielded an accuracy rate of 64.8% while the grid LSTM network yielded an accuracy rate of 65.2%. Statistically, there was no difference in the accuracy rate between the models. In addition, the conventional LSTM network took 2% longer to train. However, this difference in training time is considered to be of little significance when translating it to absolute time. Thus, it can be concluded that the one-dimensional grid LSTM model performs just as well as the conventional one.", "title": "" }, { "docid": "157c084aa6622c74449f248f98314051", "text": "A magnetically-tuned multi-mode VCO featuring an ultra-wide frequency tuning range is presented.
By changing the magnetic coupling coefficient between the primary and secondary coils in the transformer tank, the frequency tuning range of a dual-band VCO is greatly increased to continuously cover the whole E-band. Fabricated in a 65-nm CMOS process, the presented VCO measures a tuning range of 44.2% from 57.5 to 90.1 GHz while consuming 7mA to 9mA at 1.2V supply. The measured phase noises at 10MHz offset from carrier frequencies of 72.2, 80.5 and 90.1 GHz are -111.8, -108.9 and -105 dBc/Hz, respectively, which corresponds to a FOMT between -192.2 and -184.2dBc/Hz.", "title": "" }, { "docid": "14d9343bbe4ad2dd4c2c27cb5d6795cd", "text": "In the paper a method of translation applied in a new system TGT is discussed. TGT translates texts written in Polish into corresponding utterances in the Polish sign language. Discussion is focused on text-into-text translation phase. Proper translation is done on the level of a predicative representation of the sentence. The representation is built on the basis of syntactic graph that depicts the composition and mutual connections of syntactic groups, which exist in the sentence and are identified at the syntactic analysis stage. An essential element of translation process is complementing the initial predicative graph with nodes, which correspond to lacking sentence members. The method acts for primitive sentences as well as for compound ones, with some limitations, however. A translation example is given which illustrates main transformations done on the linguistic level. It is complemented by samples of images generated by the animating part of the system.", "title": "" }, { "docid": "2438a082eac9852d3dbcea22aa0402b2", "text": "Importance\nDietary modification remains key to successful weight loss. Yet, no one dietary strategy is consistently superior to others for the general population. Previous research suggests genotype or insulin-glucose dynamics may modify the effects of diets.\n\n\nObjective\nTo determine the effect of a healthy low-fat (HLF) diet vs a healthy low-carbohydrate (HLC) diet on weight change and if genotype pattern or insulin secretion are related to the dietary effects on weight loss.\n\n\nDesign, Setting, and Participants\nThe Diet Intervention Examining The Factors Interacting with Treatment Success (DIETFITS) randomized clinical trial included 609 adults aged 18 to 50 years without diabetes with a body mass index between 28 and 40. The trial enrollment was from January 29, 2013, through April 14, 2015; the date of final follow-up was May 16, 2016. Participants were randomized to the 12-month HLF or HLC diet. The study also tested whether 3 single-nucleotide polymorphism multilocus genotype responsiveness patterns or insulin secretion (INS-30; blood concentration of insulin 30 minutes after a glucose challenge) were associated with weight loss.\n\n\nInterventions\nHealth educators delivered the behavior modification intervention to HLF (n = 305) and HLC (n = 304) participants via 22 diet-specific small group sessions administered over 12 months. 
The sessions focused on ways to achieve the lowest fat or carbohydrate intake that could be maintained long-term and emphasized diet quality.\n\n\nMain Outcomes and Measures\nPrimary outcome was 12-month weight change and determination of whether there were significant interactions among diet type and genotype pattern, diet and insulin secretion, and diet and weight loss.\n\n\nResults\nAmong 609 participants randomized (mean age, 40 [SD, 7] years; 57% women; mean body mass index, 33 [SD, 3]; 244 [40%] had a low-fat genotype; 180 [30%] had a low-carbohydrate genotype; mean baseline INS-30, 93 μIU/mL), 481 (79%) completed the trial. In the HLF vs HLC diets, respectively, the mean 12-month macronutrient distributions were 48% vs 30% for carbohydrates, 29% vs 45% for fat, and 21% vs 23% for protein. Weight change at 12 months was -5.3 kg for the HLF diet vs -6.0 kg for the HLC diet (mean between-group difference, 0.7 kg [95% CI, -0.2 to 1.6 kg]). There was no significant diet-genotype pattern interaction (P = .20) or diet-insulin secretion (INS-30) interaction (P = .47) with 12-month weight loss. There were 18 adverse events or serious adverse events that were evenly distributed across the 2 diet groups.\n\n\nConclusions and Relevance\nIn this 12-month weight loss diet study, there was no significant difference in weight change between a healthy low-fat diet vs a healthy low-carbohydrate diet, and neither genotype pattern nor baseline insulin secretion was associated with the dietary effects on weight loss. In the context of these 2 common weight loss diet approaches, neither of the 2 hypothesized predisposing factors was helpful in identifying which diet was better for whom.\n\n\nTrial Registration\nclinicaltrials.gov Identifier: NCT01826591.", "title": "" }, { "docid": "bb43c98d05f3844354862d39f6fa1d2d", "text": "There are always frustrations for drivers in finding parking spaces and being protected from auto theft. In this paper, to minimize the drivers' hassle and inconvenience, we propose a new intelligent secure privacy-preserving parking scheme through vehicular communications. The proposed scheme is characterized by employing parking lot RSUs to surveil and manage the whole parking lot and is enabled by communication between vehicles and the RSUs. Once vehicles that are equipped with wireless communication devices, which are also known as onboard units, enter the parking lot, the RSUs communicate with them and provide the drivers with real-time parking navigation service, secure intelligent antitheft protection, and friendly parking information dissemination. In addition, the drivers' privacy is not violated. Performance analysis through extensive simulations demonstrates the efficiency and practicality of the proposed scheme.", "title": "" }, { "docid": "bee4d4ba947d87b86abc02852c39d2b3", "text": "Aim\nThe study assessed the documentation of nursing care before, during and after the Standardized Nursing Language Continuing Education Programme (SNLCEP). It evaluates the differences in documentation of nursing care in different nursing specialty areas and assessed the influence of work experience on the quality of documentation of nursing care with a view to provide information on documentation of nursing care. The instrument used was an adapted scoring guide for nursing diagnosis, nursing intervention and nursing outcome (Q-DIO).\n\n\nDesign\nRetrospective record reviews design was used.\n\n\nMethods\nA total of 270 nursing process booklets formed the sample size. 
From each ward, 90 booklets were selected in this order: 30 booklets before the SNLCEP, 30 booklets during SNLCEP and 30 booklets after SNLCEP.\n\n\nResults\nOverall, the study concluded that the SNLCEP had a significant effect on the quality of documentation of nursing care using Standardized Nursing Languages.", "title": "" }, { "docid": "938e44b4c03823584d9f9fb9209a9b1e", "text": "The competitive MNIST handwritten digit recognition benchmark has a long history of broken records since 1998. The most recent substantial improvement by others dates back 7 years (error rate 0.4%) . Recently we were able to significantly improve this result, using graphics cards to greatly speed up training of simple but deep MLPs, which achieved 0.35%, outperforming all the previous more complex methods. Here we report another substantial improvement: 0.31% obtained using a committee of MLPs.", "title": "" }, { "docid": "fe687739626916780ff22d95cf89f758", "text": "In this paper, we address the problem of jointly summarizing large sets of Flickr images and YouTube videos. Starting from the intuition that the characteristics of the two media types are different yet complementary, we develop a fast and easily-parallelizable approach for creating not only high-quality video summaries but also novel structural summaries of online images as storyline graphs. The storyline graphs can illustrate various events or activities associated with the topic in a form of a branching network. The video summarization is achieved by diversity ranking on the similarity graphs between images and video frames. The reconstruction of storyline graphs is formulated as the inference of sparse time-varying directed graphs from a set of photo streams with assistance of videos. For evaluation, we collect the datasets of 20 outdoor activities, consisting of 2.7M Flickr images and 16K YouTube videos. Due to the large-scale nature of our problem, we evaluate our algorithm via crowdsourcing using Amazon Mechanical Turk. In our experiments, we demonstrate that the proposed joint summarization approach outperforms other baselines and our own methods using videos or images only.", "title": "" }, { "docid": "0b61d0ffe709d29e133ead6d6211a003", "text": "The hypothesis that Enterococcus faecalis resists common intracanal medications by forming biofilms was tested. E. faecalis colonization of 46 extracted, medicated roots was observed with scanning electron microscopy (SEM) and scanning confocal laser microscopy. SEM detected colonization of root canals medicated with calcium hydroxide points and the positive control within 2 days. SEM detected biofilms in canals medicated with calcium hydroxide paste in an average of 77 days. Scanning confocal laser microscopy analysis of two calcium hydroxide paste medicated roots showed viable colonies forming in a root canal infected for 86 days, whereas in a canal infected for 160 days, a mushroom-shape typical of a biofilm was observed. Analysis by sodium dodecyl sulfate polyacrylamide gel electrophoresis showed no differences between the protein profiles of bacteria in free-floating (planktonic) and inoculum cultures. Analysis of biofilm bacteria was inconclusive. These observations support potential E. faecalis biofilm formation in vivo in medicated root canals.", "title": "" } ]
scidocsrr
1e0b95ca31bb557a980e9560c4e479c5
Trilinear Tensor: The Fundamental Construct of Multiple-view Geometry and Its Applications
[ { "docid": "5aa5ebf7727ea1b5dcf4d8f74b13cb29", "text": "Visual object recognition requires the matching of an image with a set of models stored in memory. In this paper, we propose an approach to recognition in which a 3-D object is represented by the linear combination of 2-D images of the object. IfJLk{M1,.” .Mk} is the set of pictures representing a given object and P is the 2-D image of an object to be recognized, then P is considered to be an instance of M if P= C~=,aiMi for some constants (pi. We show that this approach handles correctly rigid 3-D transformations of objects with sharp as well as smooth boundaries and can also handle nonrigid transformations. The paper is divided into two parts. In the first part, we show that the variety of views depicting the same object under different transformations can often be expressed as the linear combinations of a small number of views. In the second part, we suggest how this linear combination property may be used in the recognition process.", "title": "" } ]
[ { "docid": "2476c8b7f6fe148ab20c29e7f59f5b23", "text": "A high temperature, wire-bondless power electronics module with a double-sided cooling capability is proposed and successfully fabricated. In this module, a low-temperature co-fired ceramic (LTCC) substrate was used as the dielectric and chip carrier. Conducting vias were created on the LTCC carrier to realize the interconnection. The absent of a base plate reduced the overall thermal resistance and also improved the fatigue life by eliminating a large-area solder layer. Nano silver paste was used to attach power devices to the DBC substrate as well as to pattern the gate connection. Finite element simulations were used to compare the thermal performance to several reported double-sided power modules. Electrical measurements of a SiC MOSFET and SiC diode switching position demonstrated the functionality of the module.", "title": "" }, { "docid": "65ed76ddd6f7fd0aea717d2e2643dd16", "text": "In semi-supervised learning, a number of labeled examples are usually required for training an initial weakly useful predictor which is in turn used for exploiting the unlabeled examples. However, in many real-world applications there may exist very few labeled training examples, which makes the weakly useful predictor difficult to generate, and therefore these semisupervised learning methods cannot be applied. This paper proposes a method working under a two-view setting. By taking advantages of the correlations between the views using canonical component analysis, the proposed method can perform semi-supervised learning with only one labeled training example. Experiments and an application to content-based image retrieval validate the effectiveness of the proposed method.", "title": "" }, { "docid": "8e0b61e82179cc39b4df3d06448a3d14", "text": "The antibacterial activity and antioxidant effect of the compounds α-terpineol, linalool, eucalyptol and α-pinene obtained from essential oils (EOs), against pathogenic and spoilage forming bacteria were determined. The antibacterial activities of these compounds were observed in vitro on four Gram-negative and three Gram-positive strains. S. putrefaciens was the most resistant bacteria to all tested components, with MIC values of 2% or higher, whereas E. coli O157:H7 was the most sensitive strain among the tested bacteria. Eucalyptol extended the lag phase of S. Typhimurium, E. coli O157:H7 and S. aureus at the concentrations of 0.7%, 0.6% and 1%, respectively. In vitro cell growth experiments showed the tested compounds had toxic effects on all bacterial species with different level of potency. Synergistic and additive effects were observed at least one dose pair of combination against S. Typhimurium, E. coli O157:H7 and S. aureus, however antagonistic effects were not found in these combinations. The results of this first study are encouraging for further investigations on mechanisms of antimicrobial activity of these EO components.", "title": "" }, { "docid": "204ad3064d559c345caa2c6d1a140582", "text": "In this paper, a face recognition method based on Convolution Neural Network (CNN) is presented. This network consists of three convolution layers, two pooling layers, two full-connected layers and one Softmax regression layer. Stochastic gradient descent algorithm is used to train the feature extractor and the classifier, which can extract the facial features and classify them automatically. The Dropout method is used to solve the over-fitting problem. 
The Convolution Architecture For Feature Extraction framework (Caffe) is used during the training and testing process. The face recognition rate of the ORL face database and AR face database based on this network is 99.82% and 99.78%.", "title": "" }, { "docid": "c8948a93e138ca0ac8cae3247dc9c81a", "text": "Sharpness is an important determinant in visual assessment of image quality. The human visual system is able to effortlessly detect blur and evaluate sharpness of visual images, but the underlying mechanism is not fully understood. Existing blur/sharpness evaluation algorithms are mostly based on edge width, local gradient, or energy reduction of global/local high frequency content. Here we understand the subject from a different perspective, where sharpness is identified as strong local phase coherence (LPC) near distinctive image features evaluated in the complex wavelet transform domain. Previous LPC computation is restricted to be applied to complex coefficients spread in three consecutive dyadic scales in the scale-space. Here we propose a flexible framework that allows for LPC computation in arbitrary fractional scales. We then develop a new sharpness assessment algorithm without referencing the original image. We use four subject-rated publicly available image databases to test the proposed algorithm, which demonstrates competitive performance when compared with state-of-the-art algorithms.", "title": "" }, { "docid": "34bd41f7384d6ee4d882a39aec167b3e", "text": "This paper presents a robust feedback controller for ball and beam system (BBS). The BBS is a nonlinear system in which a ball has to be balanced on a particular beam position. The proposed nonlinear controller designed for the BBS is based upon Backstepping control technique which guarantees the boundedness of tracking error. To tackle the unknown disturbances, an external disturbance estimator (EDE) has been employed. The stability analysis of the overall closed loop robust control system has been worked out in the sense of Lyapunov theory. Finally, the simulation studies have been done to demonstrate the suitability of proposed scheme.", "title": "" }, { "docid": "bc8950644ded24618a65c4fcef302044", "text": "Child maltreatment is a pervasive problem in our society that has long-term detrimental consequences to the development of the affected child such as future brain growth and functioning. In this paper, we surveyed empirical evidence on the neuropsychological effects of child maltreatment, with a special emphasis on emotional, behavioral, and cognitive process–response difficulties experienced by maltreated children. The alteration of the biochemical stress response system in the brain that changes an individual’s ability to respond efficiently and efficaciously to future stressors is conceptualized as the traumatic stress response. Vulnerable brain regions include the hypothalamic–pituitary–adrenal axis, the amygdala, the hippocampus, and prefrontal cortex and are linked to children’s compromised ability to process both emotionally-laden and neutral stimuli in the future. It is suggested that information must be garnered from varied literatures to conceptualize a research framework for the traumatic stress response in maltreated children. 
This research framework suggests an altered developmental trajectory of information processing and emotional dysregulation, though much debate still exists surrounding the correlational nature of empirical studies, the potential of resiliency following childhood trauma, and the extent to which early interventions may facilitate recovery.", "title": "" }, { "docid": "f4baeef21537029511a59edbbe7f2741", "text": "Software testing requires the use of a model to guide such efforts as test selection and test verification. Often, such models are implicit, existing only in the head of a human tester, applying test inputs in an ad hoc fashion. The mental model testers build encapsulates application behavior, allowing testers to understand the application’s capabilities and more effectively test its range of possible behaviors. When these mental models are written down, they become sharable, reusable testing artifacts. In this case, testers are performing what has become to be known as model-based testing. Model-based testing has recently gained attention with the popularization of models (including UML) in software design and development. There are a number of models of software in use today, a few of which make good models for testing. This paper introduces model-based testing and discusses its tasks in general terms with finite state models (arguably the most popular software models) as examples. In addition, advantages, difficulties, and shortcoming of various model-based approaches are concisely presented. Finally, we close with a discussion of where model-based testing fits in the present and future of software engineering.", "title": "" }, { "docid": "dc2e98a7fbaf8b3dedd6eaf34730a9d3", "text": "Cultural issues impact on health care, including individuals’ health care behaviours and beliefs. Hasidic Jews, with their strict religious observance, emphasis on kabbalah, cultural insularity and spiritual leader, their Rebbe, comprise a distinct cultural group. The reviewed studies reveal that Hasidic Jews may seek spiritual healing and incorporate religion in their explanatory models of illness; illness attracts stigma; psychiatric patients’ symptomatology may have religious content; social and cultural factors may challenge health care delivery. The extant research has implications for clinical practice. However, many studies exhibited methodological shortcomings with authors providing incomplete analyses of the extent to which findings are authentically Hasidic. High-quality research is required to better inform the provision of culturally competent care to Hasidic patients.", "title": "" }, { "docid": "17b66811d671fbe77a935a9028c954ce", "text": "Research in management information systems often examines computer literacy as an independent variable. Study subjects may be asked to self-report their computer literacy and that literacy is then utilized as a research variable. However, it is not known whether self-reported computer literacy is a valid measure of a subject’s actual computer literacy. The research presented in this paper examined the question of whether self-reported computer literacy can be a reliable indication of actual computer literacy and therefore valid for use in empirical research. Study participants were surveyed and asked to self-report their level of computer literacy. Following, subjects were tested to determine an objective measure of computer literacy. The data analysis determined that self-reported computer literacy is not reliable. 
Results of this research are important for academic programs, for businesses, and for future empirical studies in management information systems.", "title": "" }, { "docid": "ac29c2091012ccfac993cc706eadbf3c", "text": "In this study 40 genotypes in a randomized complete block design with three replications for two years were planted in the region of Ardabil. The yield-related data and its components were subjected to a combined analysis of variance over the years. Results showed that there were significant differences among genotypes and a significant genotype × environment interaction. MLR and ANN methods were used to predict yield in barley. The fitted linear regression model for yield prediction was as follows: Reg = 1.75 + 0.883X1 + 0.05017X2 + 1.984X3. Yield prediction with a multi-layer perceptron neural network (ANN) was also implemented in Matlab, using one hidden layer of 15 neurons, the error back-propagation learning method and a hyperbolic tangent activation function. In both methods, the absolute value of the relative error was used as a deviation index of the estimates, and a paired t-test was applied to compare the mean deviation index of the two estimates. Results showed that the mean deviation index of estimation in the ANN technique was significantly lower, about one-third (1/3) of its value in the MLR, because the significant genotype × environment interaction impaired estimation by the MLR method. Therefore, when the genotype × environment interaction is significant, a neural network approach is recommended instead of regression for yield prediction because of its higher accuracy and faster estimation.", "title": "" }, { "docid": "3a6a97b2705d90b031ab1e065281465b", "text": "Common (Cinnamomum verum, C. zeylanicum) and cassia (C. aromaticum) cinnamon have a long history of use as spices and flavouring agents. A number of pharmacological and clinical effects have been observed with their use. The objective of this study was to systematically review the scientific literature for preclinical and clinical evidence of safety, efficacy, and pharmacological activity of common and cassia cinnamon. Using the principles of evidence-based practice, we searched 9 electronic databases and compiled data according to the grade of evidence found. One pharmacological study on antioxidant activity and 7 clinical studies on various medical conditions were reported in the scientific literature including type 2 diabetes (3), Helicobacter pylori infection (1), activation of olfactory cortex of the brain (1), oral candidiasis in HIV (1), and chronic salmonellosis (1). Two of 3 randomized clinical trials on type 2 diabetes provided strong scientific evidence that cassia cinnamon demonstrates a therapeutic effect in reducing fasting blood glucose by 10.3%–29%; the third clinical trial did not observe this effect. Cassia cinnamon, however, did not have an effect at lowering glycosylated hemoglobin (HbA1c). One randomized clinical trial reported that cassia cinnamon lowered total cholesterol, low-density lipoprotein cholesterol, and triglycerides; the other 2 trials, however, did not observe this effect. There was good scientific evidence that a species of cinnamon was not effective at eradicating H. pylori infection. 
Common cinnamon showed weak to very weak evidence of efficacy in treating oral candidiasis in HIV patients and chronic", "title": "" }, { "docid": "e971fd6eac427df9a68f10cad490b2db", "text": "We present a corpus of 5,000 richly annotated abstracts of medical articles describing clinical randomized controlled trials. Annotations include demarcations of text spans that describe the Patient population enrolled, the Interventions studied and to what they were Compared, and the Outcomes measured (the 'PICO' elements). These spans are further annotated at a more granular level, e.g., individual interventions within them are marked and mapped onto a structured medical vocabulary. We acquired annotations from a diverse set of workers with varying levels of expertise and cost. We describe our data collection process and the corpus itself in detail. We then outline a set of challenging NLP tasks that would aid searching of the medical literature and the practice of evidence-based medicine.", "title": "" }, { "docid": "a55224bcd659f67314e7ef31e0fd0756", "text": "Dopamine neurons located in the midbrain play a role in motivation that regulates approach behavior (approach motivation). In addition, activation and inactivation of dopamine neurons regulate mood and induce reward and aversion, respectively. Accumulating evidence suggests that such motivational role of dopamine neurons is not limited to those located in the ventral tegmental area, but also in the substantia nigra. The present paper reviews previous rodent work concerning dopamine's role in approach motivation and the connectivity of dopamine neurons, and proposes two working models: One concerns the relationship between extracellular dopamine concentration and approach motivation. High, moderate and low concentrations of extracellular dopamine induce euphoric, seeking and aversive states, respectively. The other concerns circuit loops involving the cerebral cortex, basal ganglia, thalamus, epithalamus, and midbrain through which dopaminergic activity alters approach motivation. These models should help to generate hypothesis-driven research and provide insights for understanding altered states associated with drugs of abuse and affective disorders.", "title": "" }, { "docid": "af836023436eaa65ef55f9928312e73f", "text": "We present a probabilistic approach to learning a Gaussian Process classifier in the presence of unlabeled data. Our approach involves a “null category noise model” (NCNM) inspired by ordered categorical noise models. The noise model reflects an assumption that the data density is lower between the class-conditional densities. We illustrate our approach on a toy problem and present comparative results for the semi-supervised classification of handwritten digits.", "title": "" }, { "docid": "43f2dcf2f2260ff140e20380d265105b", "text": "As ontologies are the backbone of the Semantic Web, they attract much attention from researchers and engineers in many domains. This results in an increasing number of ontologies and semantic web applications. The number and complexity of such ontologies makes it hard for developers of ontologies and tools to decide which ontologies to use and reuse. To simplify the problem, a modularization algorithm can be used to partition ontologies into sets of modules. In order to evaluate the quality of modularization, we propose a new evaluation metric that quantifies the goodness of ontology modularization. 
In particular, we investigate the ontology module homogeneity, which assesses module cohesion, and the ontology module heterogeneity, which appraises module coupling. The experimental results demonstrate that the proposed metric is effective.", "title": "" }, { "docid": "d74131a431ca54f45a494091e576740c", "text": "In today’s highly competitive business environments with shortened product and technology life cycle, it is critical for software industry to continuously innovate. This goal can be achieved by developing a better understanding and control of the activities and determinants of innovation. Innovation measurement initiatives assess innovation capability, output and performance to help develop such an understanding. This study explores various aspects relevant to innovation measurement ranging from definitions, measurement frameworks and metrics that have been proposed in literature and used in practice. A systematic literature review followed by an online questionnaire and interviews with practitioners and academics were employed to identify a comprehensive definition of innovation that can be used in software industry. The metrics for the evaluation of determinants, inputs, outputs and performance were also aggregated and categorised. Based on these findings, a conceptual model of the key measurable elements of innovation was constructed from the findings of the systematic review. The model was further refined after feedback from academia and industry through interviews.", "title": "" }, { "docid": "8a32bdadcaa2c94f83e95c19e400835b", "text": "Create a short summary of your paper (200 words), double-spaced. Your summary will say something like: In this action research study of my classroom of 7 grade mathematics, I investigated ______. I discovered that ____________. As a result of this research, I plan to ___________. You now begin your paper. Pages should be numbered, with the first page of text following the abstract as page one. (In Microsoft Word: after your abstract, rather than inserting a “page break” insert a “section break” to start on the next page; this will allow you to start the 3 page being numbered as page 1). You should divide this report of your research into sections. We should be able to identity the following sections and you may use these headings (headings should be bold, centered, and capitalized). Consider the page length to be a minimum.", "title": "" }, { "docid": "c0a51f27931d8314b73a7de969bdfb08", "text": "Organizations need practical security benchmarking tools in order to plan effective security strategies. This paper explores a number of techniques that can be used to measure security within an organization. It proposes a benchmarking methodology that produces results that are of strategic importance to both decision makers and technology implementers.", "title": "" }, { "docid": "27c2c015c6daaac99b34d00845ec646c", "text": "Virtual worlds, such as Second Life and Everquest, have grown into virtual game communities that have economic potential. In such communities, virtual items are bought and sold between individuals for real money. The study detailed in this paper aims to identify, model and test the individual determinants for the decision to purchase virtual items within virtual game communities. 
A comprehensive understanding of these key determinants will enable researchers to further the understanding of player behavior towards virtual item transactions, which are an important aspect of the economic system within virtual games and often raise one of the biggest challenges for game community operators. A model will be developed via a mixture of new constructs and established theories, including the theory of planned behavior (TPB), the technology acceptance model (TAM), trust theory and unified theory of acceptance and use of technology (UTAUT). For this purpose the research uses a sequential, multi-method approach in two phases: combining the use of inductive, qualitative data from focus groups and expert interviews in phase one; and deductive, quantitative survey data in phase two. The final model will hopefully provide an impetus to further research in the area of virtual game community transaction behavior. The paper rounds off with a discussion of further research challenges in this area over the next seven years.", "title": "" } ]
scidocsrr
b73d93873caf89e0be871c66a216b066
38 GHz and 60 GHz angle-dependent propagation for cellular & peer-to-peer wireless communications
[ { "docid": "c67010d61ec7f9ea839bbf7d2dce72a1", "text": "Almost all cellular mobile communications including first generation analog systems, second generation digital systems, third generation WCDMA, and fourth generation OFDMA systems use Ultra High Frequency (UHF) band of radio spectrum with frequencies in the range of 300MHz-3GHz. This band of spectrum is becoming increasingly crowded due to spectacular growth in mobile data and other related services. More recently, there have been proposals to explore mmWave spectrum (3-300GHz) for commercial mobile applications due to its unique advantages such as spectrum availability and small component sizes. In this paper, we discuss system design aspects such as antenna array design, base station and mobile station requirements. We also provide system performance and SINR geometry results to demonstrate the feasibility of an outdoor mmWave mobile broadband communication system. We note that with adaptive antenna array beamforming, multi-Gbps data rates can be supported for mobile cellular deployments.", "title": "" } ]
[ { "docid": "36460eda2098bdcf3810828f54ee7d2b", "text": "[This corrects the article on p. 662 in vol. 60, PMID: 27729694.].", "title": "" }, { "docid": "7ad0a3e21de90ae5578626c12b42e666", "text": "Social media are a primary means for travelers to connect with each other and plan trips. They can also help tourism suppliers (e.g., by providing relevant information), thus overcoming the shortcomings of traditional information sources. User-generated content from social media has already been used in many studies as a primary information source. However, the quality of information derived thus far remains largely unclear. This study assesses the quality of macro-level information on the spatio-temporal distribution of tourism derived from online travel reviews in social media in terms of completeness, timeliness, and accuracy. We found that information quality increased from 2000 to 2009 as online travel reviews increasingly covered more countries, became available earlier than statistics reported by the United Nations World Tourism Organization (UNWTO), were highly correlated with the UNWTO statistics. We conclude that social media are a good information source for macro-level spatio-temporal tourism information and could be used, for example, to estimate tourism figures.", "title": "" }, { "docid": "3e3fd0a457f9469e490de9ea40c04c61", "text": "Thousands of historically revealing cuneiform clay tablets, which were inscribed in Mesopotamia millenia ago, still exist today. Visualizing cuneiform writing is important when deciphering what is written on the tablets. It is also important when reproducing the tablets in papers and books. Unfortunately, scholars have found photographs to be an inadequate visualization tool, for two reasons. First, the text wraps around the sides of some tablets, so a single viewpoint is insufficient. Second, a raking light will illuminate some textual features, but will leave others shadowed or invisible because they are either obscured by features on the tablet or are nearly aligned with the lighting direction. We present solutions to these problems by first creating a high-resolution 3D computer model from laser range data, then unwrapping and flattening the inscriptions on the model to a plane, allowing us to represent them as a scalar displacement map, and finally, rendering this map non-photorealistically using accessibility and curvature coloring. The output of this semiautomatic process enables all of a tablet’s text to be perceived in a single concise image. Our technique can also be applied to other types of inscribed surfaces, including bas-reliefs.", "title": "" }, { "docid": "c4be39977487cdebc8127650c8eda433", "text": "Unfavorable wake and separated flow from the hull might cause a dramatic decay of the propeller performance in single-screw propelled vessels such as tankers, bulk carriers and containers. For these types of vessels, special attention has to be paid to the design of the stern region, the occurrence of a good flow towards the propeller and rudder being necessary to avoid separation and unsteady loads on the propeller blades and, thus, to minimize fuel consumption and the risk for cavitation erosion and vibrations. The present work deals with the analysis of the propeller inflow in a single-screw chemical tanker vessel affected by massive flow separation in the stern region. Detailed flow measurements by Laser Doppler Velocimetry (LDV) were performed in the propeller region at model scale, in the Large Circulating Water Channel of CNR-INSEAN. 
Tests were undertaken with and without propeller in order to investigate its effect on the inflow characteristics and the separation mechanisms. In this regard, the study concerned also a phase locked analysis of the propeller perturbation at different distances upstream of the propulsor. The study shows the effectiveness of the 3 order statistical moment (i.e. skewness) for describing the topology of the wake and accurately identifying the portion affected by the detached flow.", "title": "" }, { "docid": "9d273b1118940525d564edec073a9dfa", "text": "A set of 1.4 million biomedical papers was analyzed with regards to how often articles are mentioned on Twitter or saved by users on Mendeley. While Twitter is a microblogging platform used by a general audience to distribute information, Mendeley is a reference manager targeted at an academic user group to organize scholarly literature. Both platforms are used as sources for so-called “altmetrics” to measure a new kind of research impact. This analysis shows in how far they differ and compare to traditional citation impact metrics based on a large set of PubMed papers.", "title": "" }, { "docid": "bb2ad600e0e90a1a349e39ce0f097277", "text": "Tongue drive system (TDS) is a tongue-operated, minimally invasive, unobtrusive, and wireless assistive technology (AT) that infers users' intentions by detecting their voluntary tongue motion and translating them into user-defined commands. Here we present the new intraoral version of the TDS (iTDS), which has been implemented in the form of a dental retainer. The iTDS system-on-a-chip (SoC) features a configurable analog front-end (AFE) that reads the magnetic field variations inside the mouth from four 3-axial magnetoresistive sensors located at four corners of the iTDS printed circuit board (PCB). A dual-band transmitter (Tx) on the same chip operates at 27 and 432 MHz in the Industrial/Scientific/Medical (ISM) band to allow users to switch in the presence of external interference. The Tx streams the digitized samples to a custom-designed TDS universal interface, built from commercial off-the-shelf (COTS) components, which delivers the iTDS data to other devices such as smartphones, personal computers (PC), and powered wheelchairs (PWC). Another key block on the iTDS SoC is the power management integrated circuit (PMIC), which provides individually regulated and duty-cycled 1.8 V supplies for sensors, AFE, Tx, and digital control blocks. The PMIC also charges a 50 mAh Li-ion battery with constant current up to 4.2 V, and recovers data and clock to update its configuration register through a 13.56 MHz inductive link. The iTDS SoC has been implemented in a 0.5-μm standard CMOS process and consumes 3.7 mW on average.", "title": "" }, { "docid": "672c11254309961fe02bc48827f8949e", "text": "HIV-1 integration into the host genome favors actively transcribed genes. Prior work indicated that the nuclear periphery provides the architectural basis for integration site selection, with viral capsid-binding host cofactor CPSF6 and viral integrase-binding cofactor LEDGF/p75 contributing to selection of individual sites. Here, by investigating the early phase of infection, we determine that HIV-1 traffics throughout the nucleus for integration. CPSF6-capsid interactions allow the virus to bypass peripheral heterochromatin and penetrate the nuclear structure for integration. 
Loss of interaction with CPSF6 dramatically alters virus localization toward the nuclear periphery and integration into transcriptionally repressed lamina-associated heterochromatin, while loss of LEDGF/p75 does not significantly affect intranuclear HIV-1 localization. Thus, CPSF6 serves as a master regulator of HIV-1 intranuclear localization by trafficking viral preintegration complexes away from heterochromatin at the periphery toward gene-dense chromosomal regions within the nuclear interior.", "title": "" }, { "docid": "31338a16eca7c0f60b789c38f2774816", "text": "As a promising area in artificial intelligence, a new learning paradigm, called Small Sample Learning (SSL), has been attracting prominent research attention in the recent years. In this paper, we aim to present a survey to comprehensively introduce the current techniques proposed on this topic. Specifically, current SSL techniques can be mainly divided into two categories. The first category of SSL approaches can be called “concept learning”, which emphasizes learning new concepts from only few related observations. The purpose is mainly to simulate human learning behaviors like recognition, generation, imagination, synthesis and analysis. The second category is called “experience learning”, which usually co-exists with the large sample learning manner of conventional machine learning. This category mainly focuses on learning with insufficient samples, and can also be called small data learning in some literatures. More extensive surveys on both categories of SSL techniques are introduced and some neuroscience evidences are provided to clarify the rationality of the entire SSL regime, and the relationship with human learning process. Some discussions on the main challenges and possible future research directions along this line are also presented.", "title": "" }, { "docid": "22b86cdb894eb6a4118d574822b8f952", "text": "This paper addresses view-invariant object detection and pose estimation from a single image. While recent work focuses on object-centered representations of point-based object features, we revisit the viewer-centered framework, and use image contours as basic features. Given training examples of arbitrary views of an object, we learn a sparse object model in terms of a few view-dependent shape templates. The shape templates are jointly used for detecting object occurrences and estimating their 3D poses in a new image. Instrumental to this is our new mid-level feature, called bag of boundaries (BOB), aimed at lifting from individual edges toward their more informative summaries for identifying object boundaries amidst the background clutter. In inference, BOBs are placed on deformable grids both in the image and the shape templates, and then matched. This is formulated as a convex optimization problem that accommodates invariance to non-rigid, locally affine shape deformations. Evaluation on benchmark datasets demonstrates our competitive results relative to the state of the art.", "title": "" }, { "docid": "2e9b98fbb1fa15020b374dbd48fb5adc", "text": "Recently, bipolar fuzzy sets have been studied and applied a bit enthusiastically and a bit increasingly. In this paper we prove that bipolar fuzzy sets and [0,1](2)-sets (which have been deeply studied) are actually cryptomorphic mathematical notions. 
Since researches or modelings on real world problems often involve multi-agent, multi-attribute, multi-object, multi-index, multi-polar information, uncertainty, or/and limit process, we put forward (or highlight) the notion of m-polar fuzzy set (actually, [0,1] (m)-set which can be seen as a generalization of bipolar fuzzy set, where m is an arbitrary ordinal number) and illustrate how many concepts have been defined based on bipolar fuzzy sets and many results which are related to these concepts can be generalized to the case of m-polar fuzzy sets. We also give examples to show how to apply m-polar fuzzy sets in real world problems.", "title": "" }, { "docid": "ad6dc9f74e0fa3c544c4123f50812e14", "text": "An ultra-wideband transition from microstrip to stripline in PCB technology is presented applying only through via holes for simple fabrication. The design is optimized using full-wave EM simulations. A prototype is manufactured and measured achieving a return loss better than 8.7dB and an insertion loss better than 1.2 dB in the FCC frequency range. A meander-shaped delay line in stripline technique is presented as an example of application.", "title": "" }, { "docid": "c5113ff741d9e656689786db10484a07", "text": "Pulmonary administration of drugs presents several advantages in the treatment of many diseases. Considering local and systemic delivery, drug inhalation enables a rapid and predictable onset of action and induces fewer side effects than other routes of administration. Three main inhalation systems have been developed for the aerosolization of drugs; namely, nebulizers, pressurized metered-dose inhalers (MDIs) and dry powder inhalers (DPIs). The latter are currently the most convenient alternative as they are breath-actuated and do not require the use of any propellants. The deposition site in the respiratory tract and the efficiency of inhaled aerosols are critically influenced by the aerodynamic diameter, size distribution, shape and density of particles. In the case of DPIs, since micronized particles are generally very cohesive and exhibit poor flow properties, drug particles are usually blended with coarse and fine carrier particles. This increases particle aerodynamic behavior and flow properties of the drugs and ensures accurate dosage of active ingredients. At present, particles with controlled properties are obtained by milling, spray drying or supercritical fluid techniques. Several excipients such as sugars, lipids, amino acids, surfactants, polymers and absorption enhancers have been tested for their efficacy in improving drug pulmonary administration. The purpose of this article is to describe various observations that have been made in the field of inhalation product development, especially for the dry powder inhalation formulation, and to review the use of various additives, their effectiveness and their potential toxicity for pulmonary administration.", "title": "" }, { "docid": "fc32d0734ea83a4252339c6a2f98b0ee", "text": "The security of Android depends on the timely delivery of updates to fix critical vulnerabilities. In this paper we map the complex network of players in the Android ecosystem who must collaborate to provide updates, and determine that inaction by some manufacturers and network operators means many handsets are vulnerable to critical vulnerabilities. We define the FUM security metric to rank the performance of device manufacturers and network operators, based on their provision of updates and exposure to critical vulnerabilities. 
Using a corpus of 20 400 devices we show that there is significant variability in the timely delivery of security updates across different device manufacturers and network operators. This provides a comparison point for purchasers and regulators to determine which device manufacturers and network operators provide security updates and which do not. We find that on average 87.7% of Android devices are exposed to at least one of 11 known critical vulnerabilities and, across the ecosystem as a whole, assign a FUM security score of 2.87 out of 10. In our data, Nexus devices do considerably better than average with a score of 5.17; and LG is the best manufacturer with a score of 3.97.", "title": "" }, { "docid": "c5dd31facf6d1f7709d58e7b0ddc0bab", "text": "Website fingerprinting attacks allow a local, passive eavesdropper to identify a web browsing client’s destination web page by extracting noticeable and unique features from her traffic. Such attacks magnify the gap between privacy and security — a client who encrypts her communication traffic may still have her browsing behaviour exposed to lowcost eavesdropping. Previous authors have shown that privacysensitive clients who use anonymity technologies such as Tor are susceptible to website fingerprinting attacks, and some attacks have been shown to outperform others in specific experimental conditions. However, as these attacks differ in data collection, feature extraction and experimental setup, they cannot be compared directly. On the other side of the coin, proposed website fingerprinting defenses (countermeasures) are generally designed and tested only against specific attacks. Some defenses have been shown to fail against more advanced attacks, and it is unclear which defenses would be effective against all attacks. In this paper, we propose a feature-based comparative methodology that allows us to systematize attacks and defenses in order to compare them. We analyze attacks for their sensitivity to different packet sequence features, and analyze the effect of proposed defenses on these features by measuring whether or not the features are hidden. If a defense fails to hide a feature that an attack is sensitive to, then the defense will not work against this attack. Using this methodology, we propose a new network layer defense that can more effectively hide all of the features we consider.", "title": "" }, { "docid": "b959bce5ea9db71d677586eb1b6f023e", "text": "We consider autonomous racing of two cars and present an approach to formulate the decision making as a non-cooperative non-zero-sum game. The game is formulated by restricting both players to fulfill static track constraints as well as collision constraints which depend on the combined actions of the two players. At the same time the players try to maximize their own progress. In the case where the action space of the players is finite, the racing game can be reformulated as a bimatrix game. For this bimatrix game, we show that the actions obtained by a sequential maximization approach where only the follower considers the action of the leader are identical to a Stackelberg and a Nash equilibrium in pure strategies. Furthermore, we propose a game promoting blocking, by additionally rewarding the leading car for staying ahead at the end of the horizon. We show that this changes the Stackelberg equilibrium, but has a minor influence on the Nash equilibria. 
For an online implementation, we propose to play the games in a moving horizon fashion, and we present two methods for guaranteeing feasibility of the resulting coupled repeated games. Finally, we study the performance of the proposed approaches in simulation for a set-up that replicates the miniature race car tested at the Automatic Control Laboratory of ETH Zürich. The simulation study shows that the presented games can successfully model different racing behaviors and generate interesting racing situations.", "title": "" }, { "docid": "fd8a677dffe737d61ebd0e30b91595e9", "text": "Despite outstanding success in vision amongst other domains, many of the recent deep learning approaches have evident drawbacks for robots. This manuscript surveys recent work in the literature that pertain to applying deep learning systems to the robotics domain, either as means of estimation or as a tool to resolve motor commands directly from raw percepts. These recent advances are only a piece to the puzzle. We suggest that deep learning as a tool alone is insufficient in building a unified framework to acquire general intelligence. For this reason, we complement our survey with insights from cognitive development and refer to ideas from classical control theory, producing an integrated direction for a lifelong learning architecture.", "title": "" }, { "docid": "73b239e6449d82c0d9b1aaef0e9e1d23", "text": "While navigating in an environment, a vision system has to be able to recognize where it is and what the main objects in the scene are. In this paper we present a contextbased vision system for place and object recognition. The goal is to identify familiar locations (e.g., office 610, conference room 941, Main Street), to categorize new environments (office, corridor, street) and to use that information to provide contextual priors for object recognition (e.g., tables are more likely in an office than a street). We present a low-dimensional global image representation that provides relevant information for place recognition and categorization, and show how such contextual information introduces strong priors that simplify object recognition. We have trained the system to recognize over 60 locations (indoors and outdoors) and to suggest the presence and locations of more than 20 different object types. The algorithm has been integrated into a mobile system that provides realtime feedback to the user.", "title": "" }, { "docid": "2878ed8d0da40bd3363f7b8eabb79faf", "text": "In this chapter, we present the current knowledge on de novo assembly, growth, and dynamics of striated myofibrils, the functional architectural elements developed in skeletal and cardiac muscle. The data were obtained in studies of myofibrils formed in cultures of mouse skeletal and quail myotubes, in the somites of living zebrafish embryos, and in mouse neonatal and quail embryonic cardiac cells. The comparative view obtained revealed that the assembly of striated myofibrils is a three-step process progressing from premyofibrils to nascent myofibrils to mature myofibrils. This process is specified by the addition of new structural proteins, the arrangement of myofibrillar components like actin and myosin filaments with their companions into so-called sarcomeres, and in their precise alignment. Accompanying the formation of mature myofibrils is a decrease in the dynamic behavior of the assembling proteins. 
Proteins are most dynamic in the premyofibrils during the early phase and least dynamic in mature myofibrils in the final stage of myofibrillogenesis. This is probably due to increased interactions between proteins during the maturation process. The dynamic properties of myofibrillar proteins provide a mechanism for the exchange of older proteins or a change in isoforms to take place without disassembling the structural integrity needed for myofibril function. An important aspect of myofibril assembly is the role of actin-nucleating proteins in the formation, maintenance, and sarcomeric arrangement of the myofibrillar actin filaments. This is a very active field of research. We also report on several actin mutations that result in human muscle diseases.", "title": "" }, { "docid": "f09f1d074b1d9c72628b8eb90bce4904", "text": "Compressive Sensing, as an emerging technique in signal processing is reviewed in this paper together with its’ common applications. As an alternative to the traditional signal sampling, Compressive Sensing allows a new acquisition strategy with significantly reduced number of samples needed for accurate signal reconstruction. The basic ideas and motivation behind this approach are provided in the theoretical part of the paper. The commonly used algorithms for missing data reconstruction are presented. The Compressive Sensing applications have gained significant attention leading to an intensive growth of signal processing possibilities. Hence, some of the existing practical applications assuming different types of signals in real-world scenarios are described and analyzed as well.", "title": "" }, { "docid": "b902e6a423f6703be8ef06f77a246990", "text": "The predictive value of a comprehensive model with personality characteristics, stressor related cognitions, coping and social support was tested in a sample of 187 nonpregnant women. The emotional response to the unsuccessful treatment was predicted out of vulnerability factors assessed before the start of the treatment. The results indicated the importance of neuroticism as a vulnerability factor in emotional response to a severe stressor. They also underlined the importance of helplessness and marital dissatisfaction as additional risk factors, and acceptance and perceived social support as additional protective factors, in the development of anxiety and depression after a failed fertility treatment. From clinical point of view, these results suggest fertility-related cognitions and social support should receive attention when counselling women undergoing IVF or ICSI treatment.", "title": "" } ]
scidocsrr
018fa56d63f6b3cc429b38b9385a4aa9
A Survey on Facial Expression Recognition Techniques
[ { "docid": "ee58216dd7e3a0d8df8066703b763187", "text": "Extraction of discriminative features from salient facial patches plays a vital role in effective facial expression recognition. The accurate detection of facial landmarks improves the localization of the salient patches on face images. This paper proposes a novel framework for expression recognition by using appearance features of selected facial patches. A few prominent facial patches, depending on the position of facial landmarks, are extracted which are active during emotion elicitation. These active patches are further processed to obtain the salient patches which contain discriminative features for classification of each pair of expressions, thereby selecting different facial patches as salient for different pair of expression classes. One-against-one classification method is adopted using these features. In addition, an automated learning-free facial landmark detection technique has been proposed, which achieves similar performances as that of other state-of-art landmark detection methods, yet requires significantly less execution time. The proposed method is found to perform well consistently in different resolutions, hence, providing a solution for expression recognition in low resolution images. Experiments on CK+ and JAFFE facial expression databases show the effectiveness of the proposed system.", "title": "" } ]
[ { "docid": "48a8790474498af81f662f8195925570", "text": "Synthetic biology is a rapidly expanding discipline at the interface between engineering and biology. Much research in this area has focused on gene regulatory networks that function as biological switches and oscillators. Here we review the state of the art in the design and construction of oscillators, comparing the features of each of the main networks published to date, the models used for in silico design and validation and, where available, relevant experimental data. Trends are apparent in the ways that network topology constrains oscillator characteristics and dynamics. Also, noise and time delay within the network can both have constructive and destructive roles in generating oscillations, and stochastic coherence is commonplace. This review can be used to inform future work to design and implement new types of synthetic oscillators or to incorporate existing oscillators into new designs.", "title": "" }, { "docid": "93a8b45a6bd52f1838b1052d1fca22fc", "text": "LSHTC is a series of challenges which aims to assess the performance of classification systems in large-scale classification in a a large number of classes (up to hundreds of thousands). This paper describes the dataset that have been released along the LSHTC series. The paper details the construction of the datsets and the design of the tracks as well as the evaluation measures that we implemented and a quick overview of the results. All of these datasets are available online and runs may still be submitted on the online server of the challenges.", "title": "" }, { "docid": "498c217fb910a5b4ca6bcdc83f98c11b", "text": "Theodor Wilhelm Engelmann (1843–1909), who had a creative life in music, muscle physiology, and microbiology, developed a sensitive method for tracing the photosynthetic oxygen production of unicellular plants by means of bacterial aerotaxis (chemotaxis). He discovered the absorption spectrum of bacteriopurpurin (bacteriochlorophyll a) and the scotophobic response, photokinesis, and photosynthesis of purple bacteria.", "title": "" }, { "docid": "1991322dce13ee81885f12322c0e0f79", "text": "The quality of the interpretation of the sentiment in the online buzz in the social media and the online news can determine the predictability of financial markets and cause huge gains or losses. That is why a number of researchers have turned their full attention to the different aspects of this problem lately. However, there is no well-rounded theoretical and technical framework for approaching the problem to the best of our knowledge. We believe the existing lack of such clarity on the topic is due to its interdisciplinary nature that involves at its core both behavioral-economic topics as well as artificial intelligence. We dive deeper into the interdisciplinary nature and contribute to the formation of a clear frame of discussion. We review the related works that are about market prediction based on onlinetext-mining and produce a picture of the generic components that they all have. We, furthermore, compare each system with the rest and identify their main differentiating factors. Our comparative analysis of the systems expands onto the theoretical and technical foundations behind each. This work should help the research community to structure this emerging field and identify the exact aspects which require further research and are of special significance. 2014 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "089343ba0d94a96d6a583f1becfd7b46", "text": "In this paper we study fundamental properties of minimum inter-event times in event-triggered control systems, both in the absence and presence of external disturbances. This analysis reveals, amongst others, that for several popular event-triggering mechanisms no positive minimum inter-event time can be guaranteed in the presence of arbitrary small external disturbances. This clearly shows that it is essential to include the effects of external disturbances in the analysis of the computation/communication properties of event-triggered control systems. In fact, this paper also identifies event-triggering mechanisms that do exhibit these important event-separation properties.", "title": "" }, { "docid": "27101c9dcb89149b68d3ad47b516db69", "text": "A brain-computer interface (BCI) is a hardware and software communications system that permits cerebral activity alone to control computers or external devices. The immediate goal of BCI research is to provide communications capabilities to severely disabled people who are totally paralyzed or 'locked in' by neurological neuromuscular disorders, such as amyotrophic lateral sclerosis, brain stem stroke, or spinal cord injury. Here, we review the state-of-the-art of BCIs, looking at the different steps that form a standard BCI: signal acquisition, preprocessing or signal enhancement, feature extraction, classification and the control interface. We discuss their advantages, drawbacks, and latest advances, and we survey the numerous technologies reported in the scientific literature to design each step of a BCI. First, the review examines the neuroimaging modalities used in the signal acquisition step, each of which monitors a different functional brain activity such as electrical, magnetic or metabolic activity. Second, the review discusses different electrophysiological control signals that determine user intentions, which can be detected in brain activity. Third, the review includes some techniques used in the signal enhancement step to deal with the artifacts in the control signals and improve the performance. Fourth, the review studies some mathematic algorithms used in the feature extraction and classification steps which translate the information in the control signals into commands that operate a computer or other device. Finally, the review provides an overview of various BCI applications that control a range of devices.", "title": "" }, { "docid": "63f2acd6dd82e0aa5b414c2658da44d5", "text": "La importancia creciente de la Administración de la Producción/Operaciones está relacionada con la superación del enfoque racionalizador y centralizador de la misión de esta área de las organizaciones. El análisis, el diagnóstico y la visión estratégica de la Dirección de Operaciones permiten a la empresa acomodarse a los cambios que exige la economía moderna. Una efectiva gestión, con un flujo constante de la información, una organización del trabajo adecuada y una estructura que fomente la participación, son instrumentos imprescindibles para que las Operaciones haga su trabajo.", "title": "" }, { "docid": "f9119710fb15af38bc823e25eec5653b", "text": "The emergence of knowledge-based economies has placed an importance on effective management of knowledge. The effective management of knowledge has been described as a critical ingredient for organisation seeking to ensure sustainable strategic competitive advantage. 
This paper reviews literature in the area of knowledge management to bring out the importance of knowledge management in organisation. The paper is able to demonstrate that knowledge management is a key driver of organisational performance and a critical tool for organisational survival, competitiveness and profitability. Therefore creating, managing, sharing and utilizing knowledge effectively is vital for organisations to take full advantage of the value of knowledge. The paper also contributes that, in order for organisations to manage knowledge effectively, attention must be paid on three key components people, processes and technology. In essence, to ensure organisation’s success, the focus should be to connect people, processes, and technology for the purpose of leveraging knowledge.", "title": "" }, { "docid": "e9cc899155bd5f88ae1a3d5b88de52af", "text": "This article reviews research evidence showing to what extent the chronic care model can improve the management of chronic conditions (using diabetes as an example) and reduce health care costs. Thirty-two of 39 studies found that interventions based on chronic care model components improved at least 1 process or outcome measure for diabetic patients. Regarding whether chronic care model interventions can reduce costs, 18 of 27 studies concerned with 3 examples of chronic conditions (congestive heart failure, asthma, and diabetes) demonstrated reduced health care costs or lower use of health care services. Even though the chronic care model has the potential to improve care and reduce costs, several obstacles hinder its widespread adoption.", "title": "" }, { "docid": "2089349f4f1dae4d07dfec8481ba748e", "text": "A signiicant limitation of neural networks is that the representations they learn are usually incomprehensible to humans. We present a novel algorithm, Trepan, for extracting comprehensible, symbolic representations from trained neural networks. Our algorithm uses queries to induce a decision tree that approximates the concept represented by a given network. Our experiments demonstrate that Trepan is able to produce decision trees that maintain a high level of delity to their respective networks while being com-prehensible and accurate. Unlike previous work in this area, our algorithm is general in its applicability and scales well to large networks and problems with high-dimensional input spaces.", "title": "" }, { "docid": "39bf7e3a8e75353a3025e2c0f18768f9", "text": "Ligament reconstruction is the current standard of care for active patients with an anterior cruciate ligament (ACL) rupture. Although the majority of ACL reconstruction (ACLR) surgeries successfully restore the mechanical stability of the injured knee, postsurgical outcomes remain widely varied. Less than half of athletes who undergo ACLR return to sport within the first year after surgery, and it is estimated that approximately 1 in 4 to 1 in 5 young, active athletes who undergo ACLR will go on to a second knee injury. The outcomes after a second knee injury and surgery are significantly less favorable than outcomes after primary injuries. As advances in graft reconstruction and fixation techniques have improved to consistently restore passive joint stability to the preinjury level, successful return to sport after ACLR appears to be predicated on numerous postsurgical factors. Importantly, a secondary ACL injury is most strongly related to modifiable postsurgical risk factors. 
Biomechanical abnormalities and movement asymmetries, which are more prevalent in this cohort than previously hypothesized, can persist despite high levels of functional performance, and also represent biomechanical and neuromuscular control deficits and imbalances that are strongly associated with secondary injury incidence. Decreased neuromuscular control and high-risk movement biomechanics, which appear to be heavily influenced by abnormal trunk and lower extremity movement patterns, not only predict first knee injury risk but also reinjury risk. These seminal findings indicate that abnormal movement biomechanics and neuromuscular control profiles are likely both residual to, and exacerbated by, the initial injury. Evidence-based medicine (EBM) strategies should be used to develop effective, efficacious interventions targeted to these impairments to optimize the safe return to high-risk activity. In this Current Concepts article, the authors present the latest evidence related to risk factors associated with ligament failure or a secondary (contralateral) injury in athletes who return to sport after ACLR. From these data, they propose an EBM paradigm shift in postoperative rehabilitation and return-to-sport training after ACLR that is focused on the resolution of neuromuscular deficits that commonly persist after surgical reconstruction and standard rehabilitation of athletes.", "title": "" }, { "docid": "2ff08c8505e7d68304b63c6942feb837", "text": "This paper presents a Retrospective Event Detection algorithm, called Eventy-Topic Detection (ETD), which automatically generates topics that describe events in a large, temporal text corpus. Our approach leverages the structure of the topic modeling framework, specifically the Latent Dirichlet Allocation (LDA), to generate topics which are then later labeled as Eventy-Topics or non-Eventy-Topics. The system first runs daily LDA topic models, then calculates the cosine similarity between the topics of the daily topic models, and then runs our novel Bump-Detection algorithm. Similar topics labeled as an Eventy-Topic are then grouped together. The algorithm is demonstrated on two Terabyte sized corpuses a Reuters News corpus and a Twitter corpus. Our method is evaluated on a human annotated test set. Our algorithm demonstrates its ability to accurately describe and label events in a temporal text corpus.", "title": "" }, { "docid": "928e7a7abf63b8e1da14976d030f38b8", "text": "A novel Vivaldi antenna structure is proposed to broaden the bandwidth of the conventional ones. The theory of the equivalent circuit is adopted, and it is deduced that the bandwidth of the antenna can be enhanced by the high chip resistor and short pin in the new structure. An antenna of 62 mm (length) times 70 mm (width)times 0.5 mm (thickness) is designed and fabricated. The measure results show that the bandwidth is 1~ 20 GHz ( VSWR les2), while the gain varies between 0.9 and 7.8 dB. It is indicated that the antenna can be reduced to about a half of the conventional ones.", "title": "" }, { "docid": "2c73318b59e5d7101884f2563dd700b5", "text": "BACKGROUND\nEffective control of (upright) body posture requires a proper representation of body orientation. Stroke patients with pusher syndrome were shown to suffer from severely disturbed perception of own body orientation. They experience their body as oriented 'upright' when actually tilted by nearly 20 degrees to the ipsilesional side. 
Thus, it can be expected that postural control mechanisms are impaired accordingly in these patients. Our aim was to investigate pusher patients' spontaneous postural responses of the non-paretic leg and of the head during passive body tilt.\n\n\nMETHODS\nA sideways tilting motion was applied to the trunk of the subject in the roll plane. Stroke patients with pusher syndrome were compared to stroke patients not showing pushing behaviour, patients with acute unilateral vestibular loss, and non brain damaged subjects.\n\n\nRESULTS\nCompared to all groups without pushing behaviour, the non-paretic leg of the pusher patients showed a constant ipsiversive tilt across the whole tilt range for an amount which was observed in the non-pusher subjects when they were tilted for about 15 degrees into the ipsiversive direction.\n\n\nCONCLUSION\nThe observation that patients with acute unilateral vestibular loss showed no alterations of leg posture indicates that disturbed vestibular afferences alone are not responsible for the disordered leg responses seen in pusher patients. Our results may suggest that in pusher patients a representation of body orientation is disturbed that drives both conscious perception of body orientation and spontaneous postural adjustment of the non-paretic leg in the roll plane. The investigation of the pusher patients' leg-to-trunk orientation thus could serve as an additional bedside tool to detect pusher syndrome in acute stroke patients.", "title": "" }, { "docid": "ae534b0d19b95dcee87f06ed279fc716", "text": "In this paper, comparative study of p type and n type solar cells are described using two popular solar cell analyzing software AFORS HET and PC1D. We use SiNx layer as Antireflection Coating and a passivated layer Al2O3 .The variation of reflection, absorption, I-V characteristics, and internal and external quantum efficiency have been done by changing the thickness of passivated layer and ARC layer, and front and back surface recombination velocities. The same analysis is taken by imposing surface charge at front of n-type solar Cell and we get 20.13%-20.15% conversion efficiency.", "title": "" }, { "docid": "5eb9e759ec8fc9ad63024130f753d136", "text": "A 3-10 GHz broadband CMOS T/R switch for ultra-wideband (UWB) transceiver is presented. The broadband CMOS T/R switch is fabricated based on the 0.18 mu 1P6M standard CMOS process. On-chip measurement of the CMOS T/R switch is performed. The insertion loss of the proposed CMOS T/R Switch is about 3.1plusmn1.3dB. The return losses at both input and output terminals are higher than 14 dB. It is also characterized with 25-34dB isolation and 18-20 dBm input P1dB. The broadband CMOS T/R switch shows highly linear phase and group delay of 20plusmn10 ps from 10MHz to 15GHz. It can be easily integrated with other CMOS RFICs to form on-chip transceivers for various UWB applications", "title": "" }, { "docid": "a1d6ec19be444705fd6c339d501bce10", "text": "The transmission properties of a guide consisting of a dielectric rod of rectangular cross-section surrounded by dielectrics of smaller refractive indices are determined. This guide is the basic component in a new technology called integrated optical circuitry. The directional coupler, a particularly useful device, made of two of those guides closely spaced is also analyzed. 
[The SCI indicates that this paper has been cited over 145 times since 1969.]", "title": "" }, { "docid": "dc473939f83bb4752f11b9ebe37ee474", "text": "With the pervasive use of mobile devices with location sensing and positioning functions, such as Wi-Fi and GPS, people now are able to acquire present locations and collect their movement. As the availability of trajectory data prospers, mining activities hidden in raw trajectories becomes a hot research problem. Given a set of trajectories, prior works either explore density-based approaches to extract regions with high density of GPS data points or utilize time thresholds to identify users’ stay points. However, users may have different activities along with trajectories. Prior works only can extract one kind of activity by specifying thresholds, such as spatial density or temporal time threshold. In this paper, we explore both spatial and temporal relationships among data points of trajectories to extract semantic regions that refer to regions in where users are likely to have some kinds of activities. In order to extract semantic regions, we propose a sequential clustering approach to discover clusters as the semantic regions from individual trajectory according to the spatial-temporal density. Based on semantic region discovery, we develop a shared nearest neighbor (SNN) based clustering algorithm to discover the frequent semantic region where the moving object often stay, which consists of a group of similar semantic regions from multiple trajectories. Experimental results demonstrate that our techniques are more accurate than existing clustering schemes.", "title": "" }, { "docid": "f38854d7c788815d8bc6d20db284e238", "text": "This paper presents the development of a Sinhala Speech Recognition System to be deployed in an Interactive Voice Response (IVR) system of a telecommunication service provider. The main objectives are to recognize Sinhala digits and names of Sinhala songs to be set up as ringback tones. Sinhala being a phonetic language, its features are studied to develop a list of 47 phonemes. A continuous speech recognition system is developed based on Hidden Markov Model (HMM). The acoustic model is trained using the voice through mobile phone. The outcome is a speaker independent speech recognition system which is capable of recognizing 10 digits and 50 Sinhala songs. A word error rate (WER) of 11.2% using a speech corpus of 0.862 hours and a sentence error rate (SER) of 5.7% using a speech corpus of 1.388 hours are achieved for digits and songs respectively.", "title": "" } ]
scidocsrr
b69bc3b38e8c8f61db42d9f80d23e885
Study and analysis of various task scheduling algorithms in the cloud computing environment
[ { "docid": "5d56b018a1f980607d74fd5865784e1b", "text": "In this paper, we present an optimization model for task scheduling for minimizing energy consumption in cloud-computing data centers. The proposed approach was formulated as an integer programming problem to minimize the cloud-computing data center energy consumption by scheduling tasks to a minimum number of servers while satisfying the task response time constraints. We prove that the average task response time and the number of active servers needed to meet such time constraints are bounded through the use of a greedy task-scheduling scheme. In addition, we propose the most-efficient server-first task-scheduling scheme to minimize energy expenditure as a practical scheduling scheme. We model and simulate the proposed scheduling scheme for a data center with heterogeneous tasks. The simulation results show that the proposed task-scheduling scheme reduces server energy consumption on average over 70 times when compared to the energy consumed under a (not-optimized) random-based task-scheduling scheme. We show that energy savings are achieved by minimizing the allocated number of servers.", "title": "" }, { "docid": "c039d0b6b049e3beb1fcea7595d86625", "text": "Cloud computing is known as a provider of dynamic services using very large scalable and virtualized resources over the Internet. Because the field of cloud computing is still new, there are few standard task scheduling algorithms in use in cloud environments. In particular, the high communication cost in the cloud prevents well-known task schedulers from being applied in large-scale distributed environments. Today, researchers attempt to build job scheduling algorithms that are compatible with and applicable to the cloud computing environment. Job scheduling is one of the most important tasks in cloud computing because users have to pay for the resources they use based on time. Efficient utilization of resources is therefore essential, and scheduling plays a vital role in obtaining the maximum benefit from those resources. In this paper we study various scheduling algorithms and the issues related to them in cloud computing.", "title": "" } ]
[ { "docid": "af0bfcd39271d2c6b5734c9665f758e6", "text": "The architecture of the subterranean nests of the ant Odontomachus brunneus (Patton) (Hymenoptera: Formicidae) was studied by means of casts with dental plaster or molten metal. The entombed ants were later recovered by dissolution of plaster casts in hot running water. O. brunneus excavates simple nests, each consisting of a single, vertical shaft connecting more or less horizontal, simple chambers. Nests contained between 11 and 177 workers, from 2 to 17 chambers, and 28 to 340 cm(2) of chamber floor space and reached a maximum depth of 18 to 184 cm. All components of nest size increased simultaneously during nest enlargement, number of chambers, mean chamber size, and nest depth, making the nest shape (proportions) relatively size-independent. Regardless of nest size, all nests had approximately 2 cm(2) of chamber floor space per worker. Chambers were closer together near the top and the bottom of the nest than in the middle, and total chamber area was greater near the bottom. Colonies occasionally incorporated cavities made by other animals into their nests.", "title": "" }, { "docid": "f162f44a0a8d6e5251c731cd5259afcf", "text": "This paper proposes a method for controlling a Robotic arm using an application build in the android platform. The android phone and raspberry piboard is connected through Wi-Fi. As the name suggests the robotic arm is designed as it performs the same activity as a human hand works. A signal is generated from the android app which will be received by the raspberry pi board and the robotic arm works according to the predefined program. The android application is the command centre of the robotic arm. The program is written in the python language in the raspberry board. the different data will control the arm rotation.", "title": "" }, { "docid": "8d9f65aadba86c29cb19cd9e6eecec5a", "text": "To achieve privacy requirements, IoT application providers may need to spend a lot of money to replace existing IoT devices. To address this problem, this study proposes the Blockchain Connected Gateways (BC Gateways) to protect users from providing personal data to IoT devices without user consent. In addition, the gateways store user privacy preferences on IoT devices in the blockchain network. Therefore, this study can utilize the blockchain technology to resolve the disputes of privacy issues. In conclusion, this paper can contribute to improving user privacy and trust in IoT applications with legacy IoT devices.", "title": "" }, { "docid": "b039a40e0822408cf86b4ae3a356519a", "text": "Sortagging is a versatile method for site-specific modification of proteins as applied to a variety of in vitro reactions. Here, we explore possibilities of adapting the sortase method for use in living cells. For intracellular sortagging, we employ the Ca²⁺-independent sortase A transpeptidase (SrtA) from Streptococcus pyogenes. Substrate proteins were equipped with the C-terminal sortase-recognition motif (LPXTG); we used proteins with an N-terminal (oligo)glycine as nucleophiles. We show that sortase-dependent protein ligation can be achieved in Saccharomyces cerevisiae and in mammalian HEK293T cells, both in the cytosol and in the lumen of the endoplasmic reticulum (ER). ER luminal sortagging enables secretion of the reaction products, among which circular polypeptides. Protein ligation of substrate and nucleophile occurs within 30 min of translation. 
The versatility of the method is shown by protein ligation of multiple substrates with green fluorescent protein-based nucleophiles in different intracellular compartments.", "title": "" }, { "docid": "e62daef8b5273096e0f174c73e3674a8", "text": "A wide range of human-robot collaborative applications in diverse domains such as manufacturing, search-andrescue, health care, the entertainment industry, and social interactions, require an autonomous robot to follow its human companion. Different working environments and applications pose diverse challenges by adding constraints on the choice of sensors, the degree of autonomy, and dynamics of the person-following robot. Researchers have addressed these challenges in many ways and contributed to the development of a large body of literature. This paper provides a comprehensive overview of the literature by categorizing different aspects of person-following by autonomous robots. Also, the corresponding operational challenges are identified based on various design choices for ground, underwater, and aerial scenarios. In addition, state-of-the-art methods for perception, planning, control, and interaction are elaborately discussed and their applicability in varied operational scenarios are presented. Then, qualitative evaluations of some of the prominent methods are performed, corresponding practicalities are illustrated, and their feasibility is analyzed in terms of standard metrics. Furthermore, several prospective application areas are identified, and open problems are highlighted for future research.", "title": "" }, { "docid": "e43cb8fefc7735aeab0fa40ad44a2e15", "text": "Support vector machine (SVM) is an optimal margin based classification technique in machine learning. SVM is a binary linear classifier which has been extended to non-linear data using Kernels and multi-class data using various techniques like one-versus-one, one-versus-rest, Crammer Singer SVM, Weston Watkins SVM and directed acyclic graph SVM (DAGSVM) etc. SVM with a linear Kernel is called linear SVM and one with a non-linear Kernel is called non-linear SVM. Linear SVM is an efficient technique for high dimensional data applications like document classification, word-sense disambiguation, drug design etc. because under such data applications, test accuracy of linear SVM is closer to non-linear SVM while its training is much faster than non-linear SVM. SVM is continuously evolving since its inception and researchers have proposed many problem formulations, solvers and strategies for solving SVM. Moreover, due to advancements in the technology, data has taken the form of ‘Big Data’ which have posed a challenge for Machine Learning to train a classifier on this large-scale data. In this paper, we have presented a review on evolution of linear support vector machine classification, its solvers, strategies to improve solvers, experimental results, current challenges and research directions.", "title": "" }, { "docid": "ff952443eef41fb430ff2831b5ee33d5", "text": "The increasing activity in the Intelligent Transportation Systems (ITS) area faces a strong limitation: the slow pace at which the automotive industry is making cars \"smarter\". On the contrary, the smartphone industry is advancing quickly. Existing smartphones are endowed with multiple wireless interfaces and high computational power, being able to perform a wide variety of tasks. 
By combining smartphones with existing vehicles through an appropriate interface we are able to move closer to the smart vehicle paradigm, offering the user new functionalities and services when driving. In this paper we propose an Android-based application that monitors the vehicle through an On Board Diagnostics (OBD-II) interface, being able to detect accidents. Our proposed application estimates the G force experienced by the passengers in case of a frontal collision, which is used together with airbag triggers to detect accidents. The application reacts to positive detection by sending details about the accident through either e-mail or SMS to pre-defined destinations, immediately followed by an automatic phone call to the emergency services. Experimental results using a real vehicle show that the application is able to react to accident events in less than 3 seconds, a very low time, validating the feasibility of smartphone based solutions for improving safety on the road.", "title": "" }, { "docid": "491d98644c62c6b601657e235cb48307", "text": "The purpose of this study was to investigate the use of three-dimensional display formats for judgments of spatial information using an exocentric frame of reference. Eight subjects judged the azimuth and elevation that separated two computer-generated objects using either a perspective or stereoscopic display. Errors, which consisted of the difference in absolute value between the estimated and actual azimuth or elevation, were analyzed as the response variable. The data indicated that the stereoscopic display resulted in more accurate estimates of elevation, especially for images aligned approximately orthogonally to the viewing vector. However, estimates of relative azimuth direction were not improved by use of the stereoscopic display. Furthermore, it was shown that the effect of compression resulting from a 45-deg computer graphics eye point elevation produced a response bias that was symmetrical around the horizontal plane of the reference cube, and that the depth cue of binocular disparity provided by the stereoscopic display reduced the magnitude of the compression errors. Implications of the results for the design of spatial displays are discussed.", "title": "" }, { "docid": "2c87f9ef35795c89de6b60e1ceff18c8", "text": "The paper presents a fusion-tracker and pedestrian classifier for color and thermal cameras. The tracker builds a background model as a multi-modal distribution of colors and temperatures. It is constructed as a particle filter that makes a number of informed reversible transformations to sample the model probability space in order to maximize posterior probability of the scene model. Observation likelihoods of moving objects account their 3D locations with respect to the camera and occlusions by other tracked objects as well as static obstacles. After capturing the coordinates and dimensions of moving objects we apply a pedestrian classifier based on periodic gait analysis. To separate humans from other moving objects, such as cars, we detect, in human gait, a symmetrical double helical pattern, that can then be analyzed using the Frieze Group theory. 
The results of tracking on color and thermal sequences demonstrate that our algorithm is robust to illumination noise and performs well in the outdoor environments.", "title": "" }, { "docid": "641bc7bfd28f3df41dd0eaef0543832a", "text": "Monitoring parameters characterizing water quality, such as temperature, pH, and concentrations of heavy metals in natural waters, is often followed by transmitting the data to remote receivers using telemetry systems. Such systems are commonly powered by batteries, which can be inconvenient at times because batteries have a limited lifetime and must be recharged or replaced periodically to ensure that sufficient energy is available to power the electronics. To avoid these inconveniences, a microbial fuel cell was designed to power electrochemical sensors and small telemetry systems to transmit the data acquired by the sensors to remote receivers. The microbial fuel cell was combined with low-power, high-efficiency electronic circuitry providing a stable power source for wireless data transmission. To generate enough power for the telemetry system, energy produced by the microbial fuel cell was stored in a capacitor and used in short bursts when needed. Since commercial electronic circuits require a minimum 3.3 V input and our cell was able to deliver a maximum of 2.1 V, a DC-DC converter was used to boost the potential. The DC-DC converter powered a transmitter, which gathered the data from the sensor and transmitted it wirelessly to a remote receiver. To demonstrate the utility of the system, temporal variations in temperature were measured, and the data were wirelessly transmitted to a remote receiver.", "title": "" }, { "docid": "92628edcee9908713607a0dd36591194", "text": "OBJECTIVE\nTo describe the methodology utilized to calculate reliability and the generation of norms for 10 neuropsychological tests for children in Spanish-speaking countries.\n\n\nMETHOD\nThe study sample consisted of over 4,373 healthy children from nine countries in Latin America (Chile, Cuba, Ecuador, Guatemala, Honduras, Mexico, Paraguay, Peru, and Puerto Rico) and Spain. Inclusion criteria for all countries were to have between 6 to 17 years of age, an Intelligence Quotient of≥80 on the Test of Non-Verbal Intelligence (TONI-2), and score of <19 on the Children's Depression Inventory. Participants completed 10 neuropsychological tests. Reliability and norms were calculated for all tests.\n\n\nRESULTS\nTest-retest analysis showed excellent or good- reliability on all tests (r's>0.55; p's<0.001) except M-WCST perseverative errors whose coefficient magnitude was fair. All scores were normed using multiple linear regressions and standard deviations of residual values. Age, age2, sex, and mean level of parental education (MLPE) were included as predictors in the models by country. The non-significant variables (p > 0.05) were removed and the analysis were run again.\n\n\nCONCLUSIONS\nThis is the largest Spanish-speaking children and adolescents normative study in the world. For the generation of normative data, the method based on linear regression models and the standard deviation of residual values was used. 
This method allows determination of the specific variables that predict test scores, helps identify and control for collinearity of predictive variables, and generates continuous and more reliable norms than those of traditional methods.", "title": "" }, { "docid": "473eebca6dccf4e242c87bbabfd4b8a5", "text": "Text analytics systems often rely heavily on detecting and linking entity mentions in documents to knowledge bases for downstream applications such as sentiment analysis, question answering and recommender systems. A major challenge for this task is to be able to accurately detect entities in new languages with limited labeled resources. In this paper we present an accurate and lightweight, multilingual named entity recognition (NER) and linking (NEL) system. The contributions of this paper are three-fold: 1) Lightweight named entity recognition with competitive accuracy; 2) Candidate entity retrieval that uses search click-log data and entity embeddings to achieve high precision with a low memory footprint; and 3) efficient entity disambiguation. Our system achieves state-of-the-art performance on TAC KBP 2013 multilingual data and on English AIDA CONLL data.", "title": "" }, { "docid": "a50151963608bccdcb53b3f390db6918", "text": "In order to obtain more value added products, a product quality control is essentially required Many studies show that quality of agriculture products may be reduced from many causes. One of the most important factors of such quality plant diseases. Consequently, minimizing plant diseases allows substantially improving quality of the product Suitable diagnosis of crop disease in the field is very critical for the increased production. Foliar is the major important fungal disease of cotton and occurs in all growing Indian cotton regions. In this paper I express Technological Strategies uses mobile captured symptoms of Cotton Leaf Spot images and categorize the diseases using support vector machine. The classifier is being trained to achieve intelligent farming, including early detection of disease in the groves, selective fungicide application, etc. This proposed work is based on Segmentation techniques in which, the captured images are processed for enrichment first. Then texture and color Feature extraction techniques are used to extract features such as boundary, shape, color and texture for the disease spots to recognize diseases.", "title": "" }, { "docid": "29eebb40973bdfac9d1f1941d4c7c889", "text": "This paper explains a procedure for getting models of robot kinematics and dynamics that are appropriate for robot control design. The procedure consists of the following steps: 1) derivation of robot kinematic and dynamic models and establishing correctness of their structures; 2) experimental estimation of the model parameters; 3) model validation; and 4) identification of the remaining robot dynamics, not covered with the derived model. We give particular attention to the design of identification experiments and to online reconstruction of state coordinates, as these strongly influence the quality of the estimation process. The importance of correct friction modeling and the estimation of friction parameters are illuminated. The models of robot kinematics and dynamics can be used in model-based nonlinear control. The remaining dynamics cannot be ignored if high-performance robot operation with adequate robustness is required. 
The complete procedure is demonstrated for a direct-drive robotic arm with three rotational joints.", "title": "" }, { "docid": "6f5afc38b09fa4fd1e47d323cfe850c9", "text": "In the past several years there has been extensive research into honeypot technologies, primarily for detection and information gathering against external threats. However, little research has been done for one of the most dangerous threats, the advance insider, the trusted individual who knows your internal organization. These individuals are not after your systems, they are after your information. This presentation discusses how honeypot technologies can be used to detect, identify, and gather information on these specific threats.", "title": "" }, { "docid": "b317f33d159bddce908df4aa9ba82cf9", "text": "Point cloud source data for surface reconstruction is usually contaminated with noise and outliers. To overcome this deficiency, a density-based point cloud denoising method is presented to remove outliers and noisy points. First, particle-swam optimization technique is employed for automatically approximating optimal bandwidth of multivariate kernel density estimation to ensure the robust performance of density estimation. Then, mean-shift based clustering technique is used to remove outliers through a thresholding scheme. After removing outliers from the point cloud, bilateral mesh filtering is applied to smooth the remaining points. The experimental results show that this approach, comparably, is robust and efficient.", "title": "" }, { "docid": "72e9ed1d81f8dfce9492f5bb30fc91a1", "text": "A key component to the success of deep learning is the availability of massive amounts of training data. Building and annotating large datasets for solving medical image classification problems is today a bottleneck for many applications. Recently, capsule networks were proposed to deal with shortcomings of Convolutional Neural Networks (ConvNets). In this work, we compare the behavior of capsule networks against ConvNets under typical datasets constraints of medical image analysis, namely, small amounts of annotated data and class-imbalance. We evaluate our experiments on MNIST, Fashion-MNIST and medical (histological and retina images) publicly available datasets. Our results suggest that capsule networks can be trained with less amount of data for the same or better performance and are more robust to an imbalanced class distribution, which makes our approach very promising for the medical imaging community.", "title": "" }, { "docid": "291b8dc672341fbc286e89eefc46a1b1", "text": "We present an introduction to and a tutorial on the properties of the recently discovered ideal circuit element, a memristor. By definition, a memristor M relates the charge q and the magnetic flux φ in a circuit and complements a resistor R, a capacitor C and an inductor L as an ingredient of ideal electrical circuits. The properties of these three elements and their circuits are a part of the standard curricula. The existence of the memristor as the fourth ideal circuit element was predicted in 1971 based on symmetry arguments, but was clearly experimentally demonstrated just last year. We present the properties of a single memristor, memristors in series and parallel, as well as ideal memristor–capacitor (MC), memristor–inductor (ML) and memristor– capacitor–inductor (MCL) circuits. We find that the memristor has hysteretic current–voltage characteristics. 
We show that the ideal MC (ML) circuit undergoes non-exponential charge (current) decay with two time scales and that by switching the polarity of the capacitor, an ideal MCL circuit can be tuned from overdamped to underdamped. We present simple models which show that these unusual properties are closely related to the memristor’s internal dynamics. This tutorial complements the pedagogy of ideal circuit elements (R,C and L) and the properties of their circuits, and is aimed at undergraduate physics and electrical engineering students. (Some figures in this article are in colour only in the electronic version)", "title": "" }, { "docid": "f7c427f1bf94aa37c726a40254e9638c", "text": "Document classification for text, images and other applicable entities has long been a focus of research in academia and also finds application in many industrial settings. Amidst a plethora of approaches to solve such problems, machine-learning techniques have found success in a variety of scenarios. In this paper we discuss the design of a machine learning-based semi-supervised job title classification system for the online job recruitment domain currently in production at CareerBuilder.com and propose enhancements to it. The system leverages a varied collection of classification as well clustering algorithms. These algorithms are encompassed in an architecture that facilitates leveraging existing off-the-shelf machine learning tools and techniques while keeping into consideration the challenges of constructing a scalable classification system for a large taxonomy of categories. As a continuously evolving system that is still under development we first discuss the existing semi-supervised classification system which is composed of both clustering and classification components in a proximity-based classifier setup and results of which are already used across numerous products at CareerBuilder. We then elucidate our long-term goals for job title classification and propose enhancements to the existing system in the form of a two-stage coarse and fine level classifier augmentation to construct a cascade of hierarchical vertical classifiers. Preliminary results are presented using experimental evaluation on real world industrial data.", "title": "" }, { "docid": "8cda36e81db2bce7f9b648a20c0a55a5", "text": "Scalable and effective analysis of large text corpora remains a challenging problem as our ability to collect textual data continues to increase at an exponential rate. To help users make sense of large text corpora, we present a novel visual analytics system, Parallel-Topics, which integrates a state-of-the-art probabilistic topic model Latent Dirichlet Allocation (LDA) with interactive visualization. To describe a corpus of documents, ParallelTopics first extracts a set of semantically meaningful topics using LDA. Unlike most traditional clustering techniques in which a document is assigned to a specific cluster, the LDA model accounts for different topical aspects of each individual document. This permits effective full text analysis of larger documents that may contain multiple topics. To highlight this property of the model, ParallelTopics utilizes the parallel coordinate metaphor to present the probabilistic distribution of a document across topics. Such representation allows the users to discover single-topic vs. multi-topic documents and the relative importance of each topic to a document of interest. 
In addition, since most text corpora are inherently temporal, ParallelTopics also depicts the topic evolution over time. We have applied ParallelTopics to exploring and analyzing several text corpora, including the scientific proposals awarded by the National Science Foundation and the publications in the VAST community over the years. To demonstrate the efficacy of ParallelTopics, we conducted several expert evaluations, the results of which are reported in this paper.", "title": "" } ]
scidocsrr
00511163313974cf801a2e7e11333717
Channel coordination in green supply chain management
[ { "docid": "3bbbce07c492a3e870df4f71a7f42b5c", "text": "The supply chain has been traditionally defined as a one-way, integrated manufacturing process wherein raw materials are converted into final products, then delivered to customers. Under this definition, the supply chain includes only those activities associated with manufacturing, from raw material acquisition to final product delivery. However, due to recent changing environmental requirements affecting manufacturing operations, increasing attention is given to developing environmental management (EM) strategies for the supply chain. This research: (1) investigates the environmental factors leading to the development of an extended environmental supply chain, (2) describes the elemental differences between the extended supply chain and the traditional supply chain, (3) describes the additional challenges presented by the extension, (4) presents performance measures appropriate for the extended supply chain, and (5) develops a general procedure towards achieving and maintaining the green supply chain.", "title": "" } ]
[ { "docid": "ac529a455bcefa58abafa6c679bec2b4", "text": "This article presents near-optimal guarantees for stable and robust image recovery from undersampled noisy measurements using total variation minimization. In particular, we show that from O(s log(N)) nonadaptive linear measurements, an image can be reconstructed to within the best s-term approximation of its gradient up to a logarithmic factor, and this factor can be removed by taking slightly more measurements. Along the way, we prove a strengthened Sobolev inequality for functions lying in the null space of suitably incoherent matrices.", "title": "" }, { "docid": "0209132c7623c540c125a222552f33ac", "text": "This paper reviews the criticism on the 4Ps Marketing Mix framework, the most popular tool of traditional marketing management, and categorizes the main objections of using the model as the foundation of physical marketing. It argues that applying the traditional approach, based on the 4Ps paradigm, is also a poor choice in the case of virtual marketing and identifies two main limitations of the framework in online environments: the drastically diminished role of the Ps and the lack of any strategic elements in the model. Next to identifying the critical factors of the Web marketing, the paper argues that the basis for successful E-Commerce is the full integration of the virtual activities into the company’s physical strategy, marketing plan and organisational processes. The four S elements of the Web-Marketing Mix framework present a sound and functional conceptual basis for designing, developing and commercialising Business-to-Consumer online projects. The model was originally developed for educational purposes and has been tested and refined by means of field projects; two of them are presented as case studies in the paper.  2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "72173ef38d5fd62f73de467e722f970e", "text": "This study uses data collected from adult U.S. residents in 2004 and 2005 to examine whether loneliness and life satisfaction are associated with time spent at home on various Internet activities. Cross-sectional models reveal that time spent browsing the web is positively related to loneliness and negatively related to life satisfaction. Some of the relationships revealed by cross-sectional models persist even when considering the same individuals over time in fixed-effects models that account for time-invariant, individual-level characteristics. Our results vary according to how the time use data were collected, indicating that survey design can have important consequences for research in this area. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "dcc0237d174b6d41d4a4bcd4e00d172e", "text": "Meander line antenna (MLA) is an electrically small antenna which poses several performance related issues such as narrow bandwidth, high VSWR, low gain and high cross polarization levels. This paper describe the design ,simulation and development of meander line microstrip antenna at wireless band, the antenna was modeled using microstrip lines and S parameter for the antenna was obtained. The properties of the antenna such as bandwidth, beamwidth, gain, directivity, return loss and polarization were obtained.", "title": "" }, { "docid": "66432ab91b459c3de8e867c8214029d8", "text": "Distributional hypothesis lies in the root of most existing word representation models by inferring word meaning from its external contexts. 
However, distributional models cannot handle rare and morphologically complex words very well and fail to identify some finegrained linguistic regularity as they are ignoring the word forms. On the contrary, morphology points out that words are built from some basic units, i.e., morphemes. Therefore, the meaning and function of such rare words can be inferred from the words sharing the same morphemes, and many syntactic relations can be directly identified based on the word forms. However, the limitation of morphology is that it cannot infer the relationship between two words that do not share any morphemes. Considering the advantages and limitations of both approaches, we propose two novel models to build better word representations by modeling both external contexts and internal morphemes in a jointly predictive way, called BEING and SEING. These two models can also be extended to learn phrase representations according to the distributed morphology theory. We evaluate the proposed models on similarity tasks and analogy tasks. The results demonstrate that the proposed models can outperform state-of-the-art models significantly on both word and phrase representation learning.", "title": "" }, { "docid": "d8cc9c70034b484a066d1dc74724eaab", "text": "An enhanced but simple triple band circular ring patch antenna with a new slotting technique is presented, which is most suitable for X-band, Ku-band and K-band applications. This compact micro strip antenna is obtained by inserting small rectangular strip in a circular ring patch antenna. The antenna has been designed and simulated on an FR4 substrate with dielectric constant of 4.4 and thickness of 2mm. The design is analysed by Finite Element Method based HFSS Simulator Software (version 14.0), The simulated return losses obtained are -35.80dB, -42.39dB, and -44.98dB at 8.96 GHz, 14.44 GHz, 18.97 GHz respectively. Therefore, this antenna can be applicable for X-band, Ku-band and K-band applications respectively.", "title": "" }, { "docid": "cfec098f84e157a2e12f0ff40551c977", "text": "In this paper, an online news recommender system for the popular social network, Facebook, is described. This system provides daily newsletters for communities on Facebook. The system fetches the news articles and filters them based on the community description to prepare the daily news digest. Explicit survey feedback from the users show that most users found the application useful and easy to use. They also indicated that they could get some community specific articles that they would not have got otherwise.", "title": "" }, { "docid": "3f1161fa81b19a15b0d4ff882b99b60a", "text": "INTRODUCTION\nDupilumab is a fully human IgG4 monoclonal antibody directed against the α subunit of the interleukin (IL)-4 receptor (IL-4Rα). Since the activation of IL-4Rα is utilized by both IL-4 and IL-13 to mediate their pathophysiological effects, dupilumab behaves as a dual antagonist of these two sister cytokines, which blocks IL-4/IL-13-dependent signal transduction. Areas covered: Herein, the authors review the cellular and molecular pathways activated by IL-4 and IL-13, which are relevant to asthma pathobiology. They also review: the mechanism of action of dupilumab, the phase I, II and III studies evaluating the pharmacokinetics as well as the safety, tolerability and clinical efficacy of dupilumab in asthma therapy. 
Expert opinion: Supported by a strategic mechanism of action, as well as by convincing preliminary clinical results, dupilumab currently appears to be a very promising biological drug for the treatment of severe uncontrolled asthma. It also may have benefits to comorbidities of asthma including atopic dermatitis, chronic sinusitis and nasal polyposis.", "title": "" }, { "docid": "bad0f688ae12916688e8a3a8d96a5565", "text": "This paper presents a method for creating coherently animated line drawings that include strong abstraction and stylization effects. These effects are achieved with active strokes: 2D contours that approximate and track the lines of an animated 3D scene. Active strokes perform two functions: they connect and smooth unorganized line samples, and they carry coherent parameterization to support stylized rendering. Line samples are approximated and tracked using active contours (\"snakes\") that automatically update their arrangment and topology to match the animation. Parameterization is maintained by brush paths that follow the snakes but are independent, permitting substantial shape abstraction without compromising fidelity in tracking. This approach renders complex models in a wide range of styles at interactive rates, making it suitable for applications like games and interactive illustrations.", "title": "" }, { "docid": "67476959e7b75e52b4e33776b8a10bb9", "text": "The volume of energy loss that Brazilian electrical utilities have to deal with has been ever increasing. Electricity distribution companies have suffered significant and increasing losses in the last years, due to theft, measurement errors and other irregularities. Therefore there is a great concern to identify the profile of irregular customers, in order to reduce the volume of such losses. This paper presents a combined approach of a neural networks committee and a neuro-fuzzy hierarchical system intended to increase the level of accuracy in the identification of irregularities among low voltage consumers. The data used to test the proposed system are from Light S.A., the distribution company of Rio de Janeiro. The results obtained presented a significant increase in the identification of irregular customers when compared to the current methodology employed by the company. Keywords— neural nets, hierarchical neuro-fuzzy systems, binary space partition, electricity distribution, fraud detection.", "title": "" }, { "docid": "101af3fab1f8abb4e2b75a067031048a", "text": "Although research on trust in an organizational context has advanced considerably in recent years, the literature has yet to produce a set of generalizable propositions that inform our understanding of the organization and coordination of work. We propose that conceptualizing trust as an organizing principle is a powerful way of integrating the diverse trust literature and distilling generalizable implications for how trust affects organizing. We develop the notion of trust as an organizing principle by specifying structuring and mobilizing as two sets of causal pathways through which trust influences several important properties of organizations. We further describe specific mechanisms within structuring and mobilizing that influence interaction patterns and organizational processes. The principal aim of the framework is to advance the literature by connecting the psychological and sociological micro-foundations of trust with the macro-bases of organizing. 
The paper concludes by demonstrating how the framework can be applied to yield novel insights into traditional views of organizations and to stimulate original and innovative avenues of organizational research that consider both the benefits and downsides of trust. (Trust; Organizing Principle; Structuring; Mobilizing) Introduction In the introduction to this special issue we observed that empirical research on trust was not keeping pace with theoretical developments in the field. We viewed this as a significant limitation and surmised that a special issue devoted to empirical research on trust would serve as a valuable vehicle for advancing the literature. In addition to the lack of empirical research, we would also make the observation that theories and evidence accumulating on trust in organizations is not well integrated and that the literature as a whole lacks coherence. At a general level, extant research provides “accumulating evidence that trust has a number of important benefits for organizations and their members” (Kramer 1999, p. 569). More specifically, Dirks and Ferrin’s (2001) review of the literature points to two distinct means through which trust generates these benefits. The dominant approach emphasizes the direct effects that trust has on important organizational phenomena such as: communication, conflict management, negotiation processes, satisfaction, and performance (both individual and unit). A second, less well studied, perspective points to the enabling effects of trust, whereby trust creates or enhances the conditions, such as positive interpretations of another’s behavior, that are conducive to obtaining organizational outcomes like cooperation and higher performance. The identification of these two perspectives provides a useful way of organizing the literature and generating insight into the mechanisms through which trust influences organizational outcomes. However, we are still left with a set of findings that have yet to be integrated on a theoretical level in a way that yields a set of generalizable propositions about the effects of trust on organizing. We believe this is due to the fact that research has, for the most part, embedded trust into existing theories. As a result, trust has been studied in a variety of different ways to address a wide range of organizational questions. This has yielded a diverse and eclectic body of knowledge about the relationship between trust and various organizational outcomes. At the same time, this approach has resulted in a somewhat fragmented view of the role of trust in an organizational context as a whole. In the remainder of this paper we begin to address the challenge of integrating the fragmented trust literature. While it is not feasible to develop a comprehensive framework that synthesizes the vast and diverse trust literature in a single paper, we draw together several key strands that relate to the organizational context. In particular, our paper aims to advance the literature by connecting the psychological and sociological microfoundations of trust with the macro-bases of organizing. BILL MCEVILY, VINCENZO PERRONE, AND AKBAR ZAHEER Trust as an Organizing Principle 92 ORGANIZATION SCIENCE/Vol. 14, No. 1, January–February 2003 Specifically, we propose that reconceptualizing trust as an organizing principle is a fruitful way of viewing the role of trust and comprehending how research on trust advances our understanding of the organization and coordination of economic activity. 
While it is our goal to generate a framework that coalesces our thinking about the processes through which trust, as an organizing principle, affects organizational life, we are not Pollyannish: trust indubitably has a down side, which has been little researched. We begin by elaborating on the notion of an organizing principle and then move on to conceptualize trust from this perspective. Next, we describe a set of generalizable causal pathways through which trust affects organizing. We then use that framework to identify some exemplars of possible research questions and to point to possible downsides of trust. Organizing Principles As Ouchi (1980) discusses, a fundamental purpose of organizations is to attain goals that require coordinated efforts. Interdependence and uncertainty make goal attainment more difficult and create the need for organizational solutions. The subdivision of work implies that actors must exchange information and rely on others to accomplish organizational goals without having complete control over, or being able to fully monitor, others’ behaviors. Coordinating actions is further complicated by the fact that actors cannot assume that their interests and goals are perfectly aligned. Consequently, relying on others is difficult when there is uncertainty about their intentions, motives, and competencies. Managing interdependence among individuals, units, and activities in the face of behavioral uncertainty constitutes a key organizational challenge. Organizing principles represent a way of solving the problem of interdependence and uncertainty. An organizing principle is the logic by which work is coordinated and information is gathered, disseminated, and processed within and between organizations (Zander and Kogut 1995). An organizing principle represents a heuristic for how actors interpret and represent information and how they select appropriate behaviors and routines for coordinating actions. Examples of organizing principles include: market, hierarchy, and clan (Ouchi 1980). Other have referred to these organizing principles as authority, price, and norms (Adler 2001, Bradach and Eccles 1989, Powell 1990). Each of these principles operates on the basis of distinct mechanisms that orient, enable, and constrain economic behavior. For instance, authority as an organizing principle solves the problem of coordinating action in the face of interdependence and uncertainty by reallocating decision-making rights (Simon 1957, Coleman 1990). Price-based organizing principles revolve around the idea of making coordination advantageous for each party involved by aligning incentives (Hayek 1948, Alchian and Demsetz 1972). Compliance to internalized norms and the resulting self-control of the clan form is another organizing principle that has been identified as a means of achieving coordinated action (Ouchi 1980). We propose that trust is also an organizing principle and that conceptualizing trust in this way provides a powerful means of integrating the disparate research on trust and distilling generalizable implications for how trust affects organizing. We view trust as most closely related to the clan organizing principle. By definition clans rely on trust (Ouchi 1980). However, trust can and does occur in organizational contexts outside of clans. 
For instance, there are a variety of organizational arrangements where cooperation in mixed-motive situations depends on trust, such as in repeated strategic alliances (Gulati 1995), buyer-supplier relationships (Dyer and Chu this issue), and temporary groups in organizations (Meyerson et al. 1996). More generally, we believe that trust frequently operates in conjunction with other organizing principles. For instance, Dirks (2000) found that while authority is important for behaviors that can be observed or controlled, trust is important when there exists performance ambiguity or behaviors that cannot be observed or controlled. Because most organizations have a combination of behaviors that can and cannot be observed or controlled, authority and trust co-occur. More generally, we believe that mixed or plural forms are the norm, consistent with Bradach and Eccles (1989). In some situations, however, trust may be the primary organizing principle, such as when monitoring and formal controls are difficult and costly to use. In these cases, trust represents an efficient choice. In other situations, trust may be relied upon due to social, rather than efficiency, considerations. For instance, achieving a sense of personal belonging within a collectivity (Podolny and Barron 1997) and the desire to develop and maintain rewarding social attachments (Granovetter 1985) may serve as the impetus for relying on trust as an organizing principle. Trust as an Organizing Principle At a general level trust is the willingness to accept vulnerability based on positive expectations about another’s intentions or behaviors (Mayer et al. 1995, Rousseau et al. 1998). Because trust represents a positive assumption BILL MCEVILY, VINCENZO PERRONE, AND AKBAR ZAHEER Trust as an Organizing Principle ORGANIZATION SCIENCE/Vol. 14, No. 1, January–February 2003 93 about the motives and intentions of another party, it allows people to economize on information processing and safeguarding behaviors. By representing an expectation that others will act in a way that serves, or at least is not inimical to, one’s interests (Gambetta 1988), trust as a heuristic is a frame of reference that al", "title": "" }, { "docid": "ff2b53e0cecb849d1cbb503300f1ab9a", "text": "Receiving rapid, accurate and comprehensive knowledge about the conditions of damaged buildings after earthquake strike and other natural hazards is the basis of many related activities such as rescue, relief and reconstruction. Recently, commercial high-resolution satellite imagery such as IKONOS and QuickBird is becoming more powerful data resource for disaster management. In this paper, a method for automatic detection and classification of damaged buildings using integration of high-resolution satellite imageries and vector map is proposed. In this method, after extracting buildings position from vector map, they are located in the pre-event and post-event satellite images. By measuring and comparing different textural features for extracted buildings in both images, buildings conditions are evaluated through a Fuzzy Inference System. Overall classification accuracy of 74% and kappa coefficient of 0.63 were acquired. 
Results of the proposed method, indicates the capability of this method for automatic determination of damaged buildings from high-resolution satellite imageries.", "title": "" }, { "docid": "72c164c281e98386a054a25677c21065", "text": "The rapid digitalisation of the hospitality industry over recent years has brought forth many new points of attack for consideration. The hasty implementation of these systems has created a reality in which businesses are using the technical solutions, but employees have very little awareness when it comes to the threats and implications that they might present. This gap in awareness is further compounded by the existence of preestablished, often rigid, cultures that drive how hospitality businesses operate. Potential attackers are recognising this and the last two years have seen a huge increase in cyber-attacks within the sector.Attempts at addressing the increasing threats have taken the form of technical solutions such as encryption, access control, CCTV, etc. However, a high majority of security breaches can be directly attributed to human error. It is therefore necessary that measures for addressing the rising trend of cyber-attacks go beyond just providing technical solutions and make provision for educating employees about how to address the human elements of security. Inculcating security awareness amongst hospitality employees will provide a foundation upon which a culture of security can be created to promote the seamless and secured interaction of hotel users and technology.One way that the hospitality industry has tried to solve the awareness issue is through their current paper-based training. This is unengaging, expensive and presents limited ways to deploy, monitor and evaluate the impact and effectiveness of the content. This leads to cycles of constant training, making it very hard to initiate awareness, particularly within those on minimum waged, short-term job roles.This paper presents a structured approach for eliciting industry requirement for developing and implementing an immersive Cyber Security Awareness learning platform. It used a series of over 40 interviews and threat analysis of the hospitality industry to identify the requirements for designing and implementing cyber security program which encourage engagement through a cycle of reward and recognition. In particular, the need for the use of gamification elements to provide an engaging but gentle way of educating those with little or no desire to learn was identified and implemented. Also presented is a method for guiding and monitoring the impact of their employee’s progress through the learning management system whilst monitoring the levels of engagement and positive impact the training is having on the business.", "title": "" }, { "docid": "d781c28e343d63babafb0fd1353ae62c", "text": "The present study evaluated the personality characteristics and psychopathology of internet sex offenders (ISOs) using the Minnesota Multiphasic Personality Inventory, Second Edition (MMPI-2) to determine whether ISO personality profiles are different to those of general sex offenders (GSOs; e.g. child molesters and rapists). The ISOs consisted of 48 convicted males referred to a private sex offender treatment facility for a psychosexual risk assessment. The GSOs consisted of 104 incarcerated non-internet or general sex offenders. Findings indicated that ISOs scored significantly lower on the following scales: L, F, Pd and Sc. 
A comparison of the MMPI-2 scores of the ISO and GSO groups indicated that ISOs are a heterogeneous group with considerable withingroup differences. Current findings are consistent with the existing literature on the limited utility of the MMPI-2 in differentiating between subtypes of sex offenders.", "title": "" }, { "docid": "04bc7757006176cd1307874d19b11dc6", "text": "AIMS\nCompare vaginal resting pressure (VRP), pelvic floor muscle (PFM) strength, and endurance in women with and without diastasis recti abdominis at gestational week 21 and at 6 weeks, 6 months, and 12 months postpartum. Furthermore, to compare prevalence of urinary incontinence (UI) and pelvic organ prolapse (POP) in the two groups at the same assessment points.\n\n\nMETHODS\nThis is a prospective cohort study following 300 nulliparous pregnant women giving birth at a public university hospital. VRP, PFM strength, and endurance were measured with vaginal manometry. ICIQ-UI-SF questionnaire and POP-Q were used to assess UI and POP. Diastasis recti abdominis was diagnosed with palpation of  ≥2 fingerbreadths 4.5 cm above, at, or 4.5 cm below the umbilicus.\n\n\nRESULTS\nAt gestational week 21 women with diastasis recti abdominis had statistically significant greater VRP (mean difference 3.06 cm H2 O [95%CI: 0.70; 5.42]), PFM strength (mean difference 5.09 cm H2 O [95%CI: 0.76; 9.42]) and PFM muscle endurance (mean difference 47.08 cm H2 O sec [95%CI: 15.18; 78.99]) than women with no diastasis. There were no statistically significant differences between women with and without diastasis in any PFM variables at 6 weeks, 6 months, and 12 months postpartum. No significant difference was found in prevalence of UI in women with and without diastasis at any assessment points. Six weeks postpartum 15.9% of women without diastasis had POP versus 4.1% in the group with diastasis (P = 0.001).\n\n\nCONCLUSIONS\nWomen with diastasis were not more likely to have weaker PFM or more UI or POP. Neurourol. Urodynam. 36:716-721, 2017. © 2016 Wiley Periodicals, Inc.", "title": "" }, { "docid": "9ce5377315e50c70337aa4b7d6512de0", "text": "This paper discusses two main software engineering methodologies to system development, the waterfall model and the objectoriented approach. A review of literature reveals that waterfall model uses linear approach and is only suitable for sequential or procedural design. In waterfall, errors can only be detected at the end of the whole process and it may be difficult going back to repeat the entire process because the processes are sequential. Also, software based on waterfall approach is difficult to maintain and upgrade due to lack of integration between software components. On the other hand, the Object Oriented approach enables software systems to be developed as integration of software objects that work together to make a holistic and functional system. The software objects are independent of each other, allowing easy upgrading and maintenance of software codes. The paper also highlighted the merits and demerits of each of the approaches. This work concludes with the appropriateness of each approach in relation to the complexity of the problem domain.", "title": "" }, { "docid": "d6a6ee23cd1d863164c79088f75ece30", "text": "In our work, 3D objects classification has been dealt with convolutional neural networks which is a common paradigm recently in image recognition. In the first phase of experiments, 3D models in ModelNet10 and ModelNet40 data sets were voxelized and scaled with certain parameters. 
Classical CNN and 3D Dense CNN architectures were designed for training the pre-processed data. In addition, the two trained CNNs were ensembled and the results of them were observed. A success rate of 95.37% achieved on ModelNet10 by using 3D dense CNN, a success rate of 91.24% achieved with ensemble of two CNNs on ModelNet40.", "title": "" }, { "docid": "f66ebffa2efda9a4728a85c0b3a94fc7", "text": "The vulnerability of face recognition systems is a growing concern that has drawn the interest from both academic and research communities. Despite the availability of a broad range of face presentation attack detection (PAD) (or countermeasure or antispoofing) schemes, there exists no superior PAD technique due to evolution of sophisticated presentation attacks (or spoof attacks). In this paper, we present a new perspective for face presentation attack detection by introducing light field camera (LFC). Since the use of a LFC can record the direction of each incoming ray in addition to the intensity, it exhibits an unique characteristic of rendering multiple depth (or focus) images in a single capture. Thus, we present a novel approach that involves exploring the variation of the focus between multiple depth (or focus) images rendered by the LFC that in turn can be used to reveal the presentation attacks. To this extent, we first collect a new face artefact database using LFC that comprises of 80 subjects. Face artefacts are generated by simulating two widely used attacks, such as photo print and electronic screen attack. Extensive experiments carried out on the light field face artefact database have revealed the outstanding performance of the proposed PAD scheme when benchmarked with various well established state-of-the-art schemes.", "title": "" }, { "docid": "1b151d173825de2a2b43df8057d1a09d", "text": "An organisation can significantly improve its performance by observing how their business operations are currently being carried out. A great way to derive evidence-based process improvement insights is to compare the behaviour and performance of processes for different process cohorts by utilising the information recorded in event logs. A process cohort is a coherent group of process instances that has one or more shared characteristics. Such process performance comparisons can highlight positive or negative variations that can be evident in a particular cohort, thus enabling a tailored approach to process improvement. Although existing process mining techniques can be used to calculate various statistics from event logs for performance analysis, most techniques calculate and display the statistics for each cohort separately. Furthermore, the numerical statistics and simple visualisations may not be intuitive enough to allow users to compare the performance of various cohorts efficiently and effectively. We developed a novel visualisation framework for log-based process performance comparison to address these issues. It enables analysts to quickly identify the performance differences between cohorts. The framework supports the selection of cohorts and a three-dimensional visualisation to compare the cohorts using a variety of performance metrics. The approach has been implemented as a set of plug-ins within the open source process mining framework ProM and has been evaluated using two real-life datasets from the insurance domain to assess the usefulness of such a tool. 
This paper also derives a set of design principles from our approach which provide guidance for the development of new approaches to process cohort performance comparison. © 2017 Elsevier B.V. All rights reserved.", "title": "" } ]
scidocsrr
230c500bcde6657aeab12d5f85d8fc03
An Introduction to Physiological Player Metrics for Evaluating Games
[ { "docid": "f21b0f519f4bf46cb61b2dc2861014df", "text": "Player experience is difficult to evaluate and report, especially using quantitative methodologies in addition to observations and interviews. One step towards tying quantitative physiological measures of player arousal to player experience reports is Biometric Storyboards (BioSt). They can visualise meaningful relationships between a player's physiological changes and game events. This paper evaluates the usefulness of BioSt to the game industry. We presented the Biometric Storyboards technique to six game developers and interviewed them about the advantages and disadvantages of this technique.", "title": "" } ]
[ { "docid": "51d579a4d0d1fa3ea0be1ccfd3bb92a9", "text": "ÐThis paper describes a method for partitioning 3D surface meshes into useful segments. The proposed method generalizes morphological watersheds, an image segmentation technique, to 3D surfaces. This surface segmentation uses the total curvature of the surface as an indication of region boundaries. The surface is segmented into patches, where each patch has a relatively consistent curvature throughout, and is bounded by areas of higher, or drastically different, curvature. This algorithm has applications for a variety of important problems in visualization and geometrical modeling including 3D feature extraction, mesh reduction, texture mapping 3D surfaces, and computer aided design. Index TermsÐSurfaces, surface segmentation, watershed algorithm, curvature-based methods.", "title": "" }, { "docid": "4653bc89f67e1015919684d5ca732d8e", "text": "Visitors in Ragunan Zoo, often difficulties when trying to look for animals that want to visit. This difficulty will not happen if there is android -based mobile application that can guide visitors. Global Positioning System application such as Google Maps or GPS is used as an application that can inform our position on earth. Applications that are created not just to know \"where we are\" but has moved toward a more advanced system that can exploit the information for the convenience of users. GPS applications has been transformed into a pleasant traveling companion. Moreover, when visiting a city or a place that has never yet been visited, GPS can easily map a place that can drive well, so do not worry about getting lost in the city. Based on the idea of using GPS applications, create GPS application that can show the way and mapping of animal cages as well as information about the knowledge of each animal. This application is made to overcome the problems to occur and to further increase the number of visitors Ragunan Zoo.", "title": "" }, { "docid": "c1a76ba2114ec856320651489ee9b28b", "text": "The boost of available digital media has led to a significant increase in derivative work. With tools for manipulating objects becoming more and more mature, it can be very difficult to determine whether one piece of media was derived from another one or tampered with. As derivations can be done with malicious intent, there is an urgent need for reliable and easily usable tampering detection methods. However, even media considered semantically untampered by humans might have already undergone compression steps or light post-processing, making automated detection of tampering susceptible to false positives. In this paper, we present the PSBattles dataset which is gathered from a large community of image manipulation enthusiasts and provides a basis for media derivation and manipulation detection in the visual domain. The dataset consists of 102’028 images grouped into 11’142 subsets, each containing the original image as well as a varying number of manipulated derivatives.", "title": "" }, { "docid": "051603c7ee83c49b31428ce611de06c2", "text": "The Internet of Things (IoT) will feature pervasive sensing and control capabilities via a massive deployment of machine-type communication (MTC) devices. The limited hardware, low-complexity, and severe energy constraints of MTC devices present unique communication and security challenges. As a result, robust physical-layer security methods that can supplement or even replace lightweight cryptographic protocols are appealing solutions. 
In this paper, we present an overview of low-complexity physical-layer security schemes that are suitable for the IoT. A local IoT deployment is modeled as a composition of multiple sensor and data subnetworks, with uplink communications from sensors to controllers, and downlink communications from controllers to actuators. The state of the art in physical-layer security for sensor networks is reviewed, followed by an overview of communication network security techniques. We then pinpoint the most energy-efficient and low-complexity security techniques that are best suited for IoT sensing applications. This is followed by a discussion of candidate low-complexity schemes for communication security, such as on-off switching and space-time block codes. The paper concludes by discussing open research issues and avenues for further work, especially the need for a theoretically well-founded and holistic approach for incorporating complexity constraints in physical-layer security designs.", "title": "" }, { "docid": "ff09a72b95fbf3522d4df0f275fb5c3a", "text": "This paper provides a general overview of solid waste data and management practices employed in Turkey during the last decade. Municipal solid waste statistics and management practices including waste recovery and recycling initiatives have been evaluated. Detailed data on solid waste management practices including collection, recovery and disposal, together with the results of cost analyses, have been presented. Based on these evaluations basic cost estimations on collection and sorting of recyclable solid waste in Turkey have been provided. The results indicate that the household solid waste generation in Turkey, per capita, is around 0.6 kg/year, whereas municipal solid waste generation is close to 1 kg/year. The major constituents of municipal solid waste are organic in nature and approximately 1/4 of municipal solid waste is recyclable. Separate collection programmes for recyclable household waste by more than 60 municipalities, continuing in excess of 3 years, demonstrate solid evidence for public acceptance and continuing support from the citizens. Opinion polls indicate that more than 80% of the population in the project regions is ready and willing to participate in separate collection programmes. The analysis of output data of the Material Recovery Facilities shows that, although paper, including cardboard, is the main constituent, the composition of recyclable waste varies strongly by the source or the type of collection point.", "title": "" }, { "docid": "807a94db483f0ca72d3096e4897d2c76", "text": "A typical scene contains many different objects that, because of the limited processing capacity of the visual system, compete for neural representation. The competition among multiple objects in visual cortex can be biased by both bottom-up sensory-driven mechanisms and top-down influences, such as selective attention. Functional brain imaging studies reveal that, both in the absence and in the presence of visual stimulation, biasing signals due to selective attention can modulate neural activity in visual cortex in several ways. 
Although the competition among stimuli for representation is ultimately resolved within visual cortex, the source of top-down biasing signals derives from a network of areas in frontal and parietal cortex.", "title": "" }, { "docid": "5e2eee141595ae58ca69ee694dc51c8a", "text": "Evidence-based dietary information represented as unstructured text is a crucial information that needs to be accessed in order to help dietitians follow the new knowledge arrives daily with newly published scientific reports. Different named-entity recognition (NER) methods have been introduced previously to extract useful information from the biomedical literature. They are focused on, for example extracting gene mentions, proteins mentions, relationships between genes and proteins, chemical concepts and relationships between drugs and diseases. In this paper, we present a novel NER method, called drNER, for knowledge extraction of evidence-based dietary information. To the best of our knowledge this is the first attempt at extracting dietary concepts. DrNER is a rule-based NER that consists of two phases. The first one involves the detection and determination of the entities mention, and the second one involves the selection and extraction of the entities. We evaluate the method by using text corpora from heterogeneous sources, including text from several scientifically validated web sites and text from scientific publications. Evaluation of the method showed that drNER gives good results and can be used for knowledge extraction of evidence-based dietary recommendations.", "title": "" }, { "docid": "3a1bbaea6dae7f72a5276a32326884fe", "text": "Statistics suggests that there are around 40 cases per million of quadriplegia every year. Great people like Stephen Hawking have been suffering from this phenomenon. Our project attempts to make lives of the people suffering from this phenomenon simple by helping them move around on their own and not being a burden on others. The idea is to create an Eye Controlled System which enables the movement of the patient’s wheelchair depending on the movements of eyeball. A person suffering from quadriplegia can move his eyes and partially tilt his head, thus giving is an opportunity for detecting these movements. There are various kinds of interfaces developed for powered wheelchair and also there are various new techniques invented but these are costly and not affordable to the poor and needy people. In this paper, we have proposed the simpler and cost effective method of developing wheelchair. We have created a system wherein a person sitting on this automated Wheel Chair with a camera mounted on it, is able to move in a direction just by looking in that direction by making eye movements. The captured camera signals are then send to PC and controlled MATLAB, which will then be send to the Arduino circuit over the Serial Interface which in turn will control motors and allow the wheelchair to move in a particular direction. The system is affordable and hence can be used by patients spread over a large economy range. KeywordsAutomatic wheelchair, Iris Movement Detection, Servo Motor, Daugman’s algorithm, Arduino.", "title": "" }, { "docid": "7b25d1c4d20379a8a0fabc7398ea2c28", "text": "In this paper we introduce an efficient and stable implicit SPH method for the physically-based simulation of incompressible fluids. In the area of computer graphics the most efficient SPH approaches focus solely on the correction of the density error to prevent volume compression. 
However, the continuity equation for incompressible flow also demands a divergence-free velocity field which is neglected by most methods. Although a few methods consider velocity divergence, they are either slow or have a perceivable density fluctuation.\n Our novel method uses an efficient combination of two pressure solvers which enforce low volume compression (below 0.01%) and a divergence-free velocity field. This can be seen as enforcing incompressibility both on position level and velocity level. The first part is essential for realistic physical behavior while the divergence-free state increases the stability significantly and reduces the number of solver iterations. Moreover, it allows larger time steps which yields a considerable performance gain since particle neighborhoods have to be updated less frequently. Therefore, our divergence-free SPH (DFSPH) approach is significantly faster and more stable than current state-of-the-art SPH methods for incompressible fluids. We demonstrate this in simulations with millions of fast moving particles.", "title": "" }, { "docid": "80bb8f4af70a6c0b6dc5fd149c128154", "text": "The skin care product market is growing due to the threat of ultraviolet (UV) radiation caused by the destruction of the ozone layer, increasing demand for tanning, and the tendency to wear less clothing. Accordingly, there is a potential demand for a personalized UV monitoring device, which can play a fundamental role in skin cancer prevention by providing measurements of UV radiation intensities and corresponding recommendations. This paper highlights the development and initial validation of a wireless and portable embedded device for personalized UV monitoring which is based on a novel software architecture, a high-end UV sensor, and conventional PDA (or a cell phone). In terms of short-term applications, by calculating the UV index, it informs the users about their maximum recommended sun exposure time by taking their skin type and sun protection factor (SPF) of the applied sunscreen into consideration. As for long-term applications, given that the damage caused by UV light is accumulated over days, it displays the amount of UV received over a certain course of time, from a single day to a month.", "title": "" }, { "docid": "9a0530ae13507d14b66ee74ec05c43bd", "text": "The paper investigates the role of the government and self-regulatory reputation mechanisms to internalise externalities of market operation. If it pays off for companies to invest in a good reputation by an active policy of corporate social responsibility (CSR), external effects of the market will be (partly) internalised by the market itself. The strength of the reputation mechanism depends on the functioning of non governmental organisations (NGOs), the transparency of the company, the time horizon of the company, and on the behaviour of employees, consumers and investors. On the basis of an extensive study of the empirical literature on these topics, we conclude that in general the working of the reputation mechanism is rather weak. Especially the transparency of companies is a bottleneck. If the government would force companies to be more transparent, it could initiate a self-enforcing spiral that would improve the working of the reputation mechanism. We also argue that the working of the reputation mechanism will be weaker for smaller companies and for both highly competitive and monopolistic markets. 
We therefore conclude that government regulation is still necessary, especially for small companies. Tijdschrift voor Economie en Management Vol. XLIX, 2, 2004", "title": "" }, { "docid": "cd058902ed470efc022c328765a40b34", "text": "Secure signal authentication is arguably one of the most challenging problems in the Internet of Things (IoT), due to the large-scale nature of the system and its susceptibility to man-in-the-middle and data-injection attacks. In this paper, a novel watermarking algorithm is proposed for dynamic authentication of IoT signals to detect cyber-attacks. The proposed watermarking algorithm, based on a deep learning long short-term memory structure, enables the IoT devices (IoTDs) to extract a set of stochastic features from their generated signal and dynamically watermark these features into the signal. This method enables the IoT gateway, which collects signals from the IoTDs, to effectively authenticate the reliability of the signals. Moreover, in massive IoT scenarios, since the gateway cannot authenticate all of the IoTDs simultaneously due to computational limitations, a game-theoretic framework is proposed to improve the gateway’s decision making process by predicting vulnerable IoTDs. The mixed-strategy Nash equilibrium (MSNE) for this game is derived, and the uniqueness of the expected utility at the equilibrium is proven. In the massive IoT system, due to the large set of available actions for the gateway, the MSNE is shown to be analytically challenging to derive, and thus, a learning algorithm that converges to the MSNE is proposed. Moreover, in order to handle incomplete information scenarios, in which the gateway cannot access the state of the unauthenticated IoTDs, a deep reinforcement learning algorithm is proposed to dynamically predict the state of unauthenticated IoTDs and allow the gateway to decide on which IoTDs to authenticate. Simulation results show that with an attack detection delay of under 1 s, the messages can be transmitted from IoTDs with an almost 100% reliability. The results also show that by optimally predicting the set of vulnerable IoTDs, the proposed deep reinforcement learning algorithm reduces the number of compromised IoTDs by up to 30%, compared to an equal probability baseline.", "title": "" }, { "docid": "d842f25d20a85f19c63546501bc6699a", "text": "Microservices have been one of the fastest-rising trends in the development of enterprise applications and enterprise application landscapes. Even though various mapping studies investigated the open challenges around microservices from literature, it is difficult to have a clear view of existing challenges in designing, developing, and maintaining systems based on microservices architecture as it is perceived by practitioners. In this paper, we present the results of an empirical survey to assess the current state of practice and collect challenges in microservices architecture. Therefore, we synthesize the 25 collected results and produce a clear overview for answering our research questions. The result of our study can be a basis for planning future research and applications of microservices architecture.", "title": "" }, { "docid": "0a37fcb6c1fba747503fc4e3b5540680", "text": "In this paper we introduce the problem of predicting action progress in videos. We argue that this is an extremely important task because, on the one hand, it can be valuable for a wide range of applications and, on the other hand, it facilitates better action detection results. 
To solve this problem we introduce a novel approach, named ProgressNet, capable of predicting when an action takes place in a video, where it is located within the frames, and how far it has progressed during its execution. Motivated by the recent success obtained from the interaction of Convolutional and Recurrent Neural Networks, our model is based on a combination of the Faster R-CNN framework, to make framewise predictions, and LSTM networks, to estimate action progress through time. After introducing two evaluation protocols for the task at hand, we demonstrate the capability of our model to effectively predict action progress on the UCF-101 and J-HMDB datasets. Additionally, we show that exploiting action progress it is also possible to improve spatio-temporal localization.", "title": "" }, { "docid": "bd817e69a03da1a97e9c412b5e09eb33", "text": "The emergence of carbapenemase producing bacteria, especially New Delhi metallo-β-lactamase (NDM-1) and its variants, worldwide, has raised amajor public health concern. NDM-1 hydrolyzes a wide range of β-lactam antibiotics, including carbapenems, which are the last resort of antibiotics for the treatment of infections caused by resistant strain of bacteria. In this review, we have discussed bla NDM-1variants, its genetic analysis including type of specific mutation, origin of country and spread among several type of bacterial species. Wide members of enterobacteriaceae, most commonly Escherichia coli, Klebsiella pneumoniae, Enterobacter cloacae, and gram-negative non-fermenters Pseudomonas spp. and Acinetobacter baumannii were found to carry these markers. Moreover, at least seventeen variants of bla NDM-type gene differing into one or two residues of amino acids at distinct positions have been reported so far among different species of bacteria from different countries. The genetic and structural studies of these variants are important to understand the mechanism of antibiotic hydrolysis as well as to design new molecules with inhibitory activity against antibiotics. This review provides a comprehensive view of structural differences among NDM-1 variants, which are a driving force behind their spread across the globe.", "title": "" }, { "docid": "3c5e8575ca6c35c3f19c5c2b1a61565f", "text": "In this paper, a 77-GHz automotive radar sensor transceiver front-end module is packaged with a novel embedded wafer level packaging (EMWLP) technology. The bare transceiver die and the pre-fabricated through silicon via (TSV) chip are reconfigured to form a molded wafer through a compression molding process. The TSVs built on a high resistivity wafer serve as vertical interconnects, carrying radio-frequency (RF) signals up to 77 GHz. The RF path transitions are carefully designed to minimize the insertion loss in the frequency band of concern. The proposed EMWLP module also provides a platform to design integrated passive components. A substrate-integrated waveguide resonator is implemented with TSVs as the via fences, and it is later used to design a second-order 77-GHz high performance bandpass filter. Both the resonator and the bandpass filter are fabricated and measured, and the measurement results match with the simulation results very well.", "title": "" }, { "docid": "de7331c328ba54b7ddd8a542aec3b19f", "text": "Predicting the next location a user tends to visit is an important task for applications like location-based advertising, traffic planning, and tour recommendation. 
We consider the next location prediction problem for semantic trajectory data, wherein each GPS record is attached with a text message that describes the user's activity. In semantic trajectories, the confluence of spatiotemporal transitions and textual messages indicates user intents at a fine granularity and has great potential in improving location prediction accuracies. Nevertheless, existing methods designed for GPS trajectories fall short in capturing latent user intents for such semantics-enriched trajectory data. We propose a method named semantics-enriched recurrent model (SERM). SERM jointly learns the embeddings of multiple factors (user, location, time, keyword) and the transition parameters of a recurrent neural network in a unified framework. Therefore, it effectively captures semantics-aware spatiotemporal transition regularities to improve location prediction accuracies. Our experiments on two real-life semantic trajectory datasets show that SERM achieves significant improvements over state-of-the-art methods.", "title": "" }, { "docid": "1d53b01ee1a721895a17b7d0f3535a28", "text": "We present a suite of algorithms for self-organization of wireless sensor networks, in which there is a scalably large number of mainly static nodes with highly constrained energy resources. The protocols further support slow mobility by a subset of the nodes, energy-efficient routing, and formation of ad hoc subnetworks for carrying out cooperative signal processing functions among a set of the nodes. † This research is supported by DARPA contract number F04701-97-C-0010, and was presented in part at the 37 Allerton Conference on Communication, Computing and Control, September 1999. ‡ Corresponding author.", "title": "" }, { "docid": "bb41e52004782f8ead3549fb9d746e6d", "text": "A method to generate stable transconductance (gm) without using precise external components is presented. The off-chip resistor in a conventional constant-gm bias circuit is replaced with a variable on-chip resistor. A MOSFET biased in triode region is used as a variable resistor. The resistance of the MOSFET is tuned by a background tuning scheme to achieve the stable transconductance that is immune to process, voltage and temperature variation. The transconductance generated by the constant-gm bias circuit designed in 0.18mum CMOS process with 1.5F supply displays less than 1% variation for a 20% change in power supply voltage and less than plusmn1.5% variation for a 60degC change in temperature. The whole circuit draws approximately 850muA from a supply", "title": "" }, { "docid": "3228d57f3d74f56444ce7fb9ed18e042", "text": "Gaussian process (GP) models are widely used to perform Bayesian nonlinear regression and classification — tasks that are central to many machine learning problems. A GP is nonparametric, meaning that the complexity of the model grows as more data points are received. Another attractive feature is the behaviour of the error bars. They naturally grow in regions away from training data where we have high uncertainty about the interpolating function. In their standard form GPs have several limitations, which can be divided into two broad categories: computational difficulties for large data sets, and restrictive modelling assumptions for complex data sets. This thesis addresses various aspects of both of these problems. The training cost for a GP hasO(N3) complexity, whereN is the number of training data points. This is due to an inversion of the N × N covariance matrix. 
In this thesis we develop several new techniques to reduce this complexity to O(NM^2), where M is a user chosen number much smaller than N. The sparse approximation we use is based on a set of M ‘pseudo-inputs’ which are optimised together with hyperparameters at training time. We develop a further approximation based on clustering inputs that can be seen as a mixture of local and global approximations. Standard GPs assume a uniform noise variance. We use our sparse approximation described above as a way of relaxing this assumption. By making a modification of the sparse covariance function, we can model input dependent noise. To handle high dimensional data sets we use supervised linear dimensionality reduction. As another extension of the standard GP, we relax the Gaussianity assumption of the process by learning a nonlinear transformation of the output space. All these techniques further increase the applicability of GPs to real complex data sets. We present empirical comparisons of our algorithms with various competing techniques, and suggest problem dependent strategies to follow in practice.", "title": "" } ]
scidocsrr
447e0cd3b3155c45bc6a3c37b7b65ed7
Recurrent Network Models of Sequence Generation and Memory
[ { "docid": "2065faf3e72a8853dd6cbba1daf9c64a", "text": "One of a good overview all the output neurons. The fixed point attractors have resulted in order to the attractor furthermore. As well as memory classification and all the basic ideas. Introducing the form of strange attractors or licence agreement may be fixed point! The above with input produces and the techniques brought from one of cognitive processes. The study of cpgs is the, global dynamics as nearest neighbor classifiers. Attractor networks encode knowledge of the, network will be ergodic so. These synapses will be applicable exploring one interesting and neural networks other technology professionals.", "title": "" } ]
[ { "docid": "9ca90172c5beff5922b4f5274ef61480", "text": "In the past decade, Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in various Artificial Intelligence tasks. To accelerate the experimentation and development of CNNs, several software frameworks have been released, primarily targeting power-hungry CPUs and GPUs. In this context, reconfigurable hardware in the form of FPGAs constitutes a potential alternative platform that can be integrated in the existing deep-learning ecosystem to provide a tunable balance between performance, power consumption, and programmability. In this article, a survey of the existing CNN-to-FPGA toolflows is presented, comprising a comparative study of their key characteristics, which include the supported applications, architectural choices, design space exploration methods, and achieved performance. Moreover, major challenges and objectives introduced by the latest trends in CNN algorithmic research are identified and presented. Finally, a uniform evaluation methodology is proposed, aiming at the comprehensive, complete, and in-depth evaluation of CNN-to-FPGA toolflows.", "title": "" }, { "docid": "b59e90e5d1fa3f58014dedeea9d5b6e4", "text": "The results of vitrectomy in 240 consecutive cases of ocular trauma were reviewed. Of these cases, 71.2% were war injuries. Intraocular foreign bodies were present in 155 eyes, of which 74.8% were metallic and 61.9% ferromagnetic. Multivariate analysis identified the prognostic factors predictive of poor visual outcome, which included: (1) presence of an afferent pupillary defect; (2) double perforating injuries; and (3) presence of intraocular foreign bodies. Association of vitreous hemorrhage with intraocular foreign bodies was predictive of a poor prognosis. Eyes with foreign bodies retained in the anterior segment and vitreous had a better prognosis than those with foreign bodies embedded in the retina. Timing of vitrectomy and type of trauma had no significant effect on the final visual results. Prophylactic scleral buckling reduced the incidence of retinal detachment after surgery. Injuries confined to the cornea had a better prognosis than scleral injuries.", "title": "" }, { "docid": "60f6e3345aae1f91acb187ba698f073b", "text": "A Cube-Satellite (CubeSat) is a small satellite weighing no more than one kilogram. CubeSats are used for space research, but their low-rate communication capability limits functionality. As greater payload and instrumentation functions are sought, increased data rate is needed. Since most CubeSats currently transmit at a 437 MHz frequency, several directional antenna types were studied for a 2.45 GHz, larger bandwidth transmission. This higher frequency provides the bandwidth needed for increasing the data rate. A deployable antenna mechanism maybe needed because most directional antennas are bigger than the CubeSat size constraints. From the study, a deployable hemispherical helical antenna prototype was built. Transmission between two prototype antenna equipped transceivers at varying distances tested the helical performance. When comparing the prototype antenna's maximum transmission distance to the other commercial antennas, the prototype outperformed all commercial antennas, except the patch antenna. The root cause was due to the helical antenna's narrow beam width. 
Future work can be done in attaining a more accurate alignment with the satellite's directional antenna to downlink with a terrestrial ground station.", "title": "" }, { "docid": "238b1a142b406a7e736126582675ba67", "text": "It was hypothesized that relative group status and endorsement of ideologies that legitimize group status differences moderate attributions to discrimination in intergroup encounters. According to the status-legitimacy hypothesis, the more members of low-status groups endorse the ideology of individual mobility, the less likely they are to attribute negative outcomes from higher status group members to discrimination. In contrast, the more members of high-status groups endorse individual mobility, the more likely they are to attribute negative outcomes from lower status group members to discrimination. Results from 3 studies using 2 different methodologies provide support for this hypothesis among members of different high-status (European Americans and men) and low-status (African Americans, Latino Americans, and women) groups.", "title": "" }, { "docid": "4e4f19bbec96e8d0e94fb488d17af6dd", "text": "Covering: 2012 to 2016Metabolic engineering using systems biology tools is increasingly applied to overproduce secondary metabolites for their potential industrial production. In this Highlight, recent relevant metabolic engineering studies are analyzed with emphasis on host selection and engineering approaches for the optimal production of various prokaryotic secondary metabolites: native versus heterologous hosts (e.g., Escherichia coli) and rational versus random approaches. This comparative analysis is followed by discussions on systems biology tools deployed in optimizing the production of secondary metabolites. The potential contributions of additional systems biology tools are also discussed in the context of current challenges encountered during optimization of secondary metabolite production.", "title": "" }, { "docid": "d2d7595f04af96d7499d7b7c06ba2608", "text": "Deep Neural Network (DNN) is a widely used deep learning technique. How to ensure the safety of DNN-based system is a critical problem for the research and application of DNN. Robustness is an important safety property of DNN. However, existing work of verifying DNN’s robustness is timeconsuming and hard to scale to large-scale DNNs. In this paper, we propose a boosting method for DNN robustness verification, aiming to find counter-examples earlier. Our observation is DNN’s different inputs have different possibilities of existing counter-examples around them, and the input with a small difference between the largest output value and the second largest output value tends to be the achilles’s heel of the DNN. We have implemented our method and applied it on Reluplex, a state-ofthe-art DNN verification tool, and four DNN attacking methods. The results of the extensive experiments on two benchmarks indicate the effectiveness of our boosting method.", "title": "" }, { "docid": "d59a2c1673d093584c5f19212d6ba520", "text": "Introduction and Motivation Today, a majority of data is fundamentally distributed in nature. Data for almost any task is collected over a broad area, and streams in at a much greater rate than ever before. In particular, advances in sensor technology and miniaturization have led to the concept of the sensor network: a (typically wireless) collection of sensing devices collecting detailed data about their surroundings. 
A fundamental question arises: how to query and monitor this rich new source of data? Similar scenarios emerge within the context of monitoring more traditional, wired networks, and in other emerging models such as P2P networks and grid-based computing. The prevailing paradigm in database systems has been understanding management of centralized data: how to organize, index, access, and query data that is held centrally on a single machine or a small number of closely linked machines. In these distributed scenarios, the axiom is overturned: now, data typically streams into remote sites at high rates. Here, it is not feasible to collect the data in one place: the volume of data collection is too high, and the capacity for data communication relatively low. For example, in battery-powered wireless sensor networks, the main drain on battery life is communication, which is orders of magnitude more expensive than computation or sensing. This establishes a fundamental concept for distributed stream monitoring: if we can perform more computational work within the network to reduce the communication needed, then we can significantly improve the value of our network, by increasing its useful life and extending the range of computation possible over the network. We consider two broad classes of approaches to such in-network query processing, by analogy to query types in traditional DBMSs. In the one shot model, a query is issued by a user at some site, and must be answered based on the current state of data in the network. We identify several possible approaches to this problem. For simple queries, partial computation of the result over a tree can reduce the data transferred significantly. For “holistic” queries, such as medians, count distinct and so on, clever composable summaries give a compact way to accurately approximate query answers. Lastly, careful modeling of correlations between measurements and other trends in the data can further reduce the number of sensors probed. In the continuous model, a query is placed by a user which re-", "title": "" }, { "docid": "63115b12e4a8192fdce26eb7e2f8989a", "text": "Theorems and techniques to form different types of transformationally invariant processing and to produce the same output quantitatively based on either transformationally invariant operators or symmetric operations have recently been introduced by the authors. In this study, we further propose to compose a geared rotationally identical CNN system (GRI-CNN) with a small angle increment by connecting networks of participated processes at the first flatten layer. Using an ordinary CNN structure as a base, requirements for constructing a GRI-CNN include the use of either symmetric input vector or kernels with an angle increment that can form a complete cycle as a \"gearwheel\". Four basic GRI-CNN structures were studied. Each of them can produce quantitatively identical output results when a rotation angle of the input vector is evenly divisible by the increment angle of the gear. Our study showed when a rotated input vector does not match to a gear angle, the GRI-CNN can also produce a highly consistent result. With an ultrafine increment angle (e.g., 1 or 0.1), a virtually isotropic CNN system can be constructed.", "title": "" }, { "docid": "11d06fb5474df44a6bc733bd5cd1263d", "text": "Understanding how materials that catalyse the oxygen evolution reaction (OER) function is essential for the development of efficient energy-storage technologies. 
The traditional understanding of the OER mechanism on metal oxides involves four concerted proton-electron transfer steps on metal-ion centres at their surface and product oxygen molecules derived from water. Here, using in situ 18O isotope labelling mass spectrometry, we provide direct experimental evidence that the O2 generated during the OER on some highly active oxides can come from lattice oxygen. The oxides capable of lattice-oxygen oxidation also exhibit pH-dependent OER activity on the reversible hydrogen electrode scale, indicating non-concerted proton-electron transfers in the OER mechanism. Based on our experimental data and density functional theory calculations, we discuss mechanisms that are fundamentally different from the conventional scheme and show that increasing the covalency of metal-oxygen bonds is critical to trigger lattice-oxygen oxidation and enable non-concerted proton-electron transfers during OER.", "title": "" }, { "docid": "00be65f8f46d245d8629a1faa30772d7", "text": "Concretization is one of the most labor-intensive phases of the modelbased testing process. This study concentrates on concretization of the abstract tests generated from the test models. The purpose of the study is to design and implement a structure to automate this phase which can reduce the required effort specially in every system update. The structure is completed and discussed as an extension on a modelbased testing tool named ModelJUnit using adaptation approach. In this structure, the focus is mainly on bridging the gap in data-level between the SUT and the model.", "title": "" }, { "docid": "d99d4bdf1af85c14653c7bbde10eca7b", "text": "Plants endure a variety of abiotic and biotic stresses, all of which cause major limitations to production. Among abiotic stressors, heavy metal contamination represents a global environmental problem endangering humans, animals, and plants. Exposure to heavy metals has been documented to induce changes in the expression of plant proteins. Proteins are macromolecules directly responsible for most biological processes in a living cell, while protein function is directly influenced by posttranslational modifications, which cannot be identified through genome studies. Therefore, it is necessary to conduct proteomic studies, which enable the elucidation of the presence and role of proteins under specific environmental conditions. This review attempts to present current knowledge on proteomic techniques developed with an aim to detect the response of plant to heavy metal stress. Significant contributions to a better understanding of the complex mechanisms of plant acclimation to metal stress are also discussed.", "title": "" }, { "docid": "869cc834f84bc88a258b2d9d9d4f3096", "text": "Obesity is a multifactorial disease characterized by an excessive weight for height due to an enlarged fat deposition such as adipose tissue, which is attributed to a higher calorie intake than the energy expenditure. The key strategy to combat obesity is to prevent chronic positive impairments in the energy equation. However, it is often difficult to maintain energy balance, because many available foods are high-energy yielding, which is usually accompanied by low levels of physical activity. The pharmaceutical industry has invested many efforts in producing antiobesity drugs; but only a lipid digestion inhibitor obtained from an actinobacterium is currently approved and authorized in Europe for obesity treatment. 
This compound inhibits the activity of pancreatic lipase, which is one of the enzymes involved in fat digestion. In a similar way, hundreds of extracts are currently being isolated from plants, fungi, algae, or bacteria and screened for their potential inhibition of pancreatic lipase activity. Among them, extracts isolated from common foodstuffs such as tea, soybean, ginseng, yerba mate, peanut, apple, or grapevine have been reported. Some of them are polyphenols and saponins with an inhibitory effect on pancreatic lipase activity, which could be applied in the management of the obesity epidemic.", "title": "" }, { "docid": "8d350cc11997b6a0dc96c9fef2b1919f", "text": "Task-parameterized models of movements aims at automatically adapting movements to new situations encountered by a robot. The task parameters can for example take the form of positions of objects in the environment, or landmark points that the robot should pass through. This tutorial aims at reviewing existing approaches for task-adaptive motion encoding. It then narrows down the scope to the special case of task parameters that take the form of frames of reference, coordinate systems, or basis functions, which are most commonly encountered in service robotics. Each section of the paper is accompanied with source codes designed as simple didactic examples implemented in Matlab with a full compatibility with GNU Octave, closely following the notation and equations of the article. It also presents ongoing work and further challenges that remain to be addressed, with examples provided in simulation and on a real robot (transfer of manipulation behaviors to the Baxter bimanual robot). The repository for the accompanying source codes is available at http://www.idiap.ch/software/pbdlib/.", "title": "" }, { "docid": "3745ead7df976976f3add631ad175930", "text": "Natural products and traditional medicines are of great importance. Such forms of medicine as traditional Chinese medicine, Ayurveda, Kampo, traditional Korean medicine, and Unani have been practiced in some areas of the world and have blossomed into orderly-regulated systems of medicine. This study aims to review the literature on the relationship among natural products, traditional medicines, and modern medicine, and to explore the possible concepts and methodologies from natural products and traditional medicines to further develop drug discovery. The unique characteristics of theory, application, current role or status, and modern research of eight kinds of traditional medicine systems are summarized in this study. Although only a tiny fraction of the existing plant species have been scientifically researched for bioactivities since 1805, when the first pharmacologically-active compound morphine was isolated from opium, natural products and traditional medicines have already made fruitful contributions for modern medicine. When used to develop new drugs, natural products and traditional medicines have their incomparable advantages, such as abundant clinical experiences, and their unique diversity of chemical structures and biological activities.", "title": "" }, { "docid": "f29f529ee14f4ae90ebb08ba26f8a8c1", "text": "After completing this article, the reader should be able to:  Describe the various biopsy types that require specimen imaging.  List methods of guiding biopsy procedures.  Explain the reasons behind specimen imaging. 
 Describe various methods for imaging specimens.", "title": "" }, { "docid": "a39c0db041f31370135462af467426ed", "text": "Part of the ventral temporal lobe is thought to be critical for face perception, but what determines this specialization remains unknown. We present evidence that expertise recruits the fusiform gyrus 'face area'. Functional magnetic resonance imaging (fMRI) was used to measure changes associated with increasing expertise in brain areas selected for their face preference. Acquisition of expertise with novel objects (greebles) led to increased activation in the right hemisphere face areas for matching of upright greebles as compared to matching inverted greebles. The same areas were also more activated in experts than in novices during passive viewing of greebles. Expertise seems to be one factor that leads to specialization in the face area.", "title": "" }, { "docid": "d7e7cdc9ac55d5af199395becfe02d73", "text": "Text recognition in images is a research area which attempts to develop a computer system with the ability to automatically read the text from images. These days there is a huge demand in storing the information available in paper documents format in to a computer storage disk and then later reusing this information by searching process. One simple way to store information from these paper documents in to computer system is to first scan the documents and then store them as images. But to reuse this information it is very difficult to read the individual contents and searching the contents form these documents line-by-line and word-by-word. The challenges involved in this the font characteristics of the characters in paper documents and quality of images. Due to these challenges, computer is unable to recognize the characters while reading them. Thus there is a need of character recognition mechanisms to perform Document Image Analysis (DIA) which transforms documents in paper format to electronic format. In this paper we have discuss method for text recognition from images. The objective of this paper is to recognition of text from image for better understanding of the reader by using particular sequence of different processing module.", "title": "" }, { "docid": "057a521ce1b852591a44417e788e4541", "text": "We introduce InfraStructs, material-based tags that embed information inside digitally fabricated objects for imaging in the Terahertz region. Terahertz imaging can safely penetrate many common materials, opening up new possibilities for encoding hidden information as part of the fabrication process. We outline the design, fabrication, imaging, and data processing steps to fabricate information inside physical objects. Prototype tag designs are presented for location encoding, pose estimation, object identification, data storage, and authentication. We provide detailed analysis of the constraints and performance considerations for designing InfraStruct tags. Future application scenarios range from production line inventory, to customized game accessories, to mobile robotics.", "title": "" }, { "docid": "f10ce9ef67abec42deeabbf98f7f7cd8", "text": "In this paper we first deal with the design and operational control of Automated Guided Vehicle (AGV) systems, starting from the literature on these topics. Three main issues emerge: track layout, the number of AGVs required and operational transportation control. An hierarchical queueing network approach to determine the number of AGVs is decribed. 
Also basic concepts are presented for the transportation control of both a job-shop and a flow-shop. Next we report on the results of a case study, in which track layout and transportation control are the main issues. Finally we suggest some topics for further research.", "title": "" }, { "docid": "20b00a2cc472dfec851f4aea42578a9e", "text": "The self-regulatory strength model maintains that all acts of self-regulation, self-control, and choice result in a state of fatigue called ego-depletion. Self-determination theory differentiates between autonomous regulation and controlled regulation. Because making decisions represents one instance of self-regulation, the authors also differentiate between autonomous choice and controlled choice. Three experiments support the hypothesis that whereas conditions representing controlled choice would be egodepleting, conditions that represented autonomous choice would not. In Experiment 3, the authors found significant mediation by perceived self-determination of the relation between the choice condition (autonomous vs. controlled) and ego-depletion as measured by performance.", "title": "" } ]
scidocsrr
5819b7ff73e9e77f30f5d417903402e5
Publications Received
[ { "docid": "026a0651177ee631a80aaa7c63a1c32f", "text": "This paper is an introduction to natural language interfaces to databases (Nlidbs). A brief overview of the history of Nlidbs is rst given. Some advantages and disadvantages of Nlidbs are then discussed, comparing Nlidbs to formal query languages, form-based interfaces, and graphical interfaces. An introduction to some of the linguistic problems Nlidbs have to confront follows, for the beneet of readers less familiar with computational linguistics. The discussion then moves on to Nlidb architectures, porta-bility issues, restricted natural language input systems (including menu-based Nlidbs), and Nlidbs with reasoning capabilities. Some less explored areas of Nlidb research are then presented, namely database updates, meta-knowledge questions, temporal questions, and multi-modal Nlidbs. The paper ends with reeections on the current state of the art.", "title": "" } ]
[ { "docid": "4fc356024295824f6c68360bf2fcb860", "text": "Detecting depression is a key public health challenge, as almost 12% of all disabilities can be attributed to depression. Computational models for depression detection must prove not only that can they detect depression, but that they can do it early enough for an intervention to be plausible. However, current evaluations of depression detection are poor at measuring model latency. We identify several issues with the currently popular ERDE metric, and propose a latency-weighted F1 metric that addresses these concerns. We then apply this evaluation to several models from the recent eRisk 2017 shared task on depression detection, and show how our proposed measure can better capture system differences.", "title": "" }, { "docid": "96b47f766be916548226abac36b8f318", "text": "Deep learning approaches have achieved state-of-the-art performance in cardiac magnetic resonance (CMR) image segmentation. However, most approaches have focused on learning image intensity features for segmentation, whereas the incorporation of anatomical shape priors has received less attention. In this paper, we combine a multi-task deep learning approach with atlas propagation to develop a shape-constrained bi-ventricular segmentation pipeline for short-axis CMR volumetric images. The pipeline first employs a fully convolutional network (FCN) that learns segmentation and landmark localisation tasks simultaneously. The architecture of the proposed FCN uses a 2.5D representation, thus combining the computational advantage of 2D FCNs networks and the capability of addressing 3D spatial consistency without compromising segmentation accuracy. Moreover, the refinement step is designed to explicitly enforce a shape constraint and improve segmentation quality. This step is effective for overcoming image artefacts (e.g. due to different breath-hold positions and large slice thickness), which preclude the creation of anatomically meaningful 3D cardiac shapes. The proposed pipeline is fully automated, due to network’s ability to infer landmarks, which are then used downstream in the pipeline to initialise atlas propagation. We validate the pipeline on 1831 healthy subjects and 649 subjects with pulmonary hypertension. Extensive numerical experiments on the two datasets demonstrate that our proposed method is robust and capable of producing accurate, high-resolution and anatomically smooth bi-ventricular 3D models, despite the artefacts in input CMR volumes.", "title": "" }, { "docid": "8e3aef1e18f1db603368a32be0ed9fab", "text": "IT departments are under pressure to serve their enterprises by professionalizing their business intelligence (BI) operation. Companies can only be effective when their systematic and structured approach to BI is linked into the business itself.", "title": "" }, { "docid": "48a78ce66c4cc2205a39ba25b2710e33", "text": "Viable tumor cells actively release vesicles into the peripheral circulation and other biologic fluids, which exhibit proteins and RNAs characteristic of that cell. Our group demonstrated the presence of these extracellular vesicles of tumor origin within the peripheral circulation of cancer patients and proposed their utility for diagnosing the presence of tumors and monitoring their response to therapy in the 1970s. 
However, it has only been in the past 10 years that these vesicles have garnered interest based on the recognition that they serve as essential vehicles for intercellular communication, are key determinants of the immunosuppressive microenvironment observed in cancer and provide stability to tumor-derived components that can serve as diagnostic biomarkers. To date, the clinical utility of extracellular vesicles has been hampered by issues with nomenclature and methods of isolation. The term \"exosomes\" was introduced in 1981 to denote any nanometer-sized vesicles released outside the cell and to differentiate them from intracellular vesicles. Based on this original definition, we use \"exosomes\" as synonymous with \"extracellular vesicles.\" While our original studies used ultracentrifugation to isolate these vesicles, we immediately became aware of the significant impact of the isolation method on the number, type, content and integrity of the vesicles isolated. In this review, we discuss and compare the most commonly utilized methods for purifying exosomes for post-isolation analyses. The exosomes derived from these approaches have been assessed for quantity and quality of specific RNA populations and specific marker proteins. These results suggest that, while each method purifies exosomal material, there are pros and cons of each and there are critical issues linked with centrifugation-based methods, including co-isolation of non-exosomal materials, damage to the vesicle's membrane structure and non-standardized parameters leading to qualitative and quantitative variability. The down-stream analyses of these resulting varying exosomes can yield misleading results and conclusions.", "title": "" }, { "docid": "c125a4a70a6b347456f2e22c0899e84e", "text": "Fenotropil and its structural analog--compound RGPU-95 to a greater extent reduce the severity of anxious and depressive behavior in male rats than in females. On expression of the anxiolytic compound RGPU-95 significantly exceeds Fenotropil, but inferior to Diazepam; of antidepressant activity--comparable to Melipramin and exceeds Fenotropil.", "title": "" }, { "docid": "7b548e0e1e02e3a3150d0fac19d6f6fd", "text": "The paper presents a new torque-controlled lightweight robot for medical procedures developed at the Institute of Robotics and Mechatronics of the German Aerospace Center. Based on the experiences in lightweight robotics and anthropomorphic robotic hands, a small robot arm with 7 axis and torque-controlled joints tailored to surgical procedures has been designed. With an optimized anthropomorphic kinematics, integrated multi-modal sensors and flexible robot control architecture, the first prototype KINEMEDIC and the new generation MIRO, enhanced for endoscopic surgery, can easily be adapted to a wide range of different medical procedures and scenarios by the use of specialized instruments and compiling workflows within the robot control. With the options of both, Cartesian impedance and position control, MIRO is suited for tele-manipulation, shared autonomy and completely autonomous procedures. 
This paper focuses on system and hardware design of the robot, supplemented with a brief description on new specific control methods for the MIRO robot.", "title": "" }, { "docid": "2f7e5807415398cb95f8f1ab36a0438f", "text": "We present a Convolutional Neural Network (CNN) regression based framework for 2-D/3-D medical image registration, which directly estimates the transformation parameters from image features extracted from the DRR and the X-ray images using learned hierarchical regressors. Our framework consists of learning and application stages. In the learning stage, CNN regressors are trained using supervised machine learning to reveal the correlation between the transformation parameters and the image features. In the application stage, CNN regressors are applied on extracted image features in a hierarchical manner to estimate the transformation parameters. Our experiment results demonstrate that the proposed method can achieve real-time 2-D/3-D registration with very high (i.e., sub-milliliter) accuracy.", "title": "" }, { "docid": "931969dc54170c203db23f55b45dfa38", "text": "The popularity and influence of reviews, make sites like Yelp ideal targets for malicious behaviors. We present Marco, a novel system that exploits the unique combination of social, spatial and temporal signals gleaned from Yelp, to detect venues whose ratings are impacted by fraudulent reviews. Marco increases the cost and complexity of attacks, by imposing a tradeoff on fraudsters, between their ability to impact venue ratings and their ability to remain undetected. We contribute a new dataset to the community, which consists of both ground truth and gold standard data. We show that Marco significantly outperforms state-of-the-art approaches, by achieving 94% accuracy in classifying reviews as fraudulent or genuine, and 95.8% accuracy in classifying venues as deceptive or legitimate. Marco successfully flagged 244 deceptive venues from our large dataset with 7,435 venues, 270,121 reviews and 195,417 users. Furthermore, we use Marco to evaluate the impact of Yelp events, organized for elite reviewers, on the hosting venues. We collect data from 149 Yelp elite events throughout the US. We show that two weeks after an event, twice as many hosting venues experience a significant rating boost rather than a negative impact. © 2015 Wiley Periodicals, Inc. Statistical Analysis and Data Mining 0: 000–000, 2015", "title": "" }, { "docid": "fddadfbc6c1b34a8ac14f8973f052da5", "text": "Abstract. Centroidal Voronoi tessellations are useful for subdividing a region in Euclidean space into Voronoi regions whose generators are also the centers of mass, with respect to a prescribed density function, of the regions. Their extensions to general spaces and sets are also available; for example, tessellations of surfaces in a Euclidean space may be considered. In this paper, a precise definition of such constrained centroidal Voronoi tessellations (CCVTs) is given and a number of their properties are derived, including their characterization as minimizers of an “energy.” Deterministic and probabilistic algorithms for the construction of CCVTs are presented and some analytical results for one of the algorithms are given. Computational examples are provided which serve to illustrate the high quality of CCVT point sets. 
Finally, CCVT point sets are applied to polynomial interpolation and numerical integration on the sphere.", "title": "" }, { "docid": "2839c318c9c2644edbd3e175bf9027b9", "text": "Multiple human tracking (MHT) is a fundamental task in many computer vision applications. Appearance-based approaches, primarily formulated on RGB data, are constrained and affected by problems arising from occlusions and/or illumination variations. In recent years, the arrival of cheap RGB-Depth (RGB-D) devices has led to many new approaches to MHT, and many of these integrate color and depth cues to improve each and every stage of the process. In this survey, we present the common processing pipeline of these methods and review their methodology based (a) on how they implement this pipeline and (b) on what role depth plays within each stage of it. We identify and introduce existing, publicly available, benchmark datasets and software resources that fuse color and depth data for MHT. Finally, we present a brief comparative evaluation of the performance of those works that have applied their methods to these datasets.", "title": "" }, { "docid": "eaca5794d84a96f8c8e7807cf83c3f00", "text": "Background Women represent 15% of practicing general surgeons. Gender-based discrimination has been implicated as discouraging women from surgery. We sought to determine women's perceptions of gender-based discrimination in the surgical training and working environment. Methods Following IRB approval, we fielded a pilot survey measuring perceptions and impact of gender-based discrimination in medical school, residency training, and surgical practice. It was sent electronically to 1,065 individual members of the Association of Women Surgeons. Results We received 334 responses from medical students, residents, and practicing physicians with a response rate of 31%. Eighty-seven percent experienced gender-based discrimination in medical school, 88% in residency, and 91% in practice. Perceived sources of gender-based discrimination included superiors, physician peers, clinical support staff, and patients, with 40% emanating from women and 60% from men. Conclusions The majority of responses indicated perceived gender-based discrimination during medical school, residency, and practice. Gender-based discrimination comes from both sexes and has a significant impact on women surgeons.", "title": "" }, { "docid": "b458269a0bc4a2d4bfc748ff07ffa753", "text": "Meta-analysis may be used to estimate an overall effect across a number of similar studies. A number of statistical techniques are currently used to combine individual study results. The simplest of these is based on a fixed effects model, which assumes the true effect is the same for all studies. A random effects model, however, allows the true effect to vary across studies, with the mean true effect the parameter of interest. We consider three methods currently used for estimation within the framework of a random effects model, and illustrate them by applying each method to a collection of six studies on the effect of aspirin after myocardial infarction. These methods are compared using estimated coverage probabilities of confidence intervals for the overall effect. 
The techniques considered all generally have coverages below the nominal level, and in particular it is shown that the commonly used DerSimonian and Laird method does not adequately reflect the error associated with parameter estimation, especially when the number of studies is small.", "title": "" }, { "docid": "9f0206aca2f3cccfb2ca1df629c32c7a", "text": "Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it's important to remember Box's maxim that \"All models are wrong but some are useful.\" We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a \"do it yourself kit\" for explanations, allowing a practitioner to directly answer \"what if questions\" or generate contrastive explanations without external assistance. Although a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.", "title": "" }, { "docid": "ab0d19b1cb4a0f5d283f67df35c304f4", "text": "OBJECTIVE\nWe compared temperament and character traits in children and adolescents with bipolar disorder (BP) and healthy control (HC) subjects.\n\n\nMETHOD\nSixty nine subjects (38 BP and 31 HC), 8-17 years old, were assessed with the Kiddie Schedule for Affective Disorders and Schizophrenia-Present and Lifetime. Temperament and character traits were measured with parent and child versions of the Junior Temperament and Character Inventory.\n\n\nRESULTS\nBP subjects scored higher on novelty seeking, harm avoidance, and fantasy subscales, and lower on reward dependence, persistence, self-directedness, and cooperativeness compared to HC (all p < 0.007), by child and parent reports. These findings were consistent in both children and adolescents. Higher parent-rated novelty seeking, lower self-directedness, and lower cooperativeness were associated with co-morbid attention-deficit/hyperactivity disorder (ADHD). Lower parent-rated reward dependence was associated with co-morbid conduct disorder, and higher child-rated persistence was associated with co-morbid anxiety.\n\n\nCONCLUSIONS\nThese findings support previous reports of differences in temperament in BP children and adolescents and may assist in a greater understating of BP children and adolescents beyond mood symptomatology.", "title": "" }, { "docid": "1898ce1b6cb3a195de2d261bfd8bd7ce", "text": "Unmanned aerial vehicles (UAV) are commonly used for missions in unknown environments, where an exact mathematical model of the environment may not be available. This paper provides a framework for using reinforcement learning to allow the UAV to navigate successfully in such environments. We conducted our simulation and real implementation to show how the UAVs can successfully learn to navigate through an unknown environment. Technical aspects regarding to applying reinforcement learning algorithm to a UAV system and UAV flight control were also addressed. 
This will enable continuing research using a UAV with learning capabilities in more important applications, such as wildfire monitoring, or search and rescue missions.", "title": "" }, { "docid": "a2f062482157efb491ca841cc68b7fd3", "text": "Coping with malware is getting more and more challenging, given their relentless growth in complexity and volume. One of the most common approaches in literature is using machine learning techniques, to automatically learn models and patterns behind such complexity, and to develop technologies to keep pace with malware evolution. This survey aims at providing an overview on the way machine learning has been used so far in the context of malware analysis in Windows environments, i.e. for the analysis of Portable Executables. We systematize surveyed papers according to their objectives (i.e., the expected output), what information about malware they specifically use (i.e., the features), and what machine learning techniques they employ (i.e., what algorithm is used to process the input and produce the output). We also outline a number of issues and challenges, including those concerning the used datasets, and identify the main current topical trends and how to possibly advance them. In particular, we introduce the novel concept of malware analysis economics, regarding the study of existing trade-offs among key metrics, such as analysis accuracy and economical costs.", "title": "" }, { "docid": "9f5f3423e062721e79c20db9710a986d", "text": "Reliable traffic light detection and classification is crucial for automated driving in urban environments. Currently, there are no systems that can reliably perceive traffic lights in real-time, without map-based information, and in sufficient distances needed for smooth urban driving. We propose a complete system consisting of a traffic light detector, tracker, and classifier based on deep learning, stereo vision, and vehicle odometry which perceives traffic lights in real-time. Within the scope of this work, we present three major contributions. The first is an accurately labeled traffic light dataset of 5000 images for training and a video sequence of 8334 frames for evaluation. The dataset is published as the Bosch Small Traffic Lights Dataset and uses our results as baseline. It is currently the largest publicly available labeled traffic light dataset and includes labels down to the size of only 1 pixel in width. The second contribution is a traffic light detector which runs at 10 frames per second on 1280×720 images. When selecting the confidence threshold that yields equal error rate, we are able to detect traffic lights as small as 4 pixels in width. The third contribution is a traffic light tracker which uses stereo vision and vehicle odometry to compute the motion estimate of traffic lights and a neural network to correct the aforementioned motion estimate.", "title": "" }, { "docid": "b6cc88bc123a081d580c9430c0ad0207", "text": "This paper presents a comparative survey of research activities and emerging technologies of solid-state fault current limiters for power distribution systems.", "title": "" }, { "docid": "9653346c41cab4e22c9987586bb155c1", "text": "The focus of the great majority of climate change impact studies is on changes in mean climate. In terms of climate model output, these changes are more robust than changes in climate variability. By concentrating on changes in climate means, the full impacts of climate change on biological and human systems are probably being seriously underestimated. 
Here, we briefly review the possible impacts of changes in climate variability and the frequency of extreme events on biological and food systems, with a focus on the developing world. We present new analysis that tentatively links increases in climate variability with increasing food insecurity in the future. We consider the ways in which people deal with climate variability and extremes and how they may adapt in the future. Key knowledge and data gaps are highlighted. These include the timing and interactions of different climatic stresses on plant growth and development, particularly at higher temperatures, and the impacts on crops, livestock and farming systems of changes in climate variability and extreme events on pest-weed-disease complexes. We highlight the need to reframe research questions in such a way that they can provide decision makers throughout the food system with actionable answers, and the need for investment in climate and environmental monitoring. Improved understanding of the full range of impacts of climate change on biological and food systems is a critical step in being able to address effectively the effects of climate variability and extreme events on human vulnerability and food security, particularly in agriculturally based developing countries facing the challenge of having to feed rapidly growing populations in the coming decades.", "title": "" } ]
scidocsrr
ec4cfc49f33587433f421a7dabc2003d
A Critical Evaluation of Website Fingerprinting Attacks
[ { "docid": "1272ee56c591f882c07817686621c0f8", "text": "Low-latency anonymization networks such as Tor and JAP claim to hide the recipient and the content of communications from a local observer, i.e., an entity that can eavesdrop the traffic between the user and the first anonymization node. Especially users in totalitarian regimes strongly depend on such networks to freely communicate. For these people, anonymity is particularly important and an analysis of the anonymization methods against various attacks is necessary to ensure adequate protection. In this paper we show that anonymity in Tor and JAP is not as strong as expected so far and cannot resist website fingerprinting attacks under certain circumstances. We first define features for website fingerprinting solely based on volume, time, and direction of the traffic. As a result, the subsequent classification becomes much easier. We apply support vector machines with the introduced features. We are able to improve recognition results of existing works on a given state-of-the-art dataset in Tor from 3% to 55% and in JAP from 20% to 80%. The datasets assume a closed-world with 775 websites only. In a next step, we transfer our findings to a more complex and realistic open-world scenario, i.e., recognition of several websites in a set of thousands of random unknown websites. To the best of our knowledge, this work is the first successful attack in the open-world scenario. We achieve a surprisingly high true positive rate of up to 73% for a false positive rate of 0.05%. Finally, we show preliminary results of a proof-of-concept implementation that applies camouflage as a countermeasure to hamper the fingerprinting attack. For JAP, the detection rate decreases from 80% to 4% and for Tor it drops from 55% to about 3%.", "title": "" } ]
[ { "docid": "cad72a5b8831796d2cef5bd256b821b1", "text": "This paper presents a linear chirp generator for synthesizing ultra-wideband signals for use in an FM-CW radar being used for airborne snow thickness measurements. Ultra-wideband chirp generators with rigorous linearity requirements are needed for long-range FMCW radars. The chirp generator is composed of a direct digital synthesizer and a frequency multiplier chain. The implementation approach combines recently available high-speed digital, mixed signal, and microwave components along with a frequency pre-distortion technique to synthesize a 6-GHz chirp signal over 240 μs with a <;0.02 MHz/μs deviation from linearity.", "title": "" }, { "docid": "3129b636e3739281ba59721765eeccb9", "text": "Despite the rapid adoption of Facebook as a means of photo sharing, minimal research has been conducted to understand user gratification behind this activity. In order to address this gap, the current study examines users’ gratifications in sharing photos on Facebook by applying Uses and Gratification (U&G) theory. An online survey completed by 368 respondents identified six different gratifications, namely, affection, attention seeking, disclosure, habit, information sharing, and social influence, behind sharing digital photos on Facebook. Some of the study’s prominent findings were: age was in positive correlation with disclosure and social influence gratifications; gender differences were identified among habit and disclosure gratifications; number of photos shared was negatively correlated with habit and information sharing gratifications. The study’s implications can be utilized to refine existing and develop new features and services bridging digital photos and social networking services.", "title": "" }, { "docid": "861e7e5b518681d8f09de17feb637bb7", "text": "Innovation starts with people, making the human capital within the workforce decisive. In a fastchanging knowledge economy, 21st-century digital skills drive organizations' competitiveness and innovation capacity. Although such skills are seen as crucial, the digital aspect integrated with 21stcentury skills is not yet sufficiently defined. The main objectives of this study were to (1) examine the relation between 21st-century skills and digital skills; and (2) provide a framework of 21st-century digital skills with conceptual dimensions and key operational components aimed at the knowledge worker. A systematic literature review was conducted to synthesize the relevant academic literature concerned with 21st-century digital skills. In total, 1592 different articles were screened from which 75 articles met the predefined inclusion criteria. The results show that 21st-century skills are broader than digital skills e the list of mentioned skills is far more extensive. In addition, in contrast to digital skills, 21st-century skills are not necessarily underpinned by ICT. Furthermore, we identified seven core skills: technical, information management, communication, collaboration, creativity, critical thinking and problem solving. Five contextual skills were also identified: ethical awareness, cultural awareness, flexibility, selfdirection and lifelong learning. © 2017 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "fdbe390730b949ccaa060a84257af2f1", "text": "An increase in the prevalence of chronic disease has led to a rise in the demand for primary healthcare services in many developed countries. 
Healthcare technology tools may provide the leverage to alleviate the shortage of primary care providers. Here we describe the development and usage of an automated healthcare kiosk for the management of patients with stable chronic disease in the primary care setting. One-hundred patients with stable chronic disease were recruited from a primary care clinic. They used a kiosk in place of doctors’ consultations for two subsequent follow-up visits. Patient and physician satisfaction with kiosk usage were measured on a Likert scale. Kiosk blood pressure measurements and triage decisions were validated and optimized. Patients were assessed if they could use the kiosk independently. Patients and physicians were satisfied with all areas of kiosk usage. Kiosk triage decisions were accurate by the 2nd month of the study. Blood pressure measurements by the kiosk were equivalent to that taken by a nurse (p = 0.30, 0.14). Independent kiosk usage depended on patients’ language skills and educational levels. Healthcare kiosks represent an alternative way to manage patients with stable chronic disease. They have the potential to replace physician visits and improve access to primary healthcare. Patients welcome the use of healthcare technology tools, including those with limited literacy and education. Optimization of environmental and patient factors may be required prior to the implementation of kiosk-based technology in the healthcare setting.", "title": "" }, { "docid": "9ffd665d6fe680fc4e7b9e57df48510c", "text": "BACKGROUND\nIn light of the increasing rate of dengue infections throughout the world despite vector-control measures, several dengue vaccine candidates are in development.\n\n\nMETHODS\nIn a phase 3 efficacy trial of a tetravalent dengue vaccine in five Latin American countries where dengue is endemic, we randomly assigned healthy children between the ages of 9 and 16 years in a 2:1 ratio to receive three injections of recombinant, live, attenuated, tetravalent dengue vaccine (CYD-TDV) or placebo at months 0, 6, and 12 under blinded conditions. The children were then followed for 25 months. The primary outcome was vaccine efficacy against symptomatic, virologically confirmed dengue (VCD), regardless of disease severity or serotype, occurring more than 28 days after the third injection.\n\n\nRESULTS\nA total of 20,869 healthy children received either vaccine or placebo. At baseline, 79.4% of an immunogenicity subgroup of 1944 children had seropositive status for one or more dengue serotypes. In the per-protocol population, there were 176 VCD cases (with 11,793 person-years at risk) in the vaccine group and 221 VCD cases (with 5809 person-years at risk) in the control group, for a vaccine efficacy of 60.8% (95% confidence interval [CI], 52.0 to 68.0). In the intention-to-treat population (those who received at least one injection), vaccine efficacy was 64.7% (95% CI, 58.7 to 69.8). Serotype-specific vaccine efficacy was 50.3% for serotype 1, 42.3% for serotype 2, 74.0% for serotype 3, and 77.7% for serotype 4. Among the severe VCD cases, 1 of 12 was in the vaccine group, for an intention-to-treat vaccine efficacy of 95.5%. Vaccine efficacy against hospitalization for dengue was 80.3%. 
The safety profile for the CYD-TDV vaccine was similar to that for placebo, with no marked difference in rates of adverse events.\n\n\nCONCLUSIONS\nThe CYD-TDV dengue vaccine was efficacious against VCD and severe VCD and led to fewer hospitalizations for VCD in five Latin American countries where dengue is endemic. (Funded by Sanofi Pasteur; ClinicalTrials.gov number, NCT01374516.).", "title": "" }, { "docid": "d45c7f39c315bf5e8eab3052e75354bb", "text": "Predicting the future in real-world settings, particularly from raw sensory observations such as images, is exceptionally challenging. Real-world events can be stochastic and unpredictable, and the high dimensionality and complexity of natural images require the predictive model to build an intricate understanding of the natural world. Many existing methods tackle this problem by making simplifying assumptions about the environment. One common assumption is that the outcome is deterministic and there is only one plausible future. This can lead to low-quality predictions in real-world settings with stochastic dynamics. In this paper, we develop a stochastic variational video prediction (SV2P) method that predicts a different possible future for each sample of its latent variables. To the best of our knowledge, our model is the first to provide effective stochastic multi-frame prediction for real-world videos. We demonstrate the capability of the proposed method in predicting detailed future frames of videos on multiple real-world datasets, both action-free and action-conditioned. We find that our proposed method produces substantially improved video predictions when compared to the same model without stochasticity, and to other stochastic video prediction methods. Our SV2P implementation will be open sourced upon publication.", "title": "" }, { "docid": "c357b9646e31e2d881c0832983593516", "text": "The history of digital image compositing—other than simple digital implementation of known film art—is essentially the history of the alpha channel. Distinctions are drawn between digital printing and digital compositing, between matte creation and matte usage, and between (binary) masking and (subtle) matting. The history of the integral alpha channel and premultiplied alpha ideas are presented and their importance in the development of digital compositing in its current modern form is made clear. Basic Definitions Digital compositing is often confused with several related technologies. Here we distinguish compositing from printing and matte creation—eg, blue-screen matting. Printing v Compositing Digital film printing is the transfer, under digital computer control, of an image stored in digital form to standard chemical, analog movie film. It requires a sophisticated understanding of film characteristics, light source characteristics, precision film movements, film sizes, filter characteristics, precision scanning devices, and digital computer control. We had to solve all these for the Lucasfilm laser-based digital film printer—that happened to be a digital film input scanner too. My colleague David DiFrancesco was honored by the Academy of Motion Picture Art and Sciences last year with a technical award for his achievement on the scanning side at Lucasfilm (along with Gary Starkweather). Also honored was Gary Demos for his CRT-based digital film scanner (along with Dan Cameron). Digital printing is the generalization of this technology to other media, such as video and paper. 
Digital film compositing is the combining of two or more strips of film—in digital form—to create a resulting strip of film—in digital form—that is the composite of the components. For example, several spacecraft may have been filmed, one per film strip in its separate motion, and a starfield may have also been filmed. Then a digital film compositing step is performed to combine the separate spacecrafts over the starfield. The important point is that none of the technology mentioned above for digital film printing is involved in the digital compositing process. The separate spacecraft elements are digitally represented, and the starfield is digitally represented, so the composite is a strictly digital computation. Digital compositing is the generalization of this technology to other media. Alpha and the History of Digital Compositing Microsoft Tech Memo 7 Alvy 2 This only means that the digital images being combined are represented in resolutions appropriate to their intended final output medium; the compositing techniques involved are the same regardless of output medium being, after all, digital computations. No knowledge of film characteristics, light sources characteristics, film movements, etc. is required for digital compositing. In short, the technology of digital film printing is completely separate from the technology of digital film compositing. The technology of digital film scanning is required, perhaps, to get the spacecrafts and starfield into digital form, and that of digital film printing is required to write the composite of these elements out to film, but the composite itself is a computation, not a physico-chemical process. This argument holds regardless of input or output media. In fact, from hereon I will refer to film as my example, it being clear that the argument generalizes to other media. Matte Creation v Matte Usage The general distinction drawn here is between the technology of pulling mattes, or matte creation, and that of compositing, or matte usage. To perform a film composite of, say a spacecraft, over, say a starfield, one must know where on an output film frame to write the foreground spacecraft and where to write the background starfield—that is, where to expose the foreground element to the unexposed film frame and where to expose the background element. We will ignore for the moment, for the purpose of clarity, the problem of partial transparencies of the foreground object that allow the background object to show through partially. In classic film technology, predating the computer by decades ([Beyer64], [Fielding72], [Vlahos80]), the required spatial information is provided by a (traveling) matte, another piece of film that is transparent where the spacecraft, for example, exists in the frame and opaque elsewhere. This can be done with monochrome film. It is also easy to generate the complement of this matte, sometimes called the holdout matte, by simply exposing the matte film strip to an unexposed strip of monochrome film. So the holdout matte film strip is placed up against the background film strip, in frame by frame register, called a bipack configuration of film, and exposed to a strip of unexposed color film. The starfield, for example, gets exposed to this receiving strip where the holdout matte does not hold out—that is, where the holdout matte is transparent. Then the same strip of film is re-exposed to a bipack consisting of the matte and the foreground element. 
This time the spacecraft, for example, gets exposed exactly where the starfield was not exposed. Digital film compositing technology is, in its simplest implementation, the digital version of this process, where each strip of film is replaced with a digital equivalent, and the composite is done with a digital computation. Once the foreground and background elements are in digital form and the matte is in digital form, then digital film compositing is a computation, not a physico-chemical process. As we shall see, the computer has caused several fundamentally new Alpha and the History of Digital Compositing Microsoft Tech Memo 7 Alvy 3 ideas to be added to the compositor’s arsenal that are not simply simulations of known analog art. The question becomes: Where does the matte come from? There are several classic (pre-computer) answers to this question. One set of techniques (at least one of which, the sodium vapor technique, was invented by Petro Vlahos [Vlahos58]) causes the generation of the matte strip of film simultaneously with the foreground element strip of film. So this technique simultaneously generates two strips of film for each foreground element. Then optical techniques are used, as described above, to form the composite. Digital technology has nothing new to contribute here; it simply emulates the analog technique. Another technique called blue-screen matting provides the matte strip of film after the fact, so to speak. Blue-screen matting (or more generally, constant color matting, since blue is not required) was also invented by Petro Vlahos [Vlahos64]. It requires that a foreground element be filmed against a constant-color, often bright ultramarine blue, background. Then with a tricky set of optical and film techniques that don’t need to concern us here, a matte is generated that is transparent where the the foreground film strip is the special blue color and opaque elsewhere, or the complement of this. There are digital simulations of this technique that are complicated but involve nothing more than a digital computer to accomplish. The art of generating a matte when one is not provided is often called, in filmmaking circles, pulling a matte. It is an art, requiring experts to accomplish1. I will generalize this concept to all ways of producing a matte, and term it matte creation. The important point is that matte creation is a technology separate from that of compositing, which is a technology that assumes a matte already exists. In short, the technology of matte creation is completely separate from the technology of digital film compositing. Petro Vlahos has been awarded by the Academy of Motion Picture Arts and Sciences for his inventions of this technology, a lifetime achievement award in fact. The digital computer can be used to simulate what he has done and for relatively minor improvements. At Lucasfilm, my colleague Tom Porter and I implemented digital matte creation techniques and improved them, but do not consider this part of our compositing technology. It is part of our matte creation technology. It is time now to return to the discussion of transparency mentioned earlier. One of the hardest things to accomplish in matte creation technology is the representation of partial transparency in the matte. Transparencies are important for foreground elements such as glasses of water, windows, hair, halos, filmy clothes, motion blurred objects, etc. 
I will not go into the details of why this is difficult or how it is solved, because that is irrelevant to the arguments here. The important points are (1) partial transparency is fundamental to convincing com1 I have proved, in fact, in [Smith82b] that blue-screen matting is an underspecified problem in general and therefore requires a human in the loop. Alpha and the History of Digital Compositing Microsoft Tech Memo 7 Alvy 4 posites, and (2) representing transparencies in a matte is part matte creation technology, not the compositing technology, which just uses the result.", "title": "" }, { "docid": "45f895841ad08bd4473025385e57073a", "text": "Robust brain magnetic resonance (MR) segmentation algorithms are critical to analyze tissues and diagnose tumor and edema in a quantitative way. In this study, we present a new tissue segmentation algorithm that segments brain MR images into tumor, edema, white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). The detection of the healthy tissues is performed simultaneously with the diseased tissues because examining the change caused by the spread of tumor and edema on healthy tissues is very important for treatment planning. We used T1, T2, and FLAIR MR images of 20 subjects suffering from glial tumor. We developed an algorithm for stripping the skull before the segmentation process. The segmentation is performed using self-organizing map (SOM) that is trained with unsupervised learning algorithm and fine-tuned with learning vector quantization (LVQ). Unlike other studies, we developed an algorithm for clustering the SOM instead of using an additional network. Input feature vector is constructed with the features obtained from stationary wavelet transform (SWT) coefficients. The results showed that average dice similarity indexes are 91% for WM, 87% for GM, 96% for CSF, 61% for tumor, and 77% for edema.", "title": "" }, { "docid": "90cbb02beb09695320d7ab72d709b70e", "text": "Domain adaptation learning aims to solve the classification problems of unlabeled target domain by using rich labeled samples in source domain, but there are three main problems: negative transfer, under adaptation and under fitting. Aiming at these problems, a domain adaptation network based on hypergraph regularized denoising autoencoder (DAHDA) is proposed in this paper. To better fit the data distribution, the network is built with denoising autoencoder which can extract more robust feature representation. In the last feature and classification layers, the marginal and conditional distribution matching terms between domains are obtained via maximum mean discrepancy measurement to solve the under adaptation problem. To avoid negative transfer, the hypergraph regularization term is introduced to explore the high-order relationships among data. The classification performance of the model can be improved by preserving the statistical property and geometric structure simultaneously. Experimental results of 16 cross-domain transfer tasks verify that DAHDA outperforms other state-of-the-art methods.", "title": "" }, { "docid": "1a7b0df571b07927141a2e61314054ae", "text": "We propose a new method of power control for interference limited wireless networks with Rayleigh fading of both the desired and interference signals. Our method explictly takes into account the statistical variation of both the received signal and interference power, and optimally allocates power subject to constraints on the probability of fading induced outage for each transmitter/receiver pair. 
We establish several results for this type of problem. For the case in which the only constraints are those on the outage probabilities, we give a fast iterative method for finding the optimal power allocation. We establish tight bounds that relate the outage probability caused by channel fading to the signal-to-interference margin calculated when the statistical variation of the signal and intereference powers are ignored. This allows us to show that well-known methods for allocating power, based on Perron-Frobenius eigenvalue theory, can be used to determine power allocations that are provably close to achieving optimal (i.e., minimal) outage probability. In the most general case, which includes bounds on powers and other constraints, we show that the power control problem can be posed as a geometric program, which is a special type of optimization problem that can be transformed to a nonlinear convex optimization by a change of variables, and therefore solved globally and efficiently by recently developed interior-point methods.", "title": "" }, { "docid": "4f98e0a0d11796abcf04a448701b0444", "text": "BACKGROUND\nThe Alzheimer's Disease Assessment Scale (ADAS) was designed as a rating scale for the severity of dysfunction in the cognitive and non-cognitive behaviours that are characteristic of persons with Alzheimer's disease. Its subscale, the ADAS-cog, is a cognitive testing instrument most widely used to measure the impact of the disease. However, the ADAS-cog takes more than 45 min to administer and requires a qualified clinical psychologist as the rater. A more comprehensive rating battery is therefore required. In the present study, we developed a computerized test battery named the Touch Panel-type Dementia Assessment Scale (TDAS), which was intended to substitute for the ADAS-Cog, and was specifically designed to rate cognitive dysfunction quickly and without the need of a specialist rater.\n\n\nMETHODS\nThe hardware for the TDAS comprises a 14-inch touch panel display and computer devices built into one case. The TDAS runs on Windows OS and was bundled with a custom program made with reference to the ADAS-cog. Participants in the present study were 34 patients with Alzheimer's disease. Each participant was administered the ADAS-cog and the TDAS. The test scores for each patient were compared to determine whether the severity of cognitive dysfunction of the patients could be rated equally as well by both tests.\n\n\nRESULTS\nPearson's correlation coefficient showed a significant correlation between the total scores (r= 0.69, P < 0.01) on the two scales for each patient. The Kendall coefficients of concordance obtained for the three corresponding pairs of tasks (word recognition, orientation, and naming object and fingers) showed the three TDAS tasks can rate symptoms of cognitive decline equally as well as the corresponding items on the ADAS-cog.\n\n\nCONCLUSIONS\nThe TDAS appears to be a sensitive and comprehensive assessment battery for rating the symptoms of Alzheimer's disease, and can be substituted for the ADAS-cog.", "title": "" }, { "docid": "be35c342291d4805d2a5333e31ee26d6", "text": "References • We study efficient exploration in reinforcement learning. • Most provably-efficient learning algorithms introduce optimism about poorly understood states and actions. • Motivated by potential advantages relative to optimistic algorithms, we study an alternative approach: posterior sampling for reinforcement learning (PSRL). 
• This is the extension of the Thompson sampling algorithm for multi-armed bandit problems to reinforcement learning. • We establish the first regret bounds for this algorithm.  Conceptually simple, separates algorithm from analysis: • PSRL selects policies according to the probability they are optimal without need for explicit construction of confidence sets. • UCRL2 bounds error in each s, a separately, which allows for worst-case mis-estimation to occur simultaneously in every s, a . • We believe this will make PSRL more statistically efficient.", "title": "" }, { "docid": "4350da9c0b2debf7ff9b117a9d9d3dbb", "text": "Purpose – The aim of this paper is to consider some of the issues in light of the application of Big Data in the domain of border security and immigration management. Investment in the technologies of borders and their securitisation continues to be a focal point for many governments across the globe. This paper is concerned with a particular example of such technologies, namely, “Big Data” analytics. In the past two years, the technology of Big Data has gained a remarkable popularity within a variety of sectors, ranging from business and government to scientific and research fields. While Big Data techniques are often extolled as the next frontier for innovation and productivity, they are also raising many ethical issues. Design/methodology/approach – The author draws on the example of the new Big Data solution recently developed by IBM for the Australian Customs and Border Protection Service. The system, which relies on data collected from Passenger Name Records, aims to facilitate and automate mechanisms of profiling enable the identification of “high-risk” travellers. It is argued that the use of such Big Data techniques risks augmenting the function and intensity of borders. Findings – The main concerns addressed here revolve around three key elements, namely, the problem of categorisation, the projective and predictive nature of Big Data techniques and their approach to the future and the implications of Big Data on understandings and practices of identity. Originality/value – By exploring these issues, the paper aims to contribute to the debates on the impact of information and communications technology-based surveillance in border management.", "title": "" }, { "docid": "c839542db0e80ce253a170a386d91bab", "text": "Description\nThe American College of Physicians (ACP) developed this guideline to present the evidence and provide clinical recommendations on the management of gout.\n\n\nMethods\nUsing the ACP grading system, the committee based these recommendations on a systematic review of randomized, controlled trials; systematic reviews; and large observational studies published between January 2010 and March 2016. Clinical outcomes evaluated included pain, joint swelling and tenderness, activities of daily living, patient global assessment, recurrence, intermediate outcomes of serum urate levels, and harms.\n\n\nTarget Audience and Patient Population\nThe target audience for this guideline includes all clinicians, and the target patient population includes adults with acute or recurrent gout.\n\n\nRecommendation 1\nACP recommends that clinicians choose corticosteroids, nonsteroidal anti-inflammatory drugs (NSAIDs), or colchicine to treat patients with acute gout. (Grade: strong recommendation, high-quality evidence).\n\n\nRecommendation 2\nACP recommends that clinicians use low-dose colchicine when using colchicine to treat acute gout. 
(Grade: strong recommendation, moderate-quality evidence).\n\n\nRecommendation 3\nACP recommends against initiating long-term urate-lowering therapy in most patients after a first gout attack or in patients with infrequent attacks. (Grade: strong recommendation, moderate-quality evidence).\n\n\nRecommendation 4\nACP recommends that clinicians discuss benefits, harms, costs, and individual preferences with patients before initiating urate-lowering therapy, including concomitant prophylaxis, in patients with recurrent gout attacks. (Grade: strong recommendation, moderate-quality evidence).", "title": "" }, { "docid": "4d6540d6a200689721063bb7a92b71c3", "text": "The recently-developed statistical method known as the \"bootstrap\" can be used to place confidence intervals on phylogenies. It involves resampling points from one's own data, with replacement, to create a series of bootstrap samples of the same size as the original data. Each of these is analyzed, and the variation among the resulting estimates taken to indicate the size of the error involved in making estimates from the original data. In the case of phylogenies, it is argued that the proper method of resampling is to keep all of the original species while sampling characters with replacement, under the assumption that the characters have been independently drawn by the systematist and have evolved independently. Majority-rule consensus trees can be used to construct a phylogeny showing all of the inferred monophyletic groups that occurred in a majority of the bootstrap samples. If a group shows up 95% of the time or more, the evidence for it is taken to be statistically significant. Existing computer programs can be used to analyze different bootstrap samples by using weights on the characters, the weight of a character being how many times it was drawn in bootstrap sampling. When all characters are perfectly compatible, as envisioned by Hennig, bootstrap sampling becomes unnecessary; the bootstrap method would show significant evidence for a group if it is defined by three or more characters.", "title": "" }, { "docid": "085db8b346c8d7875bccca5d4052192f", "text": "BACKGROUND\nTopical antipsoriatics are recommended first-line treatment of psoriasis, but rates of adherence are low. Patient support by use of electronic health (eHealth) services is suggested to improve medical adherence.\n\n\nOBJECTIVE\nTo review randomised controlled trials (RCTs) testing eHealth interventions designed to improve adherence to topical antipsoriatics and to review applications for smartphones (apps) incorporating the word psoriasis.\n\n\nMATERIAL AND METHODS\nLiterature review: Medline, Embase, Cochrane, PsycINFO and Web of Science were searched using search terms for eHealth, psoriasis and topical antipsoriatics. General analysis of apps: The operating systems (OS) for smartphones, iOS, Google Play, Microsoft Store, Symbian OS and Blackberry OS were searched for apps containing the word psoriasis.\n\n\nRESULTS\nLiterature review: Only one RCT was included, reporting on psoriasis patients' Internet reporting their status of psoriasis over a 12-month period. The rate of adherence was measured by Medication Event Monitoring System (MEMS®). An improvement in medical adherence and reduction of severity of psoriasis were reported. 
General analysis of apps: A total 184 apps contained the word psoriasis.\n\n\nCONCLUSION\nThere is a critical need for high-quality RCTs testing if the ubiquitous eHealth technologies, for example, some of the numerous apps, can improve psoriasis patients' rates of adherence to topical antipsoriatics.", "title": "" }, { "docid": "cfcc5b98ebebe08475d68667aacaf46f", "text": "Sequence alignment is an important task in bioinformatics which involves typical database search where data is in the form of DNA, RNA or protein sequence. For alignment various methods have been devised starting from pairwise alignment to multiple sequence alignment (MSA). To perform multiple sequence alignment various methods exists like progressive, iterative and concepts of dynamic programming in which we use Needleman Wunsch and Smith Waterman algorithms. This paper discusses various sequence alignment methods including their advantages and disadvantages. The alignment results of DNA sequence of chimpanzee and gorilla are shown.", "title": "" }, { "docid": "8c8120beecf9086f3567083f89e9dfa2", "text": "This thesis studies the problem of product name recognition from short product descriptions. This is an important problem especially with the increasing use of ERP (Enterprise Resource Planning) software at the core of modern business management systems, where the information of business transactions is stored in unstructured data stores. A solution to the problem of product name recognition is especially useful for the intermediate businesses as they are interested in finding potential matches between the items in product catalogs (produced by manufacturers or another intermediate business) and items in the product requests (given by the end user or another intermediate business). In this context the problem of product name recognition is specifically challenging because product descriptions are typically short, ungrammatical, incomplete, abbreviated and multilingual. In this thesis we investigate the application of supervised machine-learning techniques and gazetteer-based techniques to our problem. To approach the problem, we define it as a classification problem where the tokens of product descriptions are classified into I, O and B classes according to the standard IOB tagging scheme. Next we investigate and compare the performance of a set of hybrid solutions that combine machine learning and gazetteer-based approaches. We study a solution space that uses four learning models: linear and non-linear SVC, Random Forest, and AdaBoost. For each solution, we use the same set of features. We divide the features into four categories: token-level features, documentlevel features, gazetteer-based features and frequency-based features. Moreover, we use automatic feature selection to reduce the dimensionality of data; that consequently improves the training efficiency and avoids over-fitting. To be able to evaluate the solutions, we develop a machine learning framework that takes as its inputs a list of predefined solutions (i.e. our solution space) and a preprocessed labeled dataset (i.e. a feature vector X, and a corresponding class label vector Y). It automatically selects the optimal number of most relevant features, optimizes the hyper-parameters of the learning models, trains the learning models, and evaluates the solution set. 
We believe that our automated machine learning framework can effectively be used as an AutoML framework that automates most of the decisions that have to be made in the design process of a machine learning", "title": "" }, { "docid": "3c0b072b1b2c5082552aff2379bbeeee", "text": "Big Data is a recent research style which brings up challenges in decision making process. The size of the dataset turn intotremendously big, the process of extracting valuablefacts by analyzing these data also has become tedious. To solve this problem of information extraction with Big Data, parallel programming models can be used. Parallel Programming model achieves information extraction by partitioning the huge data into smaller chunks. MapReduce is one of the parallel programming models which works well with Hadoop Distributed File System(HDFS) that can be used to partition the data in a more efficient and effective way. In MapReduce, once the data is partitioned based on the <key, value> pair, it is ready for data analytics. Time Series data play an important role in Big Data Analytics where Time Series analysis can be performed with many machine learning algorithms as well as traditional algorithmic concepts such as regression, exponential smoothing, moving average, classification, clustering and model-based recommendation. For Big Data, these algorithms can be used with MapReduce programming model on Hadoop clusters by translating their data analytics logic to the MapReduce job which is to be run over Hadoop clusters. But Time Series data are sequential in nature so that the partitioning of Time Series data must be carefully done to retain its prediction accuracy.In this paper, a novel parallel approach to forecast Time Series data with Holt-Winters model (PAFHW) is proposed and the proposed approach PAFHW is enhanced by combining K-means clusteringfor forecasting the Time Series data in distributed environment.", "title": "" } ]
scidocsrr
87fa281fc1b05466979cc4b3577e5e96
From Shapeshifter to Lava Monster: Gender Stereotypes in Disney's Moana
[ { "docid": "6f1d7e2faff928c80898bfbf05ac0669", "text": "This study examined level of engagement with Disney Princess media/products as it relates to gender-stereotypical behavior, body esteem (i.e. body image), and prosocial behavior during early childhood. Participants consisted of 198 children (Mage  = 58 months), who were tested at two time points (approximately 1 year apart). Data consisted of parent and teacher reports, and child observations in a toy preference task. Longitudinal results revealed that Disney Princess engagement was associated with more female gender-stereotypical behavior 1 year later, even after controlling for initial levels of gender-stereotypical behavior. Parental mediation strengthened associations between princess engagement and adherence to female gender-stereotypical behavior for both girls and boys, and for body esteem and prosocial behavior for boys only.", "title": "" } ]
[ { "docid": "2b169a32d20bb4af5527be41837f17f7", "text": "This paper introduces a two-switch flyback-forward pulse-width modulated (PWM) DC-DC converter along with the steady-state analysis, simplified design procedure, and experimental verification. The proposed converter topology is the result of integrating the secondary sides of the two-switch flyback and the two-switch forward converters in an anti-parallel connection, while retaining the two-main switches and the clamping diodes on a single winding primary side. The hybrid two-switch flyback-forward converter shares the semiconductor devices on the primary side and the magnetic component on the secondary side resulting in a low volume DC-DC converter with reduced switch voltage stress. Simulation and experimental results are given for a 10-V/30-W, 100 kHz laboratory prototype to verify the theoretical analysis.", "title": "" }, { "docid": "aae7c62819cb70e21914486ade94a762", "text": "From failure experience on power transformers very often it was suspected that inrush currents, occurring when energizing unloaded transformers, were the reason for damage. In this paper it was investigated how mechanical forces within the transformer coils build up under inrush compared to those occurring at short circuit. 2D and 3D computer modeling for a real 268 MVA, 525/17.75 kV three-legged step up transformer were employed. The results show that inrush current peaks of 70% of the rated short circuit current cause local forces in the same order of magnitude as those at short circuit. The resulting force summed up over the high voltage coil is even three times higher. Although inrush currents are normally smaller, the forces can have similar amplitudes as those at short circuit, with longer exposure time, however. Therefore, care has to be taken to avoid such high inrush currents. Today controlled switching offers an elegant and practical solution.", "title": "" }, { "docid": "0fcefddfe877b804095838eb9de9581d", "text": "This paper examines the torque ripple and cogging torque variation in surface-mounted permanent-magnet synchronous motors (PMSMs) with skewed rotor. The effect of slot/pole combinations and magnet shapes on the magnitude and harmonic content of torque waveforms in a PMSM drive has been studied. Finite element analysis and experimental results show that the skewing with steps does not necessarily reduce the torque ripple but may cause it to increase for certain magnet designs and configurations. The electromagnetic torque waveforms, including cogging torque, have been analyzed for four different PMSM configurations having the same envelop dimensions and output requirements.", "title": "" }, { "docid": "857e9430ebc5cf6aad2737a0ce10941e", "text": "Despite a long tradition of effectiveness in laboratory tests, normative messages have had mixed success in changing behavior in field contexts, with some studies showing boomerang effects. To test a theoretical account of this inconsistency, we conducted a field experiment in which normative messages were used to promote household energy conservation. As predicted, a descriptive normative message detailing average neighborhood usage produced either desirable energy savings or the undesirable boomerang effect, depending on whether households were already consuming at a low or high rate. Also as predicted, adding an injunctive message (conveying social approval or disapproval) eliminated the boomerang effect. 
The results offer an explanation for the mixed success of persuasive appeals based on social norms and suggest how such appeals should be properly crafted.", "title": "" }, { "docid": "95d1a35068e7de3293f8029e8b8694f9", "text": "Botnet is one of the major threats on the Internet for committing cybercrimes, such as DDoS attacks, stealing sensitive information, spreading spams, etc. It is a challenging issue to detect modern botnets that are continuously improving for evading detection. In this paper, we propose a machine learning based botnet detection system that is shown to be effective in identifying P2P botnets. Our approach extracts convolutional version of effective flow-based features, and trains a classification model by using a feed-forward artificial neural network. The experimental results show that the accuracy of detection using the convolutional features is better than the ones using the traditional features. It can achieve 94.7% of detection accuracy and 2.2% of false positive rate on the known P2P botnet datasets. Furthermore, our system provides an additional confidence testing for enhancing performance of botnet detection. It further classifies the network traffic of insufficient confidence in the neural network. The experiment shows that this stage can increase the detection accuracy up to 98.6% and decrease the false positive rate up to 0.5%.", "title": "" }, { "docid": "4cde522275c034a8025c75d144a74634", "text": "Novel sentence detection aims at identifying novel information from an incoming stream of sentences. Our research applies named entity recognition (NER) and part-of-speech (POS) tagging on sentence-level novelty detection and proposes a mixed method to utilize these two techniques. Furthermore, we discuss the performance when setting different history sentence sets. Experimental results of different approaches on TREC'04 Novelty Track show that our new combined method outperforms some other novelty detection methods in terms of precision and recall. The experimental observations of each approach are also discussed.", "title": "" }, { "docid": "d1525fdab295a16d5610210e80fb8104", "text": "The analysis of big data requires powerful, scalable, and accurate data analytics techniques that the traditional data mining and machine learning do not have as a whole. Therefore, new data analytics frameworks are needed to deal with the big data challenges such as volumes, velocity, veracity, variety of the data. Distributed data mining constitutes a promising approach for big data sets, as they are usually produced in distributed locations, and processing them on their local sites will reduce significantly the response times, communications, etc. In this paper, we propose to study the performance of a distributed clustering, called Dynamic Distributed Clustering (DDC). DDC has the ability to remotely generate clusters and then aggregate them using an efficient aggregation algorithm. The technique is developed for spatial datasets. We evaluated the DDC using two types of communications (synchronous and asynchronous), and tested using various load distributions. 
The experimental results show that the approach has super-linear speed-up, scales up very well, and can take advantage of the recent programming models, such as MapReduce model, as its results are not affected by the types of communications.", "title": "" }, { "docid": "1982db485fbef226a5a1b839fa9bf12e", "text": "The photopigment in the human eye that transduces light for circadian and neuroendocrine regulation, is unknown. The aim of this study was to establish an action spectrum for light-induced melatonin suppression that could help elucidate the ocular photoreceptor system for regulating the human pineal gland. Subjects (37 females, 35 males, mean age of 24.5 +/- 0.3 years) were healthy and had normal color vision. Full-field, monochromatic light exposures took place between 2:00 and 3:30 A.M. while subjects' pupils were dilated. Blood samples collected before and after light exposures were quantified for melatonin. Each subject was tested with at least seven different irradiances of one wavelength with a minimum of 1 week between each nighttime exposure. Nighttime melatonin suppression tests (n = 627) were completed with wavelengths from 420 to 600 nm. The data were fit to eight univariant, sigmoidal fluence-response curves (R(2) = 0.81-0.95). The action spectrum constructed from these data fit an opsin template (R(2) = 0.91), which identifies 446-477 nm as the most potent wavelength region providing circadian input for regulating melatonin secretion. The results suggest that, in humans, a single photopigment may be primarily responsible for melatonin suppression, and its peak absorbance appears to be distinct from that of rod and cone cell photopigments for vision. The data also suggest that this new photopigment is retinaldehyde based. These findings suggest that there is a novel opsin photopigment in the human eye that mediates circadian photoreception.", "title": "" }, { "docid": "a70d064af5e8c5842b8ca04abc3fb2d6", "text": "In the current scenario of cloud computing, heterogeneous resources are located in various geographical locations requiring security-aware resource management to handle security threats. However, existing techniques are unable to protect systems from security attacks. To provide a secure cloud service, a security-based resource management technique is required that manages cloud resources automatically and delivers secure cloud services. In this paper, we propose a self-protection approach in cloud resource management called SECURE, which offers self-protection against security attacks and ensures continued availability of services to authorized users. The performance of SECURE has been evaluated using SNORT. The experimental results demonstrate that SECURE performs effectively in terms of both the intrusion detection rate and false positive rate. Further, the impact of security on quality of service (QoS) has been analyzed.", "title": "" }, { "docid": "170e2b0f15d9485bb3c00026c6c384a8", "text": "Chatbots are a rapidly expanding application of dialogue systems with companies switching to bot services for customer support, and new applications for users interested in casual conversation. One style of casual conversation is argument; many people love nothing more than a good argument. Moreover, there are a number of existing corpora of argumentative dialogues, annotated for agreement and disagreement, stance, sarcasm and argument quality. 
This paper introduces Debbie, a novel arguing bot, that selects arguments from conversational corpora, and aims to use them appropriately in context. We present an initial working prototype of Debbie, with some preliminary evaluation and describe future work.", "title": "" }, { "docid": "8244bb1d75e550beb417049afb1ff9d5", "text": "Electronically available data on the Web is exploding at an ever increasing pace. Much of this data is unstructured, which makes searching hard and traditional database querying impossible. Many Web documents, however, contain an abundance of recognizable constants that together describe the essence of a document’s content. For these kinds of data-rich, multiple-record documents (e.g. advertisements, movie reviews, weather reports, travel information, sports summaries, financial statements, obituaries, and many others) we can apply a conceptual-modeling approach to extract and structure data automatically. The approach is based on an ontology—a conceptual model instance—that describes the data of interest, including relationships, lexical appearance, and context keywords. By parsing the ontology, we can automatically produce a database scheme and recognizers for constants and keywords, and then invoke routines to recognize and extract data from unstructured documents and structure it according to the generated database scheme. Experiments show that it is possible to achieve good recall and precision ratios for documents that are rich in recognizable constants and narrow in ontological breadth. Our approach is less labor-intensive than other approaches that manually or semiautomatically generate wrappers, and it is generally insensitive to changes in Web-page format.", "title": "" }, { "docid": "4ecb2bd91312598428745851cac90d64", "text": "In large parking area attached to shopping malls and so on, it is difficult to find a vacant parking space. In addition, searching for parking space during long time leads to drivers stress and wasteful energy loss. In order to solve these problems, the navigation system in parking area by using ZigBee networks is proposed in this paper. The ZigBee is expected to realize low power consumption wireless system with low cost. Moreover, the ZigBee can form ad-hoc network easily and more than 65000 nodes can connect at the same time. Therefore, it is suitable for usage in the large parking area. In proposed system, the shortest route to the vacant parking space is transmitted to the own vehicle by the ZigBee ad-hoc network. Thus, the efficient guide is provided to the drivers. To show the effectiveness of the proposed parking system, the average time for arrival in the parking area is evaluated, and the performance of the vehicles that equips the ZigBee terminals is compared with the ordinary vehicles that do not equip the ZigBee terminals.", "title": "" }, { "docid": "c998270736000da12e509103af2c70ec", "text": "Flash memory grew from a simple concept in the early 1980s to a technology that generated close to $23 billion in worldwide revenue in 2007, and this represents one of the many success stories in the semiconductor industry. This success was made possible by the continuous innovation of the industry along many different fronts. In this paper, the history, the basic science, and the successes of flash memories are briefly presented. 
Flash memories have followed the Moore’s Law scaling trend for which finer line widths, achieved by improved lithographic resolution, enable more memory bits to be produced for the same silicon area, reducing cost per bit. When looking toward the future, significant challenges exist to the continued scaling of flash memories. In this paper, I discuss possible areas that need development in order to overcome some of the size-scaling challenges. Innovations are expected to continue in the industry, and flash memories will continue to follow the historical trend in cost reduction of semiconductor memories through the rest of this decade.", "title": "" }, { "docid": "d1756aa5f0885157bdad130d96350cd3", "text": "In this paper, we describe the winning approach for the RecSys Challenge 2015. Our key points are (1) two-stage classification, (2) massive usage of categorical features, (3) strong classifiers built by gradient boosting and (4) threshold optimization based directly on the competition score. We describe our approach and discuss how it can be used to build scalable personalization systems.", "title": "" }, { "docid": "e9b036925d05faa55b55ec8711715296", "text": "Chest X-rays is one of the most commonly available and affordable radiological examinations in clinical practice. While detecting thoracic diseases on chest X-rays is still a challenging task for machine intelligence, due to 1) the highly varied appearance of lesion areas on X-rays from patients of different thoracic disease and 2) the shortage of accurate pixel-level annotations by radiologists for model training. Existing machine learning methods are unable to deal with the challenge that thoracic diseases usually happen in localized disease-specific areas. In this article, we propose a weakly supervised deep learning framework equipped with squeeze-and-excitation blocks, multi-map transfer and max-min pooling for classifying common thoracic diseases as well as localizing suspicious lesion regions on chest X-rays. The comprehensive experiments and discussions are performed on the ChestX-ray14 dataset. Both numerical and visual results have demonstrated the effectiveness of proposed model and its better performance against the state-of-the-art pipelines.", "title": "" }, { "docid": "940e7dc630b7dcbe097ade7abb2883a4", "text": "Modern object detection methods typically rely on bounding box proposals as input. While initially popularized in the 2D case, this idea has received increasing attention for 3D bounding boxes. Nevertheless, existing 3D box proposal techniques all assume having access to depth as input, which is unfortunately not always available in practice. In this paper, we therefore introduce an approach to generating 3D box proposals from a single monocular RGB image. To this end, we develop an integrated, fully differentiable framework that inherently predicts a depth map, extracts a 3D volumetric scene representation and generates 3D object proposals. At the core of our approach lies a novel residual, differentiable truncated signed distance function module, which, accounting for the relatively low accuracy of the predicted depth map, extracts a 3D volumetric representation of the scene. 
Our experiments on the standard NYUv2 dataset demonstrate that our framework lets us generate high-quality 3D box proposals and that it outperforms the two-stage technique consisting of successively performing state-of-the-art depth prediction and depthbased 3D proposal generation.", "title": "" }, { "docid": "fb7961117dae98e770e0fe84c33673b9", "text": "Named-Entity Recognition (NER) aims at identifying the fragments of a given text that mention a given entity of interest. This manuscript presents our Minimal named-Entity Recognizer (MER), designed with flexibility, autonomy and efficiency in mind. To annotate a given text, MER only requires a lexicon (text file) with the list of terms representing the entities of interest; and a GNU Bash shell grep and awk tools. MER was deployed in a cloud infrastructure using multiple Virtual Machines to work as an annotation server and participate in the Technical Interoperability and Performance of annotation Servers (TIPS) task of BioCreative V.5. Preliminary results show that our solution processed each document (text retrieval and annotation) in less than 3 seconds on average without using any type of cache. MER is publicly available in a GitHub repository (https://github.com/lasigeBioTM/MER) and through a RESTful Web service (http://labs.fc.ul.pt/mer/).", "title": "" }, { "docid": "26b0fd17e691a1a95e4c08aa53167b43", "text": "We propose Teacher-Student Curriculum Learning (TSCL), a framework for automatic curriculum learning, where the Student tries to learn a complex task and the Teacher automatically chooses subtasks from a given set for the Student to train on. We describe a family of Teacher algorithms that rely on the intuition that the Student should practice more those tasks on which it makes the fastest progress, i.e. where the slope of the learning curve is highest. In addition, the Teacher algorithms address the problem of forgetting by also choosing tasks where the Student’s performance is getting worse. We demonstrate that TSCL matches or surpasses the results of carefully hand-crafted curricula in two tasks: addition of decimal numbers with LSTM and navigation in Minecraft. Using our automatically generated curriculum enabled to solve a Minecraft maze that could not be solved at all when training directly on solving the maze, and the learning was an order of magnitude faster than uniform sampling of subtasks.", "title": "" }, { "docid": "428c480be4ae3d2043c9f5485087c4af", "text": "Current difference-expansion (DE) embedding techniques perform one layer embedding in a difference image. They do not turn to the next difference image for another layer embedding unless the current difference image has no expandable differences left. The obvious disadvantage of these techniques is that image quality may have been severely degraded even before the later layer embedding begins because the previous layer embedding has used up all expandable differences, including those with large magnitude. Based on integer Haar wavelet transform, we propose a new DE embedding algorithm, which utilizes the horizontal as well as vertical difference images for data hiding. We introduce a dynamical expandable difference search and selection mechanism. This mechanism gives even chances to small differences in two difference images and effectively avoids the situation that the largest differences in the first difference image are used up while there is almost no chance to embed in small differences of the second difference image. 
We also present an improved histogram-based difference selection and shifting scheme, which refines our algorithm and makes it resilient to different types of images. Compared with current algorithms, the proposed algorithm often has better embedding capacity versus image quality performance. The advantage of our algorithm is more obvious near the embedding rate of 0.5 bpp.", "title": "" } ]
scidocsrr
c6e5af0540d26129576a7e1e371d5528
Why Do Social Media Users Share Misinformation?
[ { "docid": "85d4675562eb87550c3aebf0017e7243", "text": "Online social media are complementing and in some cases replacing person-to-person social interaction and redefining the diffusion of information. In particular, microblogs have become crucial grounds on which public relations, marketing, and political battles are fought. We introduce an extensible framework that will enable the real-time analysis of meme diffusion in social media by mining, visualizing, mapping, classifying, and modeling massive streams of public microblogging events. We describe a Web service that leverages this framework to track political memes in Twitter and help detect astroturfing, smear campaigns, and other misinformation in the context of U.S. political elections. We present some cases of abusive behaviors uncovered by our service. Finally, we discuss promising preliminary results on the detection of suspicious memes via supervised learning based on features extracted from the topology of the diffusion networks, sentiment analysis, and crowdsourced annotations.", "title": "" }, { "docid": "9948738a487ed899ec50ac292e1f9c6d", "text": "A Web survey of 1,715 college students was conducted to examine Facebook Groups users' gratifications and the relationship between users' gratifications and their political and civic participation offline. A factor analysis revealed four primary needs for participating in groups within Facebook: socializing, entertainment, self-status seeking, and information. These gratifications vary depending on user demographics such as gender, hometown, and year in school. The analysis of the relationship between users' needs and civic and political participation indicated that, as predicted, informational uses were more correlated to civic and political action than to recreational uses.", "title": "" }, { "docid": "96bb4155000096c1cba6285ad82c9a4d", "text": "0747-5632/$ see front matter 2011 Elsevier Ltd. A doi:10.1016/j.chb.2011.10.002 ⇑ Corresponding author. Tel.: +65 6790 6636; fax: + E-mail addresses: leecs@ntu.edu.sg (C.S. Lee), malo 1 Tel.: +65 67905772; fax: +65 6791 5214. Recent events indicate that sharing news in social media has become a phenomenon of increasing social, economic and political importance because individuals can now participate in news production and diffusion in large global virtual communities. Yet, knowledge about factors influencing news sharing in social media remains limited. Drawing from the uses and gratifications (U&G) and social cognitive theories (SCT), this study explored the influences of information seeking, socializing, entertainment, status seeking and prior social media sharing experience on news sharing intention. A survey was designed and administered to 203 students in a large local university. Results from structural equation modeling (SEM) analysis revealed that respondents who were driven by gratifications of information seeking, socializing, and status seeking were more likely to share news in social media platforms. Prior experience with social media was also a significant determinant of news sharing intention. Implications and directions for future work are discussed. 2011 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "28b493b0f30c6605ff0c22ccea5d2ace", "text": "A serious threat today is malicious executables. It is designed to damage computer system and some of them spread over network without the knowledge of the owner using the system. Two approaches have been derived for it i.e. Signature Based Detection and Heuristic Based Detection. These approaches performed well against known malicious programs but cannot catch the new malicious programs. Different researchers have proposed methods using data mining and machine learning for detecting new malicious programs. The method based on data mining and machine learning has shown good results compared to other approaches. This work presents a static malware detection system using data mining techniques such as Information Gain, Principal component analysis, and three classifiers: SVM, J48, and Naïve Bayes. For overcoming the lack of usual anti-virus products, we use methods of static analysis to extract valuable features of Windows PE file. We extract raw features of Windows executables which are PE header information, DLLs, and API functions inside each DLL of Windows PE file. Thereafter, Information Gain, calling frequencies of the raw features are calculated to select valuable subset features, and then Principal Component Analysis is used for dimensionality reduction of the selected features. By adopting the concepts of machine learning and data-mining, we construct a static malware detection system which has a detection rate of 99.6%.", "title": "" }, { "docid": "170f14fbf337186c8bd9f36390916d2e", "text": "In this paper, we draw upon two sets of theoretical resources to develop a comprehensive theory of sexual offender rehabilitation named the Good Lives Model-Comprehensive (GLM-C). The original Good Lives Model (GLM-O) forms the overarching values and principles guiding clinical practice in the GLM-C. In addition, the latest sexual offender theory (i.e., the Integrated Theory of Sexual Offending; ITSO) provides a clear etiological grounding for these principles. The result is a more substantial and improved rehabilitation model that is able to conceptually link latest etiological theory with clinical practice. Analysis of the GLM-C reveals that it also has the theoretical resources to secure currently used self-regulatory treatment practice within a meaningful structure. D 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "45494f14c2d9f284dd3ad3a5be49ca78", "text": "Developing segmentation techniques for overlapping cells has become a major hurdle for automated analysis of cervical cells. In this paper, an automated three-stage segmentation approach to segment the nucleus and cytoplasm of each overlapping cell is described. First, superpixel clustering is conducted to segment the image into small coherent clusters that are used to generate a refined superpixel map. The refined superpixel map is passed to an adaptive thresholding step to initially segment the image into cellular clumps and background. Second, a linear classifier with superpixel-based features is designed to finalize the separation between nuclei and cytoplasm. Finally, edge and region based cell segmentation are performed based on edge enhancement process, gradient thresholding, morphological operations, and region properties evaluation on all detected nuclei and cytoplasm pairs. The proposed framework has been evaluated using the ISBI 2014 challenge dataset. The dataset consists of 45 synthetic cell images, yielding 270 cells in total. 
Compared with the state-of-the-art approaches, our approach provides more accurate nuclei boundaries, as well as successfully segments most of overlapping cells.", "title": "" }, { "docid": "60718ad958d65eb60a520d516f1dd4ea", "text": "With the advent of the Internet, more and more public universities in Malaysia are putting in effort to introduce e-learning in their respective universities. Using a structured questionnaire derived from the literature, data was collected from 250 undergraduate students from a public university in Penang, Malaysia. Data was analyzed using AMOS version 16. The results of the structural equation model indicated that service quality (β = 0.20, p < 0.01), information quality (β = 0.37, p < 0.01) and system quality (β = 0.20, p < 0.01) were positively related to user satisfaction explaining a total of 45% variance. The second regression analysis was to examine the impact of user satisfaction on continuance intention. The results showed that satisfaction (β = 0.31, p < 0.01), system quality (β = 0.18, p < 0.01) and service quality (β = 0.30, p < 0.01) were positively related to continuance intention explaining 44% of the variance. Implications from these findings to e-learning system developers and implementers were further elaborated.", "title": "" }, { "docid": "e425bba0f3ab24c226ab8881f3fe0780", "text": "We present a new method for solving total variation (TV) minimization problems in image restoration. The main idea is to remove some of the singularity caused by the nondifferentiability of the quantity |∇u| in the definition of the TV-norm before we apply a linearization technique such as Newton’s method. This is accomplished by introducing an additional variable for the flux quantity appearing in the gradient of the objective function, which can be interpreted as the normal vector to the level sets of the image u. Our method can be viewed as a primal-dual method as proposed by Conn and Overton [A Primal-Dual Interior Point Method for Minimizing a Sum of Euclidean Norms, preprint, 1994] and Andersen [Ph.D. thesis, Odense University, Denmark, 1995] for the minimization of a sum of Euclidean norms. In addition to possessing local quadratic convergence, experimental results show that the new method seems to be globally convergent.", "title": "" }, { "docid": "9c2609adae64ec8d0b4e2cc987628c05", "text": "We propose a novel method capable of retrieving clips from untrimmed videos based on natural language queries. This cross-modal retrieval task plays a key role in visual-semantic understanding, and requires localizing clips in time and computing their similarity to the query sentence. Current methods generate sentence and video embeddings and then compare them using a late fusion approach, but this ignores the word order in queries and prevents more fine-grained comparisons. Motivated by the need for fine-grained multi-modal feature fusion, we propose a novel early fusion embedding approach that combines video and language information at the word level. Furthermore, we use the inverse task of dense video captioning as a side-task to improve the learned embedding. Our full model combines these components with an efficient proposal pipeline that performs accurate localization of potential video clips. 
We present a comprehensive experimental validation on two large-scale text-to-clip datasets (Charades-STA and DiDeMo) and attain state-ofthe-art retrieval results with our model.", "title": "" }, { "docid": "38f75a17a30c1d3c08dc316cb8a3e4ac", "text": "There are often problems when students enter a course with widely different experience levels with key course topics. If the material is covered too slowly, those with greater experience get bored and lose interest. If the material is covered too quickly, those with less experience get lost and feel incompetent. This problem with incoming students of our Computer Science Major led us to create CS 0.5: an introductory Computer Science course to target those CS majors who have little or no background with programming. Our goal is to provide these students with an engaging curriculum and prepare them to keep pace in future courses with those students who enter with a stronger background.\n Following the lead of Mark Guzdial's work on using media computation for non-majors at Georgia Tech, we use media computation as the tool to provide this engaging curriculum. We report here on our experience to date using the CS 0.5 approach with a media computation course.", "title": "" }, { "docid": "edfb50c784e6e7a89ce12d524f667398", "text": "Unconventional machining processes (communally named advanced or modern machining processes) are widely used by manufacturing industries. These advanced machining processes allow producing complex profiles and high quality-products. However, several process parameters should be optimized to achieve this end. In this paper, the optimization of process parameters of two conventional and four advanced machining processes is investigated: drilling process, grinding process, abrasive jet machining (AJM), abrasive water jet machining (AWJM), ultrasonic machining (USM), and water jet machining (WJM), respectively. This research employed two bio-inspired algorithms called the cuckoo optimization algorithm (COA) and the hoopoe heuristic (HH) to optimize the machining control parameters of these processes. The obtained results are compared with other optimization algorithms described and applied in the literature.", "title": "" }, { "docid": "959f2723ba18e71b2f4acd6108350dd3", "text": "The manufacturing, converting and ennobling processes of paper are truly large area and reel-to-reel processes. Here, we describe a project focusing on using the converting and ennobling processes of paper in order to introduce electronic functions onto the paper surface. As key active electronic materials we are using organic molecules and polymers. We develop sensor, communication and display devices on paper and the main application areas are packaging and paper display applications.", "title": "" }, { "docid": "4afbb5f877f3920dccdf60f6f4dfbf91", "text": "Handling degenerate rotation-only camera motion is a challenge for keyframe-based simultaneous localization and mapping with six degrees of freedom. Existing systems usually filter corresponding keyframe candidates, resulting in mapping starvation and tracking failure. We propose to employ these otherwise discarded keyframes to build up local panorama maps registered in the 3D map. Thus, the system is able to maintain tracking during rotational camera motions. Additionally, we seek to actively associate panoramic and 3D map data for improved 3D mapping through the triangulation of more new 3D map features. 
We demonstrate the efficacy of our approach in several evaluations that show how the combined system handles rotation only camera motion while creating larger and denser maps compared to a standard SLAM system.", "title": "" }, { "docid": "f6a24aa476ec27b86e549af6d30f22b6", "text": "Designing autonomous robotic systems able to manipulate deformable objects without human intervention constitutes a challenging area of research. The complexity of interactions between a robot manipulator and a deformable object originates from a wide range of deformation characteristics that have an impact on varying degrees of freedom. Such sophisticated interaction can only take place with the assistance of intelligent multisensory systems that combine vision data with force and tactile measurements. Hence, several issues must be considered at the robotic and sensory levels to develop genuine dexterous robotic manipulators for deformable objects. This chapter presents a thorough examination of the modern concepts developed by the robotic community related to deformable objects grasping and manipulation. Since the convention widely adopted in the literature is often to extend algorithms originally proposed for rigid objects, a comprehensive coverage on the new trends on rigid objects manipulation is initially proposed. State-of-the-art techniques on robotic interaction with deformable objects are then examined and discussed. The chapter proposes a critical evaluation of the manipulation algorithms, the instrumentation systems adopted and the examination of end-effector technologies, including dexterous robotic hands. The motivation for this review is to provide an extensive appreciation of state-of-the-art solutions to help researchers and developers determine the best possible options when designing autonomous robotic systems to interact with deformable objects. Typically in a robotic setup, when robot manipulators are programmed to perform their tasks, they must have a complete knowledge about the exact structure of the manipulated object (shape, surface texture, rigidity) and about its location in the environment (pose). For some of these tasks, the manipulator becomes in contact with the object. Hence, interaction forces and moments are developed and consequently these interaction forces and moments, as well as the position of the end-effector, must be controlled, which leads to the concept of “force controlled manipulation” (Natale, 2003). There are different control strategies used in 28", "title": "" }, { "docid": "9c74807f3c1a5b0928ade3f9e3c1229d", "text": "Current perception systems of intelligent vehicles not only make use of visual sensors, but also take advantage of depth sensors. Extrinsic calibration of these heterogeneous sensors is required for fusing information obtained separately by vision sensors and light detection and ranging (LIDARs). In this paper, an optimal extrinsic calibration algorithm between a binocular stereo vision system and a 2-D LIDAR is proposed. Most extrinsic calibration methods between cameras and a LIDAR proceed by calibrating separately each camera with the LIDAR. We show that by placing a common planar chessboard with different poses in front of the multisensor system, the extrinsic calibration problem is solved by a 3-D reconstruction of the chessboard and geometric constraints between the views from the stereovision system and the LIDAR. Furthermore, our method takes sensor noise into account that it provides optimal results under Mahalanobis distance constraints. 
To evaluate the performance of the algorithm, experiments based on both computer simulation and real datasets are presented and analyzed. The proposed approach is also compared with a popular camera/LIDAR calibration method to show the benefits of our method.", "title": "" }, { "docid": "cc1f6ab87bdf7edd4f6e2c024988a838", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Versions of published Taylor & Francis and Routledge Open articles and Taylor & Francis and Routledge Open Select articles posted to institutional or subject repositories or any other third-party website are without warranty from Taylor & Francis of any kind, either expressed or implied, including, but not limited to, warranties of merchantability, fitness for a particular purpose, or non-infringement. Any opinions and views expressed in this article are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor & Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.", "title": "" }, { "docid": "e3c3f3fb3dd432017bf92e0fe5f7c341", "text": "This study aimed to evaluate the accuracy of intraoral scanners in full-arch scans. A representative model with 14 prepared abutments was digitized using an industrial scanner (reference scanner) as well as four intraoral scanners (iTero, CEREC AC Bluecam, Lava C.O.S., and Zfx IntraScan). Datasets obtained from different scans were loaded into 3D evaluation software, superimposed, and compared for accuracy. One-way analysis of variance (ANOVA) was implemented to compute differences within groups (precision) as well as comparisons with the reference scan (trueness). A level of statistical significance of p < 0.05 was set. Mean trueness values ranged from 38 to 332.9 μm. Data analysis yielded statistically significant differences between CEREC AC Bluecam and other scanners as well as between Zfx IntraScan and Lava C.O.S. Mean precision values ranged from 37.9 to 99.1 μm. Statistically significant differences were found between CEREC AC Bluecam and Lava C.O.S., CEREC AC Bluecam and iTero, Zfx Intra Scan and Lava C.O.S., and Zfx Intra Scan and iTero (p < 0.05). Except for one intraoral scanner system, all tested systems showed a comparable level of accuracy for full-arch scans of prepared teeth. Further studies are needed to validate the accuracy of these scanners under clinical conditions. Despite excellent accuracy in single-unit scans having been demonstrated, little is known about the accuracy of intraoral scanners in simultaneous scans of multiple abutments. 
Although most of the tested scanners showed comparable values, the results suggest that the inaccuracies of the obtained datasets may contribute to inaccuracies in the final restorations.", "title": "" }, { "docid": "652536bf512c975b7cb61e60a3246829", "text": "OBJECTIVE\nInterventions to prevent type 2 diabetes should be directed toward individuals at increased risk for the disease. To identify such individuals without laboratory tests, we developed the Diabetes Risk Score.\n\n\nRESEARCH DESIGN AND METHODS\nA random population sample of 35- to 64-year-old men and women with no antidiabetic drug treatment at baseline were followed for 10 years. New cases of drug-treated type 2 diabetes were ascertained from the National Drug Registry. Multivariate logistic regression model coefficients were used to assign each variable category a score. The Diabetes Risk Score was composed as the sum of these individual scores. The validity of the score was tested in an independent population survey performed in 1992 with prospective follow-up for 5 years.\n\n\nRESULTS\nAge, BMI, waist circumference, history of antihypertensive drug treatment and high blood glucose, physical activity, and daily consumption of fruits, berries, or vegetables were selected as categorical variables. Complete baseline risk data were found in 4435 subjects with 182 incident cases of diabetes. The Diabetes Risk Score value varied from 0 to 20. To predict drug-treated diabetes, the score value >or=9 had sensitivity of 0.78 and 0.81, specificity of 0.77 and 0.76, and positive predictive value of 0.13 and 0.05 in the 1987 and 1992 cohorts, respectively.\n\n\nCONCLUSIONS\nThe Diabetes Risk Score is a simple, fast, inexpensive, noninvasive, and reliable tool to identify individuals at high risk for type 2 diabetes.", "title": "" }, { "docid": "f34e0d226da243a2752bb65c0174f0c9", "text": "We used echo state networks, a subclass of recurrent neural networks, to predict stock prices of the S&P 500. Our network outperformed a Kalman filter, predicting more of the higher frequency fluctuations in stock price. The Challenge of Time Series Prediction Learning from past history is a fudamentality ill-posed. A model may fit past data well but not perform well when presented with new inputs. With recurrent neural networks (RNNs), we leverage the modeling abilities of neural networks (NNs) for time series forecastings. Feedforward NNs have done well in classification tasks such as handwriting recognition, however in dynamical environments, we need techniques that account for history. In RNNs, signals passing through recurrent connections constitute an effective memory for the network, which can then use information in memory to better predict future time series values. Unfortunately, RNNs are difficult to train. Traditional techniques used with feedforward NNs such as backpropagation fail to yield acceptable performance. However, subsets of RNNs that are more amenable to training have been developed in the emerging field known as reservoir computing. In reservoir computing, the recurrent connections of the network are viewed as a fixed reservoir used to map inputs into a high dimensional, dynamical space–a similar idea to the support vector machine. With a sufficiently high dimensional space, a simple linear decode can be used to approximate any function varying with time. Two reservoir networks known as Echo State Networks (ESNs) and Liquid State Machines (LSMs) have met with success in modeling nonlinear dynamical systems [2, 4]. 
We focus on the former, ESN, in this project and use it to predict stock prices and compare its performance to a Kalman filter. In an ESN, only the output weights are trained (see Figure 1). Echo State Network Implementation The state vector, x(t), of the network is governed by x(t+ 1) = f ( W u(t) +Wx(t) +W y(t) ) , (1) where f(·) = tanh(·), W in describes the weights connecting the inputs to the network, u(t) is the input vector, W describes the recurrent weights, W fb describes the feedback weights connecting the outputs back to the network, and y(t) are the outputs. The output y(t) is governed by y(t) = W z(t), where z(t) = [x(t),u(t)] is the extended state. By including the input vector, the extended state allows the network to use a linear combination of the inputs in addition to the state to form the output. ESN creation follows the procedure outlined in [3]. Briefly, 1. Initialize network of N reservoir units with random W , W , and W .", "title": "" }, { "docid": "475fc34de30b8310a6eb2aba176f33fa", "text": "A novel compact broadband water dense patch antenna with relatively thick air layer is introduced. The distilled water with high permittivity is located on the top of the low-loss, low-permittivity supporting substrate to provide an electric wall boundary. The dense water patch antenna is excited with cavity mode, reducing the impact of dielectric loss of the water on the antenna efficiency. The designs of loading the distilled water and T-shaped shorting sheet are applied for size reduction. The wide bandwidth is attributed to the coupling L-shaped probe, proper size of the coupled T-shaped shorting sheet, and thick air layer. As a result, the dimensions of the water patch are only 0.146 λ0 × 0.078 λ0 × 0.056 λ0. The proposed antenna has a high radiation up to 70% over the lower frequency band of 4G mobile communication from 690 to 960 MHz. Good agreements are achieved between the measured results and the simulated results.", "title": "" }, { "docid": "29fcfb65f54678ed79d9712ed5755cb8", "text": "Recent studies show that the popularity of the pairs trading strategy has been growing and it may pose a problem as the opportunities to trade become much smaller. Therefore, the optimization of pairs trading strategy has gained widespread attention among high-frequency traders. In this paper, using reinforcement learning, we examine the optimum level of pairs trading specifications over time.More specifically, the reinforcement learning agent chooses the optimum level of parameters of pairs trading to maximize the objective function. Results are obtained by applying a combination of the reinforcement learning method and cointegration approach. We find that boosting pairs trading specifications by using the proposed approach significantly overperform the previous methods. Empirical results based on the comprehensive intraday data which are obtained from S&P500 constituent stocks confirm the efficiently of our proposed method. Communicated by V. Loia. 
B Hasan Hakimian hasan.hakimian@ut.ac.ir Saeid Fallahpour falahpor@ut.ac.ir Khalil Taheri k.taheri@ut.ac.ir Ehsan Ramezanifar e.ramezanifar@maastrichtuniversity.nl 1 Department of Finance, Faculty of Management, University of Tehran, Tehran, Iran 2 Advanced Robotics and Intelligent Systems Laboratory, School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran 3 Department of Finance, School of Business and Economics, Maastricht, The Netherlands", "title": "" }, { "docid": "b6fc3332243aa421fbe812e5c4698dc9", "text": "BACKGROUND\nStatistical shape models are widely used in biomedical research. They are routinely implemented for automatic image segmentation or object identification in medical images. In these fields, however, the acquisition of the large training datasets, required to develop these models, is usually a time-consuming process. Even after this effort, the collections of datasets are often lost or mishandled resulting in replication of work.\n\n\nOBJECTIVE\nTo solve these problems, the Virtual Skeleton Database (VSD) is proposed as a centralized storage system where the data necessary to build statistical shape models can be stored and shared.\n\n\nMETHODS\nThe VSD provides an online repository system tailored to the needs of the medical research community. The processing of the most common image file types, a statistical shape model framework, and an ontology-based search provide the generic tools to store, exchange, and retrieve digital medical datasets. The hosted data are accessible to the community, and collaborative research catalyzes their productivity.\n\n\nRESULTS\nTo illustrate the need for an online repository for medical research, three exemplary projects of the VSD are presented: (1) an international collaboration to achieve improvement in cochlear surgery and implant optimization, (2) a population-based analysis of femoral fracture risk between genders, and (3) an online application developed for the evaluation and comparison of the segmentation of brain tumors.\n\n\nCONCLUSIONS\nThe VSD is a novel system for scientific collaboration for the medical image community with a data-centric concept and semantically driven search option for anatomical structures. The repository has been proven to be a useful tool for collaborative model building, as a resource for biomechanical population studies, or to enhance segmentation algorithms.", "title": "" }, { "docid": "ee9ca88d092538a399d192cf1b9e9df6", "text": "The new user problem in recommender systems is still challenging, and there is not yet a unique solution that can be applied in any domain or situation. In this paper we analyze viable solutions to the new user problem in collaborative filtering (CF) that are based on the exploitation of user personality information: (a) personality-based CF, which directly improves the recommendation prediction model by incorporating user personality information, (b) personality-based active learning, which utilizes personality information for identifying additional useful preference data in the target recommendation domain to be elicited from the user, and (c) personality-based cross-domain recommendation, which exploits personality information to better use user preference data from auxiliary domains which can be used to compensate the lack of user preference data in the target domain. We benchmark the effectiveness of these methods on large datasets that span several domains, namely movies, music and books. 
Our results show that personality-aware methods achieve performance improvements that range from 6 to 94 % for users completely new to the system, while increasing the novelty of the recommended items by 3–40 % with respect to the non-personalized popularity baseline. We also discuss the limitations of our approach and the situations in which the proposed methods can be better applied, hence providing guidelines for researchers and practitioners in the field.", "title": "" } ]
scidocsrr
b93873a378ef697f0aea212862afa464
Practical and Optimal LSH for Angular Distance
[ { "docid": "0c4ca5a63c7001e6275b05da7771a7a6", "text": "We present a new data structure for the c-approximate near neighbor problem (ANN) in the Euclidean space. For n points in R, our algorithm achieves Oc(n + d log n) query time and Oc(n + d log n) space, where ρ ≤ 0.73/c + O(1/c) + oc(1). This is the first improvement over the result by Andoni and Indyk (FOCS 2006) and the first data structure that bypasses a locality-sensitive hashing lower bound proved by O’Donnell, Wu and Zhou (ICS 2011). By known reductions we obtain a data structure for the Hamming space and l1 norm with ρ ≤ 0.73/c+O(1/c) + oc(1), which is the first improvement over the result of Indyk and Motwani (STOC 1998). Thesis Supervisor: Piotr Indyk Title: Professor of Electrical Engineering and Computer Science", "title": "" } ]
[ { "docid": "1f60109ccff855da33e8540b40f2d3d3", "text": "Nonnegative matrix factorization (NMF) is a widely-used method for multivariate analysis of nonnegative data, the goal of which is decompose a data matrix into a basis matrix and an encoding variable matrix with all of these matrices allowed to have only nonnegative elements. In this paper we present simple algorithms for orthogonal NMF, where orthogonality constraints are imposed on basis matrix or encoding matrix. We develop multiplicative updates directly from the true gradient (natural gradient) in Stiefel manifold, whereas existing algorithms consider additive orthogonality constraints. Numerical experiments on face image data for a image representation task show that our orthogonal NMF algorithm preserves the orthogonality, while the goodness-of-fit (GOF) is minimized. We also apply our orthogonal NMF to a clustering task, showing that it works better than the original NMF, which is confirmed by experiments on several UCI repository data sets.", "title": "" }, { "docid": "da8e929b1599b3241e75e4a1ead06207", "text": "The knowledge pyramid has been used for several years to illustrate the hierarchical relationships between data, information, knowledge, and wisdom. An earlier version of this paper presented a revised knowledge-KM pyramid that included processes such as filtering and sense making, reversed the pyramid by positing there was more knowledge than data, and showed knowledge management as an extraction of the pyramid. This paper expands the revised knowledge pyramid to include the Internet of Things and Big Data. The result is a revision of the data aspect of the knowledge pyramid. Previous thought was of data as reflections of reality as recorded by sensors. Big Data and the Internet of Things expand sensors and readings to create two layers of data. The top layer of data is the traditional transaction / operational data and the bottom layer of data is an expanded set of data reflecting massive data sets and sensors that are near mirrors of reality. The result is a knowledge pyramid that appears as an hourglass.", "title": "" }, { "docid": "4828e830d440cb7a2c0501952033da2f", "text": "This paper presents a current-mode control non-inverting buck-boost converter. The proposed circuit is controlled by the current mode and operated in three operation modes which are buck, buck-boost, and boost mode. The operation mode is automatically determined by the ratio between the input and output voltages. The proposed circuit is simulated by HSPICE with 0.5 um standard CMOS parameters. Its input voltage range is 2.5–5 V, and the output voltage range is 1.5–5 V. The maximum efficiency is 92% when it operates in buck mode.", "title": "" }, { "docid": "ad4b137253407e4323e288b65b03bd08", "text": "We formulate a document summarization method to extract passage-level answers for non-factoid queries, referred to as answer-biased summaries. We propose to use external information from related Community Question Answering (CQA) content to better identify answer bearing sentences. Three optimization-based methods are proposed: (i) query-biased, (ii) CQA-answer-biased, and (iii) expanded-query-biased, where expansion terms were derived from related CQA content. A learning-to-rank-based method is also proposed that incorporates a feature extracted from related CQA content. 
Our results show that even if a CQA answer does not contain a perfect answer to a query, their content can be exploited to improve the extraction of answer-biased summaries from other corpora. The quality of CQA content is found to impact on the accuracy of optimization-based summaries, though medium quality answers enable the system to achieve a comparable (and in some cases superior) accuracy to state-of-the-art techniques. The learning-to-rank-based summaries, on the other hand, are not significantly influenced by CQA quality. We provide a recommendation of the best use of our proposed approaches in regard to the availability of different quality levels of related CQA content. As a further investigation, the reliability of our approaches was tested on another publicly available dataset.", "title": "" }, { "docid": "5b76f50ef9745ef03205d3657e6fd3cd", "text": "In this paper we present preliminary results and future directions of work for a project in which we are building an RFID based system to sense and monitor free weight exercises.", "title": "" }, { "docid": "19477ceed88d44ea8b068a4826382f44", "text": "In the era of big data, the applications generating tremendous amount of data are becoming the main focus of attention as the wide increment of data generation and storage that has taken place in the last few years. This scenario is challenging for data mining techniques which are not arrogated to the new space and time requirements. In many of the real world applications, classification of imbalanced data-sets is the point of attraction. Most of the classification methods focused on two-class imbalanced problem. So, it is necessary to solve multi-class imbalanced problem, which exist in real-world domains. In the proposed work, we introduced a methodology for classification of multi-class imbalanced data. This methodology consists of two steps: In first step we used Binarization techniques (OVA and OVO) for decomposing original dataset into subsets of binary classes. In second step, the SMOTE algorithm is applied against each subset of imbalanced binary class in order to get balanced data. Finally, to achieve classification goal Random Forest (RF) classifier is used. Specifically, oversampling technique is adapted to big data using MapReduce so that this technique is able to handle as large data-set as needed. An experimental study is carried out to evaluate the performance of proposed method. For experimental analysis, we have used different datasets from UCI repository and the proposed system is implemented on Apache Hadoop and Apache Spark platform. The results obtained shows that proposed method outperforms over other methods.", "title": "" }, { "docid": "97fb823e7b74ac0bfcc99455d801e7ec", "text": "In the fifth generation (5G) of wireless communication systems, hitherto unprecedented requirements are expected to be satisfied. As one of the promising techniques of addressing these challenges, non-orthogonal multiple access (NOMA) has been actively investigated in recent years. In contrast to the family of conventional orthogonal multiple access (OMA) schemes, the key distinguishing feature of NOMA is to support a higher number of users than the number of orthogonal resource slots with the aid of non-orthogonal resource allocation. This may be realized by the sophisticated inter-user interference cancellation at the cost of an increased receiver complexity. 
In this paper, we provide a comprehensive survey of the original birth, the most recent development, and the future research directions of NOMA. Specifically, the basic principle of NOMA will be introduced at first, with the comparison between NOMA and OMA especially from the perspective of information theory. Then, the prominent NOMA schemes are discussed by dividing them into two categories, namely, power-domain and code-domain NOMA. Their design principles and key features will be discussed in detail, and a systematic comparison of these NOMA schemes will be summarized in terms of their spectral efficiency, system performance, receiver complexity, etc. Finally, we will highlight a range of challenging open problems that should be solved for NOMA, along with corresponding opportunities and future research trends to address these challenges.", "title": "" }, { "docid": "0fefdbc0dbe68391ccfc912be937f4fc", "text": "Privacy and security are essential requirements in practical biometric systems. In order to prevent the theft of biometric patterns, it is desired to modify them through revocable and non invertible transformations called Cancelable Biometrics. In this paper, we propose an efficient algorithm for generating a Cancelable Iris Biometric based on Sectored Random Projections. Our algorithm can generate a new pattern if the existing one is stolen, retain the original recognition performance and prevent extraction of useful information from the transformed patterns. Our method also addresses some of the drawbacks of existing techniques and is robust to degradations due to eyelids and eyelashes.", "title": "" }, { "docid": "f44bfa0a366fb50a571e6df9f4c3f91d", "text": "BACKGROUND\nIn silico predictive models have proved to be valuable for the optimisation of compound potency, selectivity and safety profiles in the drug discovery process.\n\n\nRESULTS\ncamb is an R package that provides an environment for the rapid generation of quantitative Structure-Property and Structure-Activity models for small molecules (including QSAR, QSPR, QSAM, PCM) and is aimed at both advanced and beginner R users. camb's capabilities include the standardisation of chemical structure representation, computation of 905 one-dimensional and 14 fingerprint type descriptors for small molecules, 8 types of amino acid descriptors, 13 whole protein sequence descriptors, filtering methods for feature selection, generation of predictive models (using an interface to the R package caret), as well as techniques to create model ensembles using techniques from the R package caretEnsemble). Results can be visualised through high-quality, customisable plots (R package ggplot2).\n\n\nCONCLUSIONS\nOverall, camb constitutes an open-source framework to perform the following steps: (1) compound standardisation, (2) molecular and protein descriptor calculation, (3) descriptor pre-processing and model training, visualisation and validation, and (4) bioactivity/property prediction for new molecules. camb aims to speed model generation, in order to provide reproducibility and tests of robustness. QSPR and proteochemometric case studies are included which demonstrate camb's application.Graphical abstractFrom compounds and data to models: a complete model building workflow in one package.", "title": "" }, { "docid": "175239ba9ba930efd0019182b2d2f2c8", "text": "Image Steganography is the computing field of hiding information from a source into a target image in a way that it becomes almost imperceptible from one’s eyes. 
Despite the high capacity of hiding information, the usual Least Significant Bit (LSB) techniques could be easily discovered. In order to hide information in more significant bits, the target image should be optimized. In this paper, it is proposed an optimization solution based on the Standard Particle Swarm Optimization 2011 (PSO), which has been compared with a previous Genetic Algorithm-based approach showing promising results. Specifically, it is shown an adaptation in the solution in order to keep the essence of PSO while remaining message hosted bits unchanged.", "title": "" }, { "docid": "8411019e166f3b193905099721c29945", "text": "In this article we recast the Dahl, LuGre, and Maxwell-slip models as extended, generalized, or semilinear Duhem models. We classified each model as either rate independent or rate dependent. Smoothness properties of the three friction models were also considered. We then studied the hysteresis induced by friction in a single-degree-of-freedom system. The resulting system was modeled as a linear system with Duhem feedback. For each friction model, we computed the corresponding hysteresis map. Next, we developed a DC servo motor testbed and performed motion experiments. We then modeled the testbed dynamics and simulated the system using all three friction models. By comparing the simulated and experimental results, it was found that the LuGre model provides the best model of the gearbox friction characteristics. A manual tuning approach was used to determine parameters that model the friction in the DC motor.", "title": "" }, { "docid": "db36273a3669e1aeda1bf2c5ab751387", "text": "Autonomous Ground Vehicles designed for dynamic environments require a reliable perception of the real world, in terms of obstacle presence, position and speed. In this paper we present a flexible technique to build, in real time, a dense voxel-based map from a 3D point cloud, able to: (1) discriminate between stationary and moving obstacles; (2) provide an approximation of the detected obstacle's absolute speed using the information of the vehicle's egomotion computed through a visual odometry approach. The point cloud is first sampled into a full 3D map based on voxels to preserve the tridimensional information; egomotion information allows computational efficiency in voxels creation; then voxels are processed using a flood fill approach to segment them into a clusters structure; finally, with the egomotion information, the obtained clusters are labeled as stationary or moving obstacles, and an estimation of their speed is provided. The algorithm runs in real time; it has been tested on one of VisLab's AGVs using a modified SGM-based stereo system as 3D data source.", "title": "" }, { "docid": "5eb9c6540de63be3e7c645286f263b4d", "text": "Inductive Power Transfer (IPT) is a practical method for recharging Electric Vehicles (EVs) because is it safe, efficient and convenient. Couplers or Power Pads are the power transmitters and receivers used with such contactless charging systems. Due to improvements in power electronic components, the performance and efficiency of an IPT system is largely determined by the coupling or flux linkage between these pads. Conventional couplers are based on circular pad designs and due to their geometry have fundamentally limited magnetic flux above the pad. This results in poor coupling at any realistic spacing between the ground pad and the vehicle pickup mounted on the chassis. 
Performance, when added to the high tolerance to misalignment required for a practical EV charging system, necessarily results in circular pads that are large, heavy and expensive. A new pad topology termed a flux pipe is proposed in this paper that overcomes difficulties associated with conventional circular pads. Due to the magnetic structure, the topology has a significantly improved flux path making more efficient and compact IPT charging systems possible.", "title": "" }, { "docid": "ed5a17f62e4024727538aba18f39fc78", "text": "The extent to which people can focus attention in the face of irrelevant distractions has been shown to critically depend on the level and type of information load involved in their current task. The ability to focus attention improves under task conditions of high perceptual load but deteriorates under conditions of high load on cognitive control processes such as working memory. I review recent research on the effects of load on visual awareness and brain activity, including changing effects over the life span, and I outline the consequences for distraction and inattention in daily life and in clinical populations.", "title": "" }, { "docid": "99cd180d0bb08e6360328b77219919c1", "text": "In this paper, we describe our approach to RecSys 2015 challenge problem. Given a dataset of item click sessions, the problem is to predict whether a session results in a purchase and which items are purchased if the answer is yes.\n We define a simpler analogous problem where given an item and its session, we try to predict the probability of purchase for the given item. For each session, the predictions result in a set of purchased items or often an empty set.\n We apply monthly time windows over the dataset. For each item in a session, we engineer features regarding the session, the item properties, and the time window. Then, a balanced random forest classifier is trained to perform predictions on the test set.\n The dataset is particularly challenging due to privacy-preserving definition of a session, the class imbalance problem, and the volume of data. We report our findings with respect to feature engineering, the choice of sampling schemes, and classifier ensembles. Experimental results together with benefits and shortcomings of the proposed approach are discussed. The solution is efficient and practical in commodity computers.", "title": "" }, { "docid": "c4094c8b273d6332f36b6f452886de6a", "text": "This paper presents original research on prevalence, user characteristics and effect profile of N,N-dimethyltryptamine (DMT), a potent hallucinogenic which acts primarily through the serotonergic system. Data were obtained from the Global Drug Survey (an anonymous online survey of people, many of whom have used drugs) conducted between November and December 2012 with 22,289 responses. Lifetime prevalence of DMT use was 8.9% (n=1980) and past year prevalence use was 5.0% (n=1123). We explored the effect profile of DMT in 472 participants who identified DMT as the last new drug they had tried for the first time and compared it with ratings provided by other respondents on psilocybin (magic mushrooms), LSD and ketamine. DMT was most often smoked and offered a strong, intense, short-lived psychedelic high with relatively few negative effects or \"come down\". It had a larger proportion of new users compared with the other substances (24%), suggesting its popularity may increase. 
Overall, DMT seems to have a very desirable effect profile indicating a high abuse liability that maybe offset by a low urge to use more.", "title": "" }, { "docid": "9dadd96558791417495a5e1afa031851", "text": "INTRODUCTION\nLittle information is available on malnutrition-related factors among school-aged children ≥5 years in Ethiopia. This study describes the prevalence of stunting and thinness and their related factors in Libo Kemkem and Fogera, Amhara Regional State and assesses differences between urban and rural areas.\n\n\nMETHODS\nIn this cross-sectional study, anthropometrics and individual and household characteristics data were collected from 886 children. Height-for-age z-score for stunting and body-mass-index-for-age z-score for thinness were computed. Dietary data were collected through a 24-hour recall. Bivariate and backward stepwise multivariable statistical methods were employed to assess malnutrition-associated factors in rural and urban communities.\n\n\nRESULTS\nThe prevalence of stunting among school-aged children was 42.7% in rural areas and 29.2% in urban areas, while the corresponding figures for thinness were 21.6% and 20.8%. Age differences were significant in both strata. In the rural setting, fever in the previous 2 weeks (OR: 1.62; 95% CI: 1.23-2.32), consumption of food from animal sources (OR: 0.51; 95% CI: 0.29-0.91) and consumption of the family's own cattle products (OR: 0.50; 95% CI: 0.27-0.93), among others factors were significantly associated with stunting, while in the urban setting, only age (OR: 4.62; 95% CI: 2.09-10.21) and years of schooling of the person in charge of food preparation were significant (OR: 0.88; 95% CI: 0.79-0.97). Thinness was statistically associated with number of children living in the house (OR: 1.28; 95% CI: 1.03-1.60) and family rice cultivation (OR: 0.64; 95% CI: 0.41-0.99) in the rural setting, and with consumption of food from animal sources (OR: 0.26; 95% CI: 0.10-0.67) and literacy of head of household (OR: 0.24; 95% CI: 0.09-0.65) in the urban setting.\n\n\nCONCLUSION\nThe prevalence of stunting was significantly higher in rural areas, whereas no significant differences were observed for thinness. Various factors were associated with one or both types of malnutrition, and varied by type of setting. To effectively tackle malnutrition, nutritional programs should be oriented to local needs.", "title": "" }, { "docid": "427028ef819df3851e37734e5d198424", "text": "The code that provides solutions to key software requirements, such as security and fault-tolerance, tends to be spread throughout (or cross-cut) the program modules that implement the “primary functionality” of a software system. Aspect-oriented programming is an emerging programming paradigm that supports implementing such cross-cutting requirements into named program units called “aspects”. To construct a system as an aspect-oriented program (AOP), one develops code for primary functionality in traditional modules and code for cross-cutting functionality in aspect modules. Compiling and running an AOP requires that the aspect code be “woven” into the code. Although aspect-oriented programming supports the separation of concerns into named program units, explicit and implicit dependencies of both aspects and traditional modules will result in systems with new testing challenges, which include new sources for program faults. 
This paper introduces a candidate fault model, along with associated testing criteria, for AOPs based on interactions that are unique to AOPs. The paper also identifies key issues relevant to the systematic testing of AOPs.", "title": "" }, { "docid": "e089c8d35bd77e1947d11207a7905617", "text": "Real-time monitoring of groups and their rich contexts will be a key building block for futuristic, group-aware mobile services. In this paper, we propose GruMon, a fast and accurate group monitoring system for dense and complex urban spaces. GruMon meets the performance criteria of precise group detection at low latencies by overcoming two critical challenges of practical urban spaces, namely (a) the high density of crowds, and (b) the imprecise location information available indoors. Using a host of novel features extracted from commodity smartphone sensors, GruMon can detect over 80% of the groups, with 97% precision, using 10 minutes latency windows, even in venues with limited or no location information. Moreover, in venues where location information is available, GruMon improves the detection latency by up to 20% using semantic information and additional sensors to complement traditional spatio-temporal clustering approaches. We evaluated GruMon on data collected from 258 shopping episodes from 154 real participants, in two large shopping complexes in Korea and Singapore. We also tested GruMon on a large-scale dataset from an international airport (containing ≈37K+ unlabelled location traces per day) and a live deployment at our university, and showed both GruMon's potential performance at scale and various scalability challenges for real-world dense environment deployments.", "title": "" }, { "docid": "ea525c15c1cbb4a4a716e897287fd770", "text": "This study explored student teachers’ cognitive presence and learning achievements by integrating the SOP Model in which self-study (S), online group discussion (O) and double-stage presentations (P) were implemented in the flipped classroom. The research was conducted at a university in Taiwan with 31 student teachers. Preand post-worksheets measuring knowledge of educational issues were administered before and after group discussion. Quantitative content analysis and behavior sequential analysis were used to evaluate cognitive presence, while a paired-samples t-test analyzed learning achievement. The results showed that the participants had the highest proportion of “Exploration,” the second largest rate of “Integration,” but rarely reached “Resolution.” The participants’ achievements were greatly enhanced using the SOP Model in terms of the scores of the preand post-worksheets. Moreover, the groups with a higher proportion of “Integration” (I) and “Resolution” (R) performed best in the post-worksheets and were also the most progressive groups. Both highand low-rated groups had significant correlations between the “I” and “R” phases, with “I”  “R” in the low-rated groups but “R”  “I” in the high-rated groups. The instructional design of the SOP Model can be a reference for future pedagogical implementations in the higher educational context.", "title": "" } ]
scidocsrr
203cbc65bfaa66d7bfeba057b434cbbf
Anomaly detection in online social networks
[ { "docid": "06860bf1ede8dfe83d3a1b01fe4df835", "text": "The Internet and computer networks are exposed to an increasing number of security threats. With new types of attacks appearing continually, developing flexible and adaptive security oriented approaches is a severe challenge. In this context, anomaly-based network intrusion detection techniques are a valuable technology to protect target systems and networks against malicious activities. However, despite the variety of such methods described in the literature in recent years, security tools incorporating anomaly detection functionalities are just starting to appear, and several important problems remain to be solved. This paper begins with a review of the most well-known anomaly-based intrusion detection techniques. Then, available platforms, systems under development and research projects in the area are presented. Finally, we outline the main challenges to be dealt with for the wide scale deployment of anomaly-based intrusion detectors, with special emphasis on assessment issues. a 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "2c6332afec6a2c728041e0325a27fcbf", "text": "Today’s social networks are plagued by numerous types of malicious profiles which can range from socialbots to sexual predators. We present a novel method for the detection of these malicious profiles by using the social network’s own topological features only. Reliance on these features alone ensures that the proposed method is generic enough to be applied on a range of social networks. The algorithm has been evaluated on several social networks and was found to be effective in detecting various types of malicious profiles. We believe this method is a valuable step in the increasing battle against social network spammers, socialbots, and sexual predictors.", "title": "" } ]
[ { "docid": "968ee8726afb8cc82d629ac8afabf3db", "text": "Online communities are increasingly important to organizations and the general public, but there is little theoretically based research on what makes some online communities more successful than others. In this article, we apply theory from the field of social psychology to understand how online communities develop member attachment, an important dimension of community success. We implemented and empirically tested two sets of community features for building member attachment by strengthening either group identity or interpersonal bonds. To increase identity-based attachment, we gave members information about group activities and intergroup competition, and tools for group-level communication. To increase bond-based attachment, we gave members information about the activities of individual members and interpersonal similarity, and tools for interpersonal communication. Results from a six-month field experiment show that participants’ visit frequency and self-reported attachment increased in both conditions. Community features intended to foster identity-based attachment had stronger effects than features intended to foster bond-based attachment. Participants in the identity condition with access to group profiles and repeated exposure to their group’s activities visited their community twice as frequently as participants in other conditions. The new features also had stronger effects on newcomers than on old-timers. This research illustrates how theory from the social science literature can be applied to gain a more systematic understanding of online communities and how theory-inspired features can improve their success. 1", "title": "" }, { "docid": "a77336cc767ca49479d2704942fe3578", "text": "UNLABELLED\nA longitudinal field experiment was carried out over a period of 2 weeks to examine the influence of product aesthetics and inherent product usability. A 2 × 2 × 3 mixed design was used in the study, with product aesthetics (high/low) and usability (high/low) being manipulated as between-subjects variables and exposure time as a repeated-measures variable (three levels). A sample of 60 mobile phone users was tested during a multiple-session usability test. A range of outcome variables was measured, including performance, perceived usability, perceived aesthetics and emotion. A major finding was that the positive effect of an aesthetically appealing product on perceived usability, reported in many previous studies, began to wane with increasing exposure time. The data provided similar evidence for emotion, which also showed changes as a function of exposure time. The study has methodological implications for the future design of usability tests, notably suggesting the need for longitudinal approaches in usability research.\n\n\nPRACTITIONER SUMMARY\nThis study indicates that product aesthetics influences perceived usability considerably in one-off usability tests but this influence wanes over time. When completing a usability test it is therefore advisable to adopt a longitudinal multiple-session approach to reduce the possibly undesirable influence of aesthetics on usability ratings.", "title": "" }, { "docid": "7077a80ec214dd78ebc7aeedd621d014", "text": "Malicious URL, a.k.a. malicious website, is a common and serious threat to cybersecurity. Malicious URLs host unsolicited content (spam, phishing, drive-by exploits, etc.) 
and lure unsuspecting users to become victims of scams (monetary loss, theft of private information, and malware installation), and cause losses of billions of dollars every year. It is imperative to detect and act on such threats in a timely manner. Traditionally, this detection is done mostly through the usage of blacklists. However, blacklists cannot be exhaustive, and lack the ability to detect newly generated malicious URLs. To improve the generality of malicious URL detectors, machine learning techniques have been explored with increasing attention in recent years. This article aims to provide a comprehensive survey and a structural understanding of Malicious URL Detection techniques using machine learning. We present the formal formulation of Malicious URL Detection as a machine learning task, and categorize and review the contributions of literature studies that addresses different dimensions of this problem (feature representation, algorithm design, etc.). Further, this article provides a timely and comprehensive survey for a range of different audiences, not only for machine learning researchers and engineers in academia, but also for professionals and practitioners in cybersecurity industry, to help them understand the state of the art and facilitate their own research and practical applications. We also discuss practical issues in system design, open research challenges, and point out some important directions for future research.", "title": "" }, { "docid": "afa3aba4f7edfecd4e632f856c2b7c01", "text": "Ruminants make efficient use of diets that are poor in true protein content because microbes in the rumen are able to synthesize a large proportion of the animal’s required protein. The amino acid (AA) pattern of this protein is of better quality than nearly all of the dietary ingredients commonly fed to domestic ruminants (Broderick, 1994; Schwab, 1996). In addition, ruminal microbial utilization of ammonia allows the feeding of nonprotein N (NPN) compounds, such as urea, as well as the capture of recycled urea N that would otherwise be excreted in the urine. Many studies have shown that lactating dairy cows use feed crude protein (CP; N x 6.25) more efficiently than other ruminant livestock. However, dairy cows still excrete 2-3 times more N in manure than they secrete in milk, even under conditions of optimal nutrition and management. Inefficient N utilization necessitates feeding supplemental protein, increasing milk production costs and contributing to environmental N pollution. One of our major objectives in protein nutrition of lactating ruminants must be to maximize ruminal formation of this high quality microbial protein and minimize feeding of costly protein supplements under all feeding regimes.", "title": "" }, { "docid": "1a65b9d35bce45abeefe66882dcf4448", "text": "Data is nowadays an invaluable resource, indeed it guides all business decisions in most of the computer-aided human activities. Threats to data integrity are thus of paramount relevance, as tampering with data may maliciously affect crucial business decisions. This issue is especially true in cloud computing environments, where data owners cannot control fundamental data aspects, like the physical storage of data and the control of its accesses. Blockchain has recently emerged as a fascinating technology which, among others, provides compelling properties about data integrity. 
Using the blockchain to face data integrity threats seems to be a natural choice, but its current limitations of low throughput, high latency, and weak stability hinder the practical feasibility of any blockchain-based solutions. In this paper, by focusing on a case study from the European SUNFISH project, which concerns the design of a secure by-design cloud federation platform for the public sector, we precisely delineate the actual data integrity needs of cloud computing environments and the research questions to be tackled to adopt blockchain-based databases. First, we detail the open research questions and the difficulties inherent in addressing them. Then, we outline a preliminary design of an effective blockchain-based database for cloud computing environments.", "title": "" }, { "docid": "a24aef41aef5070575b4814e191f92cb", "text": "1 Parallel Evolution in Science As we survey the evolution of modern science, we find the remarkable phenomenon that similar general conceptions and viewpoints have evolved independently in the various branches of science, and to begin with these may be indicated as follows: in the past centuries, science tried to explain phenomena by reducing them to an interplay of elementary units which could be investigated independently of each other. In contemporary modern science, we find in all fields conceptions of what is rather vaguely termed ‘wholeness.’ It was the aim of classical physics eventually to resolve all natural phenomena into a play of elementary units, the characteristics of which remain unaltered whether they are investigated in isolation or in a complex. The expression of this conception is the ideal of the Laplacean spirit, which resolves the world into an aimless play of atoms, governed by the laws of nature. This conception was not changed but rather strengthened when deterministic laws were replaced by statistical laws in Boltzmann’s derivation of the second principle of thermodynamics. Physical laws appeared to be essentially ‘laws of disorder,’ a statistical result of unordered and fortuitous events. In contrast, the basic problems in modern physics are problems of organisation. Problems of this kind present themselves in atomic physics, in structural chemistry, in crystallography, and so forth. In microphysics, it becomes impossible to resolve phenomena into local events, as is shown by the Heisenberg relation and in quantum mechanics. Corresponding to the procedure in physics, the attempt has been made in biology to resolve the phenomena of life into parts and processes which could be investigated in isolation. This procedure is essentially the same in the various branches of biology. The organism is considered to be an aggregate of cells as elementary life-units, its activities are resolved into functions of isolated organs and finally physico-chemical processes, its behaviour into reflexes, the material substratum of heredity into genes, acting independently of each other, phylogenetic evolution into single fortuitous mutations, and so on. As opposed to the analytical, summative and machine [135]theoretical viewpoints, organismic conceptions1 have evolved in all branches of modern biology which assert the necessity of investigating not only parts but also relations of organisation resulting from a dynamic interaction and manifesting themselves by the difference in behaviour of parts in isolation and in the whole organism. 
The development in medicine follows a similar pattern.2 Virchow’s programme of ‘cellular pathology,’ claiming to resolve disease into functional disturbances of cells, is to be supplemented by the consideration of the organism-as-a-whole, as it appears clearly in such fields as theory of human constitutions, endocrinology, physical medicine and psychotherapy. Again we find the same trend in psychology. Classical association psychology tried to resolve mental phenomena into elementary units, sensations and the like, psychological atoms, as it were. Gestalt psychology has demonstrated the existence and primacy of psychological entities, which are not a simple summation of elementary units, and are governed by dynamical laws.", "title": "" }, { "docid": "3ca7b7b8e07eb5943d6ce2acf9a6fa82", "text": "Excessive heat generation and occurrence of partial discharge have been observed in end-turn stress grading (SG) system in form-wound machines under PWM voltage. In this paper, multi-winding stress grading (SG) system is proposed as a method to change resistance of SG per length. Although the maximum field at the edge of stator and CAT are in a trade-off relationship, analytical results suggest that we can suppress field and excessive heat generation at both stator and CAT edges by multi-winding of SG and setting the length of CAT appropriately. This is also experimentally confirmed by measuring potential distribution of model bar-coil and observing partial discharge and temperature rise.", "title": "" }, { "docid": "5fc6b0e151762560c8f09d0fe6983ca2", "text": "The increasing popularity of wearable devices that continuously capture video, and the prevalence of third-party applications that utilize these feeds have resulted in a new threat to privacy. In many situations, sensitive objects/regions are maliciously (or accidentally) captured in a video frame by third-party applications. However, current solutions do not allow users to specify and enforce fine grained access control over video feeds.\n In this paper, we describe MarkIt, a computer vision based privacy marker framework, that allows users to specify and enforce fine grained access control over video feeds. We present two example privacy marker systems -- PrivateEye and WaveOff. We conclude with a discussion of the computer vision, privacy and systems challenges in building a comprehensive system for fine grained access control over video feeds.", "title": "" }, { "docid": "5a4a75fbaef6e7760320502a583954bf", "text": "Policy decisions at the organizational, corporate, and governmental levels should be more heavily influenced by issues related to well-being-people's evaluations and feelings about their lives. Domestic policy currently focuses heavily on economic outcomes, although economic indicators omit, and even mislead about, much of what society values. We show that economic indicators have many shortcomings, and that measures of well-being point to important conclusions that are not apparent from economic indicators alone. For example, although economic output has risen steeply over the past decades, there has been no rise in life satisfaction during this period, and there has been a substantial increase in depression and distrust. We argue that economic indicators were extremely important in the early stages of economic development, when the fulfillment of basic needs was the main issue. 
As societies grow wealthy, however, differences in well-being are less frequently due to income, and are more frequently due to factors such as social relationships and enjoyment at work. Important noneconomic predictors of the average levels of well-being of societies include social capital, democratic governance, and human rights. In the workplace, noneconomic factors influence work satisfaction and profitability. It is therefore important that organizations, as well as nations, monitor the well-being of workers, and take steps to improve it. Assessing the well-being of individuals with mental disorders casts light on policy problems that do not emerge from economic indicators. Mental disorders cause widespread suffering, and their impact is growing, especially in relation to the influence of medical disorders, which is declining. Although many studies now show that the suffering due to mental disorders can be alleviated by treatment, a large proportion of persons with mental disorders go untreated. Thus, a policy imperative is to offer treatment to more people with mental disorders, and more assistance to their caregivers. Supportive, positive social relationships are necessary for well-being. There are data suggesting that well-being leads to good social relationships and does not merely follow from them. In addition, experimental evidence indicates that people suffer when they are ostracized from groups or have poor relationships in groups. The fact that strong social relationships are critical to well-being has many policy implications. For instance, corporations should carefully consider relocating employees because doing so can sever friendships and therefore be detrimental to well-being. Desirable outcomes, even economic ones, are often caused by well-being rather than the other way around. People high in well-being later earn higher incomes and perform better at work than people who report low well-being. Happy workers are better organizational citizens, meaning that they help other people at work in various ways. Furthermore, people high in well-being seem to have better social relationships than people low in well-being. For example, they are more likely to get married, stay married, and have rewarding marriages. Finally, well-being is related to health and longevity, although the pathways linking these variables are far from fully understood. Thus, well-being not only is valuable because it feels good, but also is valuable because it has beneficial consequences. This fact makes national and corporate monitoring of well-being imperative. In order to facilitate the use of well-being outcomes in shaping policy, we propose creating a national well-being index that systematically assesses key well-being variables for representative samples of the population. Variables measured should include positive and negative emotions, engagement, purpose and meaning, optimism and trust, and the broad construct of life satisfaction. A major problem with using current findings on well-being to guide policy is that they derive from diverse and incommensurable measures of different concepts, in a haphazard mix of respondents. Thus, current findings provide an interesting sample of policy-related findings, but are not strong enough to serve as the basis of policy. 
Periodic, systematic assessment of well-being will offer policymakers a much stronger set of findings to use in making policy decisions.", "title": "" }, { "docid": "50389f4ec27cf68af999ee33c3210edf", "text": "Rising water temperature associated with climate change is increasingly recognized as a potential stressor for aquatic organisms, particularly for tropical ectotherms that are predicted to have narrow thermal windows relative to temperate ectotherms. We used intermittent flow resting and swimming respirometry to test for effects of temperature increase on aerobic capacity and swim performance in the widespread African cichlid Pseudocrenilabrus multicolor victoriae, acclimated for a week to a range of temperatures (2°C increments) between 24 and 34°C. Standard metabolic rate (SMR) increased between 24 and 32°C, but fell sharply at 34°C, suggesting either an acclimatory reorganization of metabolism or metabolic rate depression. Maximum metabolic rate (MMR) was elevated at 28 and 30°C relative to 24°C. Aerobic scope (AS) increased between 24 and 28°C, then declined to a level comparable to 24°C, but increased dramatically 34°C, the latter driven by the drop in SMR in the warmest treatment. Critical swim speed (Ucrit) was highest at intermediate temperature treatments, and was positively related to AS between 24 and 32°C; however, at 34°C, the increase in AS did not correspond to an increase in Ucrit, suggesting a performance cost at the highest temperature.", "title": "" }, { "docid": "0d41fcc5ea57e42c87b4a3152d50f9d2", "text": "This paper is concerned with a continuous-time mean-variance portfolio selection model that is formulated as a bicriteria optimization problem. The objective is to maximize the expected terminal return and minimize the variance of the terminal wealth. By putting weights on the two criteria one obtains a single objective stochastic control problem which is however not in the standard form due to the variance term involved. It is shown that this nonstandard problem can be “embedded” into a class of auxiliary stochastic linear-quadratic (LQ) problems. The stochastic LQ control model proves to be an appropriate and effective framework to study the mean-variance problem in light of the recent development on general stochastic LQ problems with indefinite control weighting matrices. This gives rise to the efficient frontier in a closed form for the original portfolio selection problem.", "title": "" }, { "docid": "68dc61e0c6b33729f08cdd73e8e86096", "text": "Many important data analysis applications present with severely imbalanced datasets with respect to the target variable. A typical example is medical image analysis, where positive samples are scarce, while performance is commonly estimated against the correct detection of these positive examples. We approach this challenge by formulating the problem as anomaly detection with generative models. We train a generative model without supervision on the ‘negative’ (common) datapoints and use this model to estimate the likelihood of unseen data. A successful model allows us to detect the ‘positive’ case as low likelihood datapoints. In this position paper, we present the use of state-of-the-art deep generative models (GAN and VAE) for the estimation of a likelihood of the data. Our results show that on the one hand both GANs and VAEs are able to separate the ‘positive’ and ‘negative’ samples in the MNIST case. 
On the other hand, for the NLST case, neither GANs nor VAEs were able to capture the complexity of the data and discriminate anomalies at the level that this task requires. These results show that even though there are a number of successes presented in the literature for using generative models in similar applications, there remain further challenges for broad successful implementation.", "title": "" }, { "docid": "c08e33f44b8e27529385b1557906dc81", "text": "A key challenge in wireless cognitive radio networks is to maximize the total throughput also known as the sum rates of all the users while avoiding the interference of unlicensed band secondary users from overwhelming the licensed band primary users. We study the weighted sum rate maximization problem with both power budget and interference temperature constraints in a cognitive radio network. This problem is nonconvex and generally hard to solve. We propose a reformulation-relaxation technique that leverages nonnegative matrix theory to first obtain a relaxed problem with nonnegative matrix spectral radius constraints. A useful upper bound on the sum rates is then obtained by solving a convex optimization problem over a closed bounded convex set. It also enables the sum-rate optimality to be quantified analytically through the spectrum of specially-crafted nonnegative matrices. Furthermore, we obtain polynomial-time verifiable sufficient conditions that can identify polynomial-time solvable problem instances, which can be solved by a fixed-point algorithm. As a by-product, an interesting optimality equivalence between the nonconvex sum rate problem and the convex max-min rate problem is established. In the general case, we propose a global optimization algorithm by utilizing our convex relaxation and branch-and-bound to compute an ε-optimal solution. Our technique exploits the nonnegativity of the physical quantities, e.g., channel parameters, powers and rates, that enables key tools in nonnegative matrix theory such as the (linear and nonlinear) Perron-Frobenius theorem, quasi-invertibility, Friedland-Karlin inequalities to be employed naturally. Numerical results are presented to show that our proposed algorithms are theoretically sound and have relatively fast convergence time even for large-scale problems", "title": "" }, { "docid": "a9dbb873487081afcc2a24dd7cb74bfe", "text": "We presented the first single block collision attack on MD5 with complexity of 2 MD5 compressions and posted the challenge for another completely new one in 2010. Last year, Stevens presented a single block collision attack to our challenge, with complexity of 2 MD5 compressions. We really appreciate Stevens’s hard work. However, it is a pity that he had not found even a better solution than our original one, let alone a completely new one and the very optimal solution that we preserved and have been hoping that someone can find it, whose collision complexity is about 2 MD5 compressions. In this paper, we propose a method how to choose the optimal input difference for generating MD5 collision pairs. First, we divide the sufficient conditions into two classes: strong conditions and weak conditions, by the degree of difficulty for condition satisfaction. Second, we prove that there exist strong conditions in only 24 steps (one and a half rounds) under specific conditions, by utilizing the weaknesses of compression functions of MD5, which are difference inheriting and message expanding. 
Third, there should be no difference scaling after state word q25 so that it can result in the least number of strong conditions in each differential path, in such a way we deduce the distribution of strong conditions for each input difference pattern. Finally, we choose the input difference with the least number of strong conditions and the most number of free message words. We implement the most efficient 2-block MD5 collision attack, which needs only about 2 MD5 compressions to find a collision pair, and show a single-block collision attack with complexity 2.", "title": "" }, { "docid": "efc341c0a3deb6604708b6db361bfba5", "text": "In recent years, data analysis has become important with increasing data volume. Clustering, which groups objects according to their similarity, has an important role in data analysis. DBSCAN is one of the most effective and popular density-based clustering algorithm and has been successfully implemented in many areas. However, it is a challenging task to determine the input parameter values of DBSCAN algorithm which are neighborhood radius Eps and minimum number of points MinPts. The values of these parameters significantly affect clustering performance of the algorithm. In this study, we propose AE-DBSCAN algorithm which includes a new method to determine the value of neighborhood radius Eps automatically. The experimental evaluations showed that the proposed method outperformed the classical method.", "title": "" }, { "docid": "821cf807af74612ae3377a7651752ff9", "text": "This paper proposes the contactless measurement scheme using LIDAR (Light Detection And Ranging) and the modeling human body movment for the personal mobility interface including twisting motion. We have already proposed the saddle type human body motion interface. This interface uses not only conventional translational human motions but also twisting motion, namely it makes full use of the human motion characteristics. The mechanism of the interface consists of the saddle and universal joint connecting the saddle and personal mobility, and tracing loins motion. Due to these features, the proposed interface shows a potential to realize intuitive operation in the basic experiment. However, the problems have also remained: The height of the saddle should be adjuested for the users' height before riding on the PMV (Personal Mobility Vehicle). And there are plays between the saddle and buttocks of the user, and backlash of the saddle mechanism. This problem prevents a small human motion from measurment. This paper, therefore, proposes the contactless measurement using LIDAR and discusses the fitting methods from measured data points to human body movement.", "title": "" }, { "docid": "0f5511aaed3d6627671a5e9f68df422a", "text": "As people document more of their lives online, some recent systems are encouraging people to later revisit those recordings, a practice we're calling technology-mediated reflection (TMR). Since we know that unmediated reflection benefits psychological well-being, we explored whether and how TMR affects well-being. We built Echo, a smartphone application for recording everyday experiences and reflecting on them later. We conducted three system deployments with 44 users who generated over 12,000 recordings and reflections. We found that TMR improves well-being as assessed by four psychological metrics. By analyzing the content of these entries we discovered two mechanisms that explain this improvement. 
We also report benefits of very long-term TMR.", "title": "" }, { "docid": "ae0d8d1dec27539502cd7e3030a3fe42", "text": "Thee KL divergence is the most commonly used measure for comparing query and document language models in the language modeling framework to ad hoc retrieval. Since KL is rank equivalent to a specific weighted geometric mean, we examine alternative weighted means for language-model comparison, as well as alternative divergence measures. The study includes analysis of the inverse document frequency (IDF) effect of the language-model comparison methods. Empirical evaluation, performed with different types of queries (short and verbose) and query-model induction approaches, shows that there are methods that often outperform the KL divergence in some settings.", "title": "" }, { "docid": "fa818e3e2e88ef83e592cab1d5a1a1eb", "text": "This paper presents a literature review on the use of depth for hand tracking and gesture recognition. The survey examines 37 papers describing depth-based gesture recognition systems in terms of (1) the hand localization and gesture classification methods developed and used, (2) the applications where gesture recognition has been tested, and (3) the effects of the low-cost Kinect and OpenNI software libraries on gesture recognition research. The survey is organized around a novel model of the hand gesture recognition process. In the reviewed literature, 13 methods were found for hand localization and 11 were found for gesture classification. 24 of the papers included real-world applications to test a gesture recognition system, but only 8 application categories were found (and three applications accounted for 18 of the papers). The papers that use the Kinect and the OpenNI libraries for hand tracking tend to focus more on applications than on localization and classification methods, and show that the OpenNI hand tracking method is good enough for the applications tested thus far. However, the limitations of the Kinect and other depth sensors for gesture recognition have yet to be tested in challenging applications and environments.", "title": "" }, { "docid": "63cef4e93184c865e0d42970ca9de9db", "text": "Numerous applications such as stock market or medical information systems require that both historical and current data be logically integrated into a temporal database. The underlying access method must support different forms of “time-travel” queries, the migration of old record versions onto inexpensive archive media, and high insertion and update rates. This paper presents an access method for transaction-time temporal data, called the log-structured history data access method (LHAM) that meets these demands. The basic principle of LHAM is to partition the data into successive components based on the timestamps of the record versions. Components are assigned to different levels of a storage hierarchy, and incoming data is continuously migrated through the hierarchy. The paper discusses the LHAM concepts, including concurrency control and recovery, our full-fledged LHAM implementation, and experimental performance results based on this implementation. A detailed comparison with the TSB-tree, both analytically and based on experiments with real implementations, shows that LHAM is highly superior in terms of insert performance, while query performance is in almost all cases at least as good as for the TSB-tree; in many cases it is much better.", "title": "" } ]
scidocsrr
da3e252c020f74854e53c73a3100bb1d
Experiments with Computational Creativity
[ { "docid": "462e3be75902bf8a39104c75ec2bea53", "text": "A new model for associative memory, based on a correlation matrix, is suggested. In this model information is accumulated on memory elements as products of component data. Denoting a key vector by q(p), and the data associated with it by another vector x(p), the pairs (q(p), x(p)) are memorized in the form of a matrix {see the Equation in PDF File} where c is a constant. A randomly selected subset of the elements of Mxq can also be used for memorizing. The recalling of a particular datum x(r) is made by a transformation x(r)=Mxqq(r). This model is failure tolerant and facilitates associative search of information; these are properties that are usually assigned to holographic memories. Two classes of memories are discussed: a complete correlation matrix memory (CCMM), and randomly organized incomplete correlation matrix memories (ICMM). The data recalled from the latter are stochastic variables but the fidelity of recall is shown to have a deterministic limit if the number of memory elements grows without limits. A special case of correlation matrix memories is the auto-associative memory in which any part of the memorized information can be used as a key. The memories are selective with respect to accumulated data. The ICMM exhibits adaptive improvement under certain circumstances. It is also suggested that correlation matrix memories could be applied for the classification of data.", "title": "" }, { "docid": "751231430c54bf33649e4c4e14d45851", "text": "The current state of A. D. Baddeley and G. J. Hitch's (1974) multicomponent working memory model is reviewed. The phonological and visuospatial subsystems have been extensively investigated, leading both to challenges over interpretation of individual phenomena and to more detailed attempts to model the processes underlying the subsystems. Analysis of the controlling central executive has proved more challenging, leading to a proposed clarification in which the executive is assumed to be a limited capacity attentional system, aided by a newly postulated fourth system, the episodic buffer. Current interest focuses most strongly on the link between working memory and long-term memory and on the processes allowing the integration of information from the component subsystems. The model has proved valuable in accounting for data from a wide range of participant groups under a rich array of task conditions. Working memory does still appear to be working.", "title": "" } ]
[ { "docid": "72871db63ff645a1691044bac42c56d3", "text": "Malware has become one of the most serious threats to computer information system and the current malware detection technology still has very significant limitations. In this paper, we proposed a malware detection approach by mining format information of PE (portable executable) files. Based on in-depth analysis of the static format information of the PE files, we extracted 197 features from format information of PE files and applied feature selection methods to reduce the dimensionality of the features and achieve acceptable high performance. When the selected features were trained using classification algorithms, the results of our experiments indicate that the accuracy of the top classification algorithm is 99.1% and the value of the AUC is 0.998. We designed three experiments to evaluate the performance of our detection scheme and the ability of detecting unknown and new malware. Although the experimental results of identifying new malware are not perfect, our method is still able to identify 97.6% of new malware with 1.3% false positive rates.", "title": "" }, { "docid": "a95328b8210e8c6fcd628cb48618ebee", "text": "Separation of video clips into foreground and background components is a useful and important technique, making recognition, classification, and scene analysis more efficient. In this paper, we propose a motion-assisted matrix restoration (MAMR) model for foreground-background separation in video clips. In the proposed MAMR model, the backgrounds across frames are modeled by a low-rank matrix, while the foreground objects are modeled by a sparse matrix. To facilitate efficient foreground-background separation, a dense motion field is estimated for each frame, and mapped into a weighting matrix which indicates the likelihood that each pixel belongs to the background. Anchor frames are selected in the dense motion estimation to overcome the difficulty of detecting slowly moving objects and camouflages. In addition, we extend our model to a robust MAMR model against noise for practical applications. Evaluations on challenging datasets demonstrate that our method outperforms many other state-of-the-art methods, and is versatile for a wide range of surveillance videos.", "title": "" }, { "docid": "e50a77b38d81d094c678dadf5c408c20", "text": "The calibration method of the soft iron and hard iron distortion based on attitude and heading reference system (AHRS) can boil down to the estimation of 12 parameters of magnetic deviation, normally using 12-state Kalman filter (KF) algorithm. The performance of compensation is limited by the accuracy of local inclination angle of magnetic field and initial heading. A 14-state extended Kalman filter (EKF) algorithm is developed to calibrate magnetic deviation, local magnetic inclination angle error and initial heading error all together. The calibration procedure is to change the attitude of AHRS and rotate it two cycles. As the strapdown matrix can hold high precision after initial alignment of AHRS in short time for the gyropsilas short-term precision, the magnetic field vector can be projected onto the body frame of AHRS. The experiment results demonstrate that 14-state EKF outperforms 12-state KF, with measurement errors exist in the initial heading and local inclination angle. 
The heading accuracy (variance) after compensation is 0.4 degree for tilt angle ranging between 0 and 60 degree.", "title": "" }, { "docid": "4031f4141333b9c0b95c175e22885ccc", "text": "Android OS experiences a blazing popularity since the last few years. This predominant platform has established itself not only in the mobile world but also in the Internet of Things (IoT) devices. This popularity, however, comes at the expense of security, as it has become a tempting target of malicious apps. Hence, there is an increasing need for sophisticated, automatic, and portable malware detection solutions. In this paper, we propose MalDozer, an automatic Android malware detection and family attribution framework that relies on sequences classification using deep learning techniques. Starting from the raw sequence of the app’s API method calls, MalDozer automatically extracts and learns the malicious and the benign patterns from the actual samples to detect Android malware. MalDozer can serve as a ubiquitous malware detection system that is not only deployed on servers, but also on mobile and even IoT devices. We evaluate MalDozer on multiple Android malware datasets ranging from 1K to 33K malware apps, and 38K benign apps. The results show that MalDozer can correctly detect malware and attribute them to their actual families with an F1-Score of 96%− 99% and a false positive rate of 0.06%− 2%, under all tested datasets and settings.", "title": "" }, { "docid": "c2195ae053d1bbf712c96a442a911e31", "text": "This paper introduces a new method to solve the cross-domain recognition problem. Different from the traditional domain adaption methods which rely on a global domain shift for all classes between the source and target domains, the proposed method is more flexible to capture individual class variations across domains. By adopting a natural and widely used assumption that the data samples from the same class should lay on an intrinsic low-dimensional subspace, even if they come from different domains, the proposed method circumvents the limitation of the global domain shift, and solves the cross-domain recognition by finding the joint subspaces of the source and target domains. Specifically, given labeled samples in the source domain, we construct a subspace for each of the classes. Then we construct subspaces in the target domain, called anchor subspaces, by collecting unlabeled samples that are close to each other and are highly likely to belong to the same class. The corresponding class label is then assigned by minimizing a cost function which reflects the overlap and topological structure consistency between subspaces across the source and target domains, and within the anchor subspaces, respectively. We further combine the anchor subspaces to the corresponding source subspaces to construct the joint subspaces. Subsequently, one-versus-rest support vector machine classifiers are trained using the data samples belonging to the same joint subspaces and applied to unlabeled data in the target domain. We evaluate the proposed method on two widely used datasets: 1) object recognition dataset for computer vision tasks and 2) sentiment classification dataset for natural language processing tasks. 
Comparison results demonstrate that the proposed method outperforms the comparison methods on both datasets.", "title": "" }, { "docid": "def650b2d565f88a6404997e9e93d34f", "text": "Quality uncertainty and high search costs for identifying relevant information from an ocean of information may prevent customers from making purchases. Recognizing potential negative impacts of this search cost for quality information and relevant information, firms began to invest in creating a virtual community that enables consumers to share their opinions and experiences to reduce quality uncertainty, and in developing recommendation systems that help customers identify goods in which they might have an interest. However, not much is known regarding the effectiveness of these efforts. In this paper, we empirically investigate the impacts of recommendations and consumer feedbacks on sales based on data gathered from Amazon.com. Our results indicate that more recommendations indeed improve sales at Amazon.com; however, consumer ratings are not found to be related to sales. On the other hand, number of consumer reviews is positively associated with sales. We also find that recommendations work better for less-popular books than for more-popular books. This is consistent with the search cost argument: a consumer’s search cost for less-popular books may be higher, and thus they may rely more on recommendations to locate a product of interest.", "title": "" }, { "docid": "f997ce9614ef19f3f44c1a6476d777fb", "text": "We report on two brain-damaged patients who show contrasting patterns of deficits in memory and language functioning. One patient (AW) suffers from a lexical retrieval deficit and failed to produce many irregularly inflected words such as spun, forgotten, andmice, but demonstrated intact production of regularly inflected words such aswalked andrats. She also had preserved declarative memory for facts and events. The other patient (VP) presented with a severe declarative memory deficit but showed no signs of impairment in producing either regular or irregular inflections. These patterns of deficits reveal that the retrieval of irregular inflections proceeds relatively autonomously with respect to declarative memory. We interpret these deficits with reference to three current theories of lexical structure: (a) Pinker's words and rules account, which assumes distinct mechanisms for processing regular and irregular inflections and proposes that lexical and semantic processing are subserved by distinct but interacting cognitive systems; (b) Ullman's declarative/procedural model, which assumes that mechanisms for the retrieval of irregular inflections are part of declarative memory; (c) Joanisse and Seidenberg's connectionist model, in which semantic information is critical for the retrieval of irregular inflections.", "title": "" }, { "docid": "fc66ced7b3faad64621722ab30cd5cc9", "text": "In this paper, we present a novel framework for urban automated driving based 1 on multi-modal sensors; LiDAR and Camera. Environment perception through 2 sensors fusion is key to successful deployment of automated driving systems, 3 especially in complex urban areas. Our hypothesis is that a well designed deep 4 neural network is able to end-to-end learn a driving policy that fuses LiDAR and 5 Camera sensory input, achieving the best out of both. 
In order to improve the 6 generalization and robustness of the learned policy, semantic segmentation on 7 camera is applied, in addition to applying our new LiDAR post processing method; 8 Polar Grid Mapping (PGM). The system is evaluated on the recently released urban 9 car simulator, CARLA. The evaluation is measured according to the generalization 10 performance from one environment to another. The experimental results show that 11 the best performance is achieved by fusing the PGM and semantic segmentation. 12", "title": "" }, { "docid": "89349e8f3e7d8df8bb8ab6f55404a91f", "text": "Due to the high intake of sugars, especially sucrose, global trends in food processing have encouraged producers to use sweeteners, particularly synthetic ones, to a wide extent. For several years, increasing attention has been paid in the literature to the stevia (Stevia rebauidana), containing glycosidic diterpenes, for which sweetening properties have been identified. Chemical composition, nutritional value and application of stevia leaves are briefl y summarized and presented.", "title": "" }, { "docid": "17dce24f26d7cc196e56a889255f92a8", "text": "As known, to finish this book, you may not need to get it at once in a day. Doing the activities along the day may make you feel so bored. If you try to force reading, you may prefer to do other entertaining activities. But, one of concepts we want you to have this book is that it will not make you feel bored. Feeling bored when reading will be only unless you don't like the book. computational principles of mobile robotics really offers what everybody wants.", "title": "" }, { "docid": "2904aaf22b752dfd9eb0589debe355c9", "text": "Because face sketches represent the original faces in a very concise yet recognizable form, they play an important role in criminal investigations, human visual perception, and face biometrics. In this paper, we compared the performances of humans and a principle component analysis (PCA)-based algorithm in recognizing face sketches. A total of 250 sketches of 50 subjects were involved. All of the sketches were drawn manually by five artists (each artist drew 50 sketches, one for each subject). The experiments were carried out by matching sketches in a probe set to photographs in a gallery set. This study resulted in the following findings: 1) A large interartist variation in terms of sketch recognition rate was observed; 2) fusion of the sketches drawn by different artists significantly improved the recognition accuracy of both humans and the algorithm; 3) human performance seems mildly correlated to that of PCA algorithm; 4) humans performed better in recognizing the caricature-like sketches that show various degrees of geometrical distortion or deviation, given the particular data set used; 5) score level fusion with the sum rule worked well in combining sketches, at least for a small number of artists; and 6) the algorithm was superior with the sketches of less distinctive features, while humans seemed more efficient in handling tonality (or pigmentation) cues of the sketches that were not processed with advanced transformation functions.", "title": "" }, { "docid": "b2fc46ec7e2e3ff39bf1224fb6624ef2", "text": "The power of k-means algorithm is due to its computational efficiency and the nature of ease at which it can be used. Distance metrics are used to find similar data objects that lead to develop robust algorithms for the data mining functionalities such as classification and clustering. 
In this paper, the results obtained by implementing the k-means algorithm using three different metrics Euclidean, Manhattan and Minkowski distance metrics along with the comparative study of results of basic k-means algorithm which is implemented through Euclidian distance metric for two-dimensional data, are discussed. Results are displayed with the help of histograms.", "title": "" }, { "docid": "aae8f73850ae56377d6d3629d6ef0e5b", "text": "The mirror mechanism is a basic brain mechanism that transforms sensory representations of others' behaviour into one's own motor or visceromotor representations concerning that behaviour. According to its location in the brain, it may fulfil a range of cognitive functions, including action and emotion understanding. In each case, it may enable a route to knowledge of others' behaviour, which mainly depends on one's own motor or visceromotor representations.", "title": "" }, { "docid": "da237e14a3a9f6552fc520812073ee6c", "text": "Shock filters are based in the idea to apply locally either a dilation or an erosion process, depending on whether the pixel belongs to the influence zone of a maximum or a minimum. They create a sharp shock between two influence zones and produce piecewise constant segmentations. In this paper we design specific shock filters for the enhancement of coherent flow-like structures. They are based on the idea to combine shock filtering with the robust orientation estimation by means of the structure tensor. Experiments with greyscale and colour images show that these novel filters may outperform previous shock filters as well as coherence-enhancing diffusion filters.", "title": "" }, { "docid": "10c7b7a19197c8562ebee4ae66c1f5e8", "text": "Generative Adversarial Networks (GANs) have recently achieved impressive results for many real-world applications, and many GAN variants have emerged with improvements in sample quality and training stability. However, visualization and understanding of GANs is largely missing. How does a GAN represent our visual world internally? What causes the artifacts in GAN results? How do architectural choices affect GAN learning? Answering such questions could enable us to develop new insights and better models. In this work, we present an analytic framework to visualize and understand GANs at the unit-, object-, and scene-level. We first identify a group of interpretable units that are closely related to object concepts with a segmentation-based network dissection method. Then, we quantify the causal effect of interpretable units by measuring the ability of interventions to control objects in the output. Finally, we examine the contextual relationship between these units and their surrounding by inserting the discovered object concepts into new images. We show several practical applications enabled by our framework, from comparing internal representations across different layers, models, and datasets, to improving GANs by locating and removing artifact-causing units, to interactively manipulating objects in the scene. We provide open source interpretation tools to help peer researchers and practitioners better understand their GAN models∗.", "title": "" }, { "docid": "fdc6de60d4564efc3b94b44873ecd179", "text": "Fault detection and diagnosis is an important problem in process engineering. It is the central component of abnormal event management (AEM) which has attracted a lot of attention recently. 
AEM deals with the timely detection, diagnosis and correction of abnormal conditions of faults in a process. Early detection and diagnosis of process faults while the plant is still operating in a controllable region can help avoid abnormal event progression and reduce productivity loss. Since the petrochemical industries lose an estimated 20 billion dollars every year, they have rated AEM as their number one problem that needs to be solved. Hence, there is considerable interest in this field now from industrial practitioners as well as academic researchers, as opposed to a decade or so ago. There is an abundance of literature on process fault diagnosis ranging from analytical methods to artificial intelligence and statistical approaches. From a modelling perspective, there are methods that require accurate process models, semi-quantitative models, or qualitative models. At the other end of the spectrum, there are methods that do not assume any form of model information and rely only on historic process data. In addition, given the process knowledge, there are different search techniques that can be applied to perform diagnosis. Such a collection of bewildering array of methodologies and alternatives often poses a difficult challenge to any aspirant who is not a specialist in these techniques. Some of these ideas seem so far apart from one another that a non-expert researcher or practitioner is often left wondering about the suitability of a method for his or her diagnostic situation. While there have been some excellent reviews in this field in the past, they often focused on a particular branch, such as analytical models, of this broad discipline. The basic aim of this three part series of papers is to provide a systematic and comparative study of various diagnostic methods from different perspectives. We broadly classify fault diagnosis methods into three general categories and review them in three parts. They are quantitative model-based methods, qualitative model-based methods, and process history based methods. In the first part of the series, the problem of fault diagnosis is introduced and approaches based on quantitative models are reviewed. In the remaining two parts, methods based on qualitative models and process history data are reviewed. Furthermore, these disparate methods will be compared and evaluated based on a common set of criteria introduced in the first part of the series. We conclude the series with a discussion on the relationship of fault diagnosis to other process operations and on emerging trends such as hybrid blackboard-based frameworks for fault diagnosis. # 2002 Published by Elsevier Science Ltd.", "title": "" }, { "docid": "60a9030ddf88347f9a75ce24f52f9768", "text": "The phenotype of patients with a chromosome 1q43q44 microdeletion (OMIM; 612337) is characterized by intellectual disability with no or very limited speech, microcephaly, growth retardation, a recognizable facial phenotype, seizures, and agenesis of the corpus callosum. Comparison of patients with different microdeletions has previously identified ZBTB18 (ZNF238) as a candidate gene for the 1q43q44 microdeletion syndrome. Mutations in this gene have not yet been described. We performed exome sequencing in a patient with features of the 1q43q44 microdeletion syndrome that included short stature, microcephaly, global developmental delay, pronounced speech delay, and dysmorphic facial features. A single de novo non-sense mutation was detected, which was located in ZBTB18. 
This finding is consistent with an important role for haploinsufficiency of ZBTB18 in the phenotype of chromosome 1q43q44 microdeletions. The corpus callosum is abnormal in mice with a brain-specific knock-out of ZBTB18. Similarly, most (but not all) patients with the 1q43q44 microdeletion syndrome have agenesis or hypoplasia of the corpus callosum. In contrast, the patient with a ZBTB18 point mutation reported here had a structurally normal corpus callosum on brain MRI. Incomplete penetrance or haploinsufficiency of other genes from the critical region may explain the absence of corpus callosum agenesis in this patient with a ZBTB18 point mutation. The findings in this patient with a mutation in ZBTB18 will contribute to our understanding of the 1q43q44 microdeletion syndrome.", "title": "" }, { "docid": "13beac4518bcbce5c0d68eb63e754474", "text": "Alternating direction methods are a common tool for general mathematical programming and optimization. These methods have become particularly important in the field of variational image processing, which frequently requires the minimization of non-differentiable objectives. This paper considers accelerated (i.e., fast) variants of two common alternating direction methods: the Alternating Direction Method of Multipliers (ADMM) and the Alternating Minimization Algorithm (AMA). The proposed acceleration is of the form first proposed by Nesterov for gradient descent methods. In the case that the objective function is strongly convex, global convergence bounds are provided for both classical and accelerated variants of the methods. Numerical examples are presented to demonstrate the superior performance of the fast methods for a wide variety of problems.", "title": "" }, { "docid": "97065954a10665dee95977168b9e6c60", "text": "We describe the current status of Pad++, a zooming graphical interface that we are exploring as an alternative to traditional window and icon-based approaches to interface design. We discuss the motivation for Pad++, describe the implementation, and present prototype applications. In addition, we introduce an informational physics strategy for interface design and briefly compare it with metaphor-based design strategies.", "title": "" }, { "docid": "25121ccd316cd2b9a31c7651a32f92ea", "text": "Chatbot has become an important solution to rapidly increasing customer care demands on social media in recent years. However, current work on chatbot for customer care ignores a key to impact user experience - tones. In this work, we create a novel tone-aware chatbot that generates toned responses to user requests on social media. We first conduct a formative research, in which the effects of tones are studied. Significant and various influences of different tones on user experience are uncovered in the study. With the knowledge of effects of tones, we design a deep learning based chatbot that takes tone information into account. We train our system on over 1.5 million real customer care conversations collected from Twitter. The evaluation reveals that our tone-aware chatbot generates as appropriate responses to user requests as human agents. More importantly, our chatbot is perceived to be even more empathetic than human agents.", "title": "" } ]
scidocsrr
5a7b8d50a7c9c5ad5e3fd04e59d0b3a8
Methods for Reconstructing Causal Networks from Observed Time-Series: Granger-Causality, Transfer Entropy, and Convergent Cross-Mapping
[ { "docid": "800cabf6fbdf06c1f8fc6b65f503e13e", "text": "An information theoretic measure is derived that quantifies the statistical coherence between systems evolving in time. The standard time delayed mutual information fails to distinguish information that is actually exchanged from shared information due to common history and input signals. In our new approach, these influences are excluded by appropriate conditioning of transition probabilities. The resulting transfer entropy is able to distinguish effectively driving and responding elements and to detect asymmetry in the interaction of subsystems.", "title": "" } ]
[ { "docid": "4d14e2a47d68b6113466b1e096c924ee", "text": "In this paper, we experimentally realized a steering antenna using a type of active metamaterial with tunable refractive index. The metamaterial is realized by periodically printed subwavelength metallic resonant patterns with embedded microwave varactors. The effective refractive index can be controlled by low direct-current (dc) bias voltage applied to the varactors. In-phase electromagnetic waves transmitting in different zones of such metamaterial slab experience different phase delays, and, consequently, the output direction of the transmitted wave can be steered with progressive phase shift along the interface. This antenna has a simple structure, is very easy to configure the beam direction, and has a low cost. Compared with conventional phased-array antennas, the radome approach has more flexibility to operate with different feeding antennas for various applications.", "title": "" }, { "docid": "82c0292aa7717aaef617927eb83e07bd", "text": "Deutsch, Feynman, and Manin viewed quantum computing as a kind of universal physical simulation procedure. Much of the writing about quantum Turing machines has shown how these machines can simulate an arbitrary unitary transformation on a finite number of qubits. This interesting problem has been addressed most famously in a paper by Deutsch, and later by Bernstein and Vazirani. Quantum Turing machines form a class closely related to deterministic and probabilistic Turing machines and one might hope to find a universal machine in this class. A universal machine is the basis of a notion of programmability. The extent to which universality has in fact been established by the pioneers in the field is examined and a key notion in theoretical computer science (universality) is scrutinised. In a forthcoming paper, the authors will also consider universality in the quantum gate model.", "title": "" }, { "docid": "6c10d03fa49109182c95c36debaf06cc", "text": "Visual versus near infrared (VIS-NIR) face image matching uses an NIR face image as the probe and conventional VIS face images as enrollment. It takes advantage of the NIR face technology in tackling illumination changes and low-light condition and can cater for more applications where the enrollment is done using VIS face images such as ID card photos. Existing VIS-NIR techniques assume that during classifier learning, the VIS images of each target people have their NIR counterparts. However, since corresponding VIS-NIR image pairs of the same people are not always available, which is often the case, so those methods cannot be applied. To address this problem, we propose a transductive method named transductive heterogeneous face matching (THFM) to adapt the VIS-NIR matching learned from training with available image pairs to all people in the target set. In addition, we propose a simple feature representation for effective VIS-NIR matching, which can be computed in three steps, namely Log-DoG filtering, local encoding, and uniform feature normalization, to reduce heterogeneities between VIS and NIR images. The transduction approach can reduce the domain difference due to heterogeneous data and learn the discriminative model for target people simultaneously. To the best of our knowledge, it is the first attempt to formulate the VIS-NIR matching using transduction to address the generalization problem for matching. 
Experimental results validate the effectiveness of our proposed method on the heterogeneous face biometric databases.", "title": "" }, { "docid": "604619dd5f23569eaff40eabc8e94f52", "text": "Understanding the causes and effects of species invasions is a priority in ecology and conservation biology. One of the crucial steps in evaluating the impact of invasive species is to map changes in their actual and potential distribution and relative abundance across a wide region over an appropriate time span. While direct and indirect remote sensing approaches have long been used to assess the invasion of plant species, the distribution of invasive animals is mainly based on indirect methods that rely on environmental proxies of conditions suitable for colonization by a particular species. The aim of this article is to review recent efforts in the predictive modelling of the spread of both plant and animal invasive species using remote sensing, and to stimulate debate on the potential use of remote sensing in biological invasion monitoring and forecasting. Specifically, the challenges and drawbacks of remote sensing techniques are discussed in relation to: i) developing species distribution models, and ii) studying life cycle changes and phenological variations. Finally, the paper addresses the open challenges and pitfalls of remote sensing for biological invasion studies including sensor characteristics, upscaling and downscaling in species distribution models, and uncertainty of results.", "title": "" }, { "docid": "6b1bee85de8d95896636bd4e13a69156", "text": "Intrinsically motivated spontaneous exploration is a key enabler of autonomous lifelong learning in human children. It allows them to discover and acquire large repertoires of skills through self-generation, self-selection, self-ordering and self-experimentation of learning goals. We present the unsupervised multi-goal reinforcement learning formal framework as well as an algorithmic approach called intrinsically motivated goal exploration processes (IMGEP) to enable similar properties of autonomous learning in machines. The IMGEP algorithmic architecture relies on several principles: 1) self-generation of goals as parameterized reinforcement learning problems; 2) selection of goals based on intrinsic rewards; 3) exploration with parameterized time-bounded policies and fast incremental goal-parameterized policy search; 4) systematic reuse of information acquired when targeting a goal for improving other goals. We present a particularly efficient form of IMGEP that uses a modular representation of goal spaces as well as intrinsic rewards based on learning progress. We show how IMGEPs automatically generate a learning curriculum within an experimental setup where a real humanoid robot can explore multiple spaces of goals with several hundred continuous dimensions. While no particular target goal is provided to the system beforehand, this curriculum allows the discovery of skills of increasing complexity, that act as stepping stone for learning more complex skills (like nested tool use). We show that learning several spaces of diverse problems can be more efficient for learning complex skills than only trying to directly learn these complex skills. 
We illustrate the computational efficiency of IMGEPs as these robotic experiments use a simple memory-based low-level policy representations and search algorithm, enabling the whole system to learn online and incrementally on a Raspberry Pi 3.", "title": "" }, { "docid": "8801d5a28a098e1879d60838c1c9f108", "text": "On-line photo sharing services allow users to share their touristic experiences. Tourists can publish photos of interesting locations or monuments visited, and they can also share comments, annotations, and even the GPS traces of their visits. By analyzing such data, it is possible to turn colorful photos into metadata-rich trajectories through the points of interest present in a city. In this paper we propose a novel algorithm for the interactive generation of personalized recommendations of touristic places of interest based on the knowledge mined from photo albums and Wikipedia. The distinguishing features of our approach are multiple. First, the underlying recommendation model is built fully automatically in an unsupervised way and it can be easily extended with heterogeneous sources of information. Moreover, recommendations are personalized according to the places previously visited by the user. Finally, such personalized recommendations can be generated very efficiently even on-line from a mobile device.", "title": "" }, { "docid": "1d8cd516cec4ef74d72fa283059bf269", "text": "Current high-quality object detection approaches use the same scheme: salience-based object proposal methods followed by post-classification using deep convolutional features. This spurred recent research in improving object proposal methods [18, 32, 15, 11, 2]. However, domain agnostic proposal generation has the principal drawback that the proposals come unranked or with very weak ranking, making it hard to trade-off quality for running time. Also, it raises the more fundamental question of whether high-quality proposal generation requires careful engineering or can be derived just from data alone. We demonstrate that learning-based proposal methods can effectively match the performance of hand-engineered methods while allowing for very efficient runtime-quality trade-offs. Using our new multi-scale convolutional MultiBox (MSC-MultiBox) approach, we substantially advance the state-of-the-art on the ILSVRC 2014 detection challenge data set, with 0.5 mAP for a single model and 0.52 mAP for an ensemble of two models. MSC-Multibox significantly improves the proposal quality over its predecessor Multibox [4] method: AP increases from 0.42 to 0.53 for the ILSVRC detection challenge. Finally, we demonstrate improved bounding-box recall compared to Multiscale Combinatorial Grouping [18] with less proposals on the Microsoft-COCO [14] data set.", "title": "" }, { "docid": "b658ff9f576136c12a14ebd9b8aff1d7", "text": "The user expectations for usability and personalization along with decreasing size of handheld devices challenge traditional keypad layout design. We have developed a method for on-line adaptation of a touch pad keyboard layout. The method starts from an original layout and monitors the usage of the keyboard by recording and analyzing the keystrokes. An on-line learning algorithm subtly moves the keys according to the spatial distribution of keystrokes. In consequence, the keyboard matches better to the users physical extensions and grasp of the device, and makes the physical trajectories during typing more comfortable. 
We present two implementations that apply different vector quantization algorithms to produce an adaptive keyboard with visual on-line feedback. Both qualitative and quantitative results show that the changes in the keyboard are consistent, and related to the user's handedness and hand extensions. The testees found the on-line personalization positive. The method can either be applied for on-line personalization of keyboards or for ergonomics research", "title": "" }, { "docid": "4aa9a5e6b1a3c69282edb61c951308d2", "text": "Grey parrots (Psittacus erithacus) solve various cognitive tasks and acquire and use English speech in ways that often resemble those of very young children. Given that the psittacine brain is organized very differently from that of mammals, these results have intriguing implications for the study and evolution of vocal learning, communication, and cognition. For 25 years, I have taught Grey parrots meaningful use of English speech (e.g., to label objects, colors, shapes, categories, quantities, and absence). Using this code, my oldest subject, Alex, exhibits cognitive capacities comparable to those of marine mammals, apes, and sometimes 4-year-old children (Pepperberg, 1999). Thus, his abilities are inferred not from operant tasks common in animal research, but from vocal responses to vocal questions; that is, he demonstrates intriguing communicative parallels with young humans, despite his evolutionary distance. I doubt I taught Alex and other parrots these abilities de novo; their achievements likely derive from existent cognitive and neurological architectures. My research therefore uses interspecies communication as an investigative tool to unveil avian communicative capacities and an avian perspective on the evolution of communication. SIGNIFICANCE OF INTERSPECIES COMMUNICATION Parrots’ vocal plasticity enables direct interspecies communication (Pepperberg, 1999). But why study their ability to use English rather than their natural system? The answer involves their existent cognitive architecture. I believe parrots acquire those elements of human communication that can be mapped or adapted to their own code. By observing what is or is not acquired, I uncover these elements and interpret the avian system. I believe parrots could not learn aspects of reference (e.g., labels for particular colors, object classes such as “apple”) unless their natural code had such referentiality. Although this manner of determining nonhuman referentiality is inferential, direct determination also has difficulties (see Cheney & Seyfarth, 1992). Moreover, pushing avian systems to see what input engenders exceptional learning (i.e., learning that does not necessarily occur during normal development—in this case, acquiring another species’ code) further elucidates learning processes: Because richer input is needed for a bird to learn another species’ code (allospecific acquisition) than for it to learn its own species’ code (conspecific learning) (Pepperberg, 1999), this line of research can show how and whether “nurture” modifies “nature” (e.g., alters innate predispositions toward conspecific learning), and thus uncover additional mechanisms for, and the extent of, communicative learning. Again, these mechanisms are likely part of existent cognitive architectures, not taught de novo. Interspecies communication also has practical applications. 
It is a tool that (a) directly states question content—animals need not determine both a query’s aim and the answer via trial and error; (b) exploits research showing that social animals may respond more readily and accurately within ecologically valid social contexts than in other situations; (c) facilitates data comparisons among species, including humans; (d) allows rigorous testing of the acquired communication code that avoids expectation cuing (i.e., subjects must choose responses from their entire repertoire; they cannot expect the answer to come from a subset of choices relevant only to the topic under question); and, most important, (e) is also an open, arbitrary, creative code with enormous signal variety, enabling animals to respond in novel, possibly innovative ways that demonstrate greater competence than operant paradigms’ required responses, and (f) thereby allows examination of the nature and extent of information animals perceive. Interspecies communication facilely demonstrates nonhumans’ inherent capacities and may enable complex learning (Pepperberg, 1999). HOW GREYS LEARN: PARALLELS WITH HUMANS My Greys’ learning sometimes parallels human processes, suggesting insights into how acquisition of complex communication may have evolved. Referential, contextually applicable (functional), and socially rich input allows parrots, like young children, to acquire communication skills effectively (Pepperberg, 1999). Reference is an utterance’s meaning—the relationship between labels and objects to which they refer. Thus, in my research, utterances have reference because the birds are rewarded by being given the objects they label. Context (function) involves the situation in which an utterance is used and effects of its use. The utterances also are functional because they initially are used—and responded to—as requests; this initial use of labels as requests gives birds a reason to learn sounds constituting English labels. Social interaction, which is integral to the research, accents environmental components, emphasizes common attributes—and possible underlying rules—of diverse actions, and allows continuous adjustment of input to learners’ levels. Interaction engages subjects directly, provides contextual explanations for actions, and demonstrates actions’ consequences. In this section, I describe the primary training technique, then experiments my students and I have conducted to determine which input elements are necessary and sufficient to engender learning. Model/Rival Training My model/rival (M/R) training system (background in Pepperberg, 1999) uses three-way social interactions among two humans and a parrot to demonstrate targeted vocal behavior. The parrot observes two humans handling one or more objects, then watches the humans interact: The trainer presents, and queries the human model about, the item (or multiple items) (e.g., “What’s here?” “What color?”) and praises the model and gives him or her the object (or objects) as a referential reward for answers that are correct. Incorrect responses (like the bird may make) are punished by scolding and temporarily removing the item (or items) from sight. Thus, the second human is a model for the parrot’s responses, is its rival for the trainer’s attention, and also illustrates the consequences of making an error: The model is asked to try again or talk more clearly if the response was (deliberately) incorrect or garbled, so the method demonstrates corrective feedback. 
The bird is also queried and initially rewarded for approximations to “correct” responses. As training progresses, the criteria for what constitutes a correct response become increasingly strict; thus, training is adjusted to the parrot’s level. Unlike other laboratories’ M/R procedures (see Pepperberg, 1999), ours interchanges the roles of trainer and model, and includes the parrot in interactions, to emphasize that one being is not always the questioner and the other the respondent, and that the procedure can effect environmental change. Role reversal also counteracts an earlier methodological problem: Birds whose trainers always maintained their respective roles responded only to the human questioner. Our birds, however, respond to, interact with, and learn from all humans. M/R training exclusively uses intrinsic reinforcers: To ensure the closest possible correlations of labels or concepts to be learned with their appropriate referents, we reward a bird for uttering “X” by giving the bird X (i.e., the object to which the label or concept refers). Earlier unsuccessful programs for teaching birds to communicate with humans used extrinsic rewards (Pepperberg, 1999): The reward was one food that neither related to, nor varied with, the label or concept being taught. Use of extrinsic rewards delays label and concept acquisition because it confounds the label of the targeted exemplar or concept with that of the food. My birds never receive extrinsic rewards. Because Alex sometimes fails to focus on targeted objects, we trained him to say, “I want X” (i.e., to separate labeling and requesting; see Pepperberg, 1999), in order to request the reward he wants. That is, if he identifies something correctly, his reward can be the right to request something more desirable than what he has identified. This procedure provides flexibility but maintains referentiality. Thus, to receive X after identifying Y, Alex must state, “I want X,” and trainers will not comply until the original identification task involving Y is completed. His labels therefore are true identifiers, not merely emotional requests. Adding “want” provides additional advantages: First, trainers can distinguish incorrect labeling from appeals for other items, particularly during testing, when birds unable to use “want” might misidentify objects not because they do not know the correct label but because they are asking for treats, and their performance might reflect a lack of accuracy unrelated to their actual competence. Second, birds may demonstrate low-level intentionality: Alex rarely accepts substitutes when requesting X, and continues his demands (see Pepperberg, 1999), thus showing that he truly intends to obtain X when he says “want X.” Eliminating Aspects of Input M/R training with Alex successfully demonstrated that reference, functionality, and social interaction during training enabled label and concept acquisition, but not which or how many of these elements were necessary, sufficient, or both. What would happen if some of these elements were lacking from the input? Answering that question required training and testing additional parrots, because Alex might cease learning after a change in training merely because there was a change, not necessarily because of the type of change. 
With 3 new naive Greys—Kyaaro, Alo, and Griffin—students and I performed seven sets of experiments (see Pepperberg, 1999; Pepperberg, Sandefer, Noel, & Ellsworth, 2000) to test the relative importance of reference, functionality, and social interaction in training. In the first set of experiments, we compared simultan", "title": "" }, { "docid": "8e92ade2f4096cbfabd51e018138c2f6", "text": "Recent results by Martin et al. (2014) showed in 3D SPH simulations that tilted discs in binary systems can be unstable to the development of global, damped Kozai–Lidov (KL) oscillations in which the discs exchange tilt for eccentricity. We investigate the linear stability of KL modes for tilted inviscid discs under the approximations that the disc eccentricity is small and the disc remains flat. By using 1D equations, we are able to probe regimes of large ratios of outer to inner disc edge radii that are realistic for binary systems of hundreds of AU separations and are not easily probed by multidimensional simulations. For order unity binary mass ratios, KL instability is possible for a window of disc aspect ratios H/r in the outer parts of a disc that roughly scale as (nb/n) 2 < ∼ H/r< ∼ nb/n, for binary orbital frequency nb and orbital frequency n at the disc outer edge. We present a framework for understanding the zones of instability based on the determination of branches of marginally unstable modes. In general, multiple growing eccentric KL modes can be present in a disc. Coplanar apsidal-nodal precession resonances delineate instability branches. We determine the range of tilt angles for unstable modes as a function of disc aspect ratio. Unlike the KL instability for free particles that involves a critical (minimum) tilt angle, disc instability is possible for any nonzero tilt angle depending on the disc aspect ratio.", "title": "" }, { "docid": "ae585aae554c5fbe4a18f7f2996b7e93", "text": "UNLABELLED\nCaloric restriction occurs when athletes attempt to reduce body fat or make weight. There is evidence that protein needs increase when athletes restrict calories or have low body fat.\n\n\nPURPOSE\nThe aims of this review were to evaluate the effects of dietary protein on body composition in energy-restricted resistance-trained athletes and to provide protein recommendations for these athletes.\n\n\nMETHODS\nDatabase searches were performed from earliest record to July 2013 using the terms protein, and intake, or diet, and weight, or train, or restrict, or energy, or strength, and athlete. Studies (N = 6) needed to use adult (≥ 18 yrs), energy-restricted, resistance-trained (> 6 months) humans of lower body fat (males ≤ 23% and females ≤ 35%) performing resistance training. Protein intake, fat free mass (FFM) and body fat had to be reported.\n\n\nRESULTS\nBody fat percentage decreased (0.5-6.6%) in all study groups (N = 13) and FFM decreased (0.3-2.7kg) in nine of 13. Six groups gained, did not lose, or lost nonsignificant amounts of FFM. Five out of these six groups were among the highest in body fat, lowest in caloric restriction, or underwent novel resistance training stimuli. 
However, the one group that was not high in body fat that underwent substantial caloric restriction, without novel training stimuli, consumed the highest protein intake out of all the groups in this review (2.5-2.6g/kg).\n\n\nCONCLUSIONS\nProtein needs for energy-restricted resistance-trained athletes are likely 2.3-3.1g/kg of FFM scaled upwards with severity of caloric restriction and leanness.", "title": "" }, { "docid": "f97ed9ef35355feffb1ebf4242d7f443", "text": "Moore’s law has allowed the microprocessor market to innovate at an astonishing rate. We believe microchip implants are the next frontier for the integrated circuit industry. Current health monitoring technologies are large, expensive, and consume significant power. By miniaturizing and reducing power, monitoring equipment can be implanted into the body and allow 24/7 health monitoring. We plan to implement a new transmitter topology, compressed sensing, which can be used for wireless communications with microchip implants. This paper focuses on the ADC used in the compressed sensing signal chain. Using the Cadence suite of tools and a 32/28nm process, we produced simulations of our compressed sensing Analog to Digital Converter to feed into a Digital Compression circuit. Our results indicate that a 12-bit, 20Ksample, 9.8nW Successive Approximation ADC is possible for diagnostic resolution (10 bits). By incorporating a hybrid-C2C DAC with differential floating voltage shields, it is possible to obtain 9.7 ENOB. Thus, we recommend this ADC for use in compressed sensing for biomedical purposes. Not only will it be useful in digital compressed sensing, but this can also be repurposed for use in analog compressed sensing.", "title": "" }, { "docid": "9a29bcb5ca21c33140a199763ab4bc5f", "text": "The Stadtpilot project aims at autonomous driving on Braunschweig's inner city ring road. For this purpose, an autonomous vehicle called “Leonie” has been developed. In October 2010, after two years of research, “Leonie's” abilities were presented in a public demonstration. This vehicle is one of the first worldwide to show the ability of driving autonomously in real urban traffic scenarios. This paper describes the legal issues and the homologation process for driving autonomously in public traffic in Braunschweig, Germany. It also dwells on the Safety Concept, the system architecture and current research activities.", "title": "" }, { "docid": "d60d64c0fe0c6f70ccb1b934915861c2", "text": "This paper presents a single-stage flyback power-factor-correction circuit with a variable boost inductance for high-brightness light-emitting-diode applications for the universal input voltage (90-270 Vrms). The proposed circuit overcomes the limitations of the conventional single-stage PFC flyback with a constant boost inductance, which cannot be designed to achieve a practical bulk-capacitor voltage level (i.e., less than 450 V) at high line while meeting the IEC 61000-3-2 Class C line current harmonic limits at low line. According to the proposed variable boost inductance method, the boost inductance is constant in the high-voltage range and it is reduced in the low-voltage range, resulting in discontinuous-conduction-mode operation and a low total harmonic distortion (THD) in both the high-voltage and low-voltage ranges. 
Measurements obtained on a 24-V/91-W experimental prototype are as follows: PF = 0.9873, THD = 12%, and efficiency = 88% at nominal low line (120 Vrms); and PF = 0.9474, THD = 10.39%, and efficiency = 91% at nominal high line (230 Vrms). The line current harmonics satisfy the IEC 61000-3-2 Class C limits with enough margin.", "title": "" }, { "docid": "fdbcf90ffeebf9aab41833df0fff23e6", "text": "(Under the direction of Anselmo Lastra) For image synthesis in computer graphics, two major approaches for representing a surface's appearance are texture mapping, which provides spatial detail, such as wallpaper, or wood grain; and the 4D bi-directional reflectance distribution function (BRDF) which provides angular detail, telling how light reflects off surfaces. I combine these two modes of variation to form the 6D spatial bi-directional reflectance distribution function (SBRDF). My compact SBRDF representation simply stores BRDF coefficients at each pixel of a map. I propose SBRDFs as a surface appearance representation for computer graphics and present a complete system for their use. I acquire SBRDFs of real surfaces using a device that simultaneously measures the BRDF of every point on a material. The system has the novel ability to measure anisotropy (direction of threads, scratches, or grain) uniquely at each surface point. I fit BRDF parameters using an efficient nonlinear optimization approach specific to BRDFs. SBRDFs can be rendered using graphics hardware. My approach yields significantly more detailed, general surface appearance than existing techniques for a competitive rendering cost. I also propose an SBRDF rendering method for global illumination using prefiltered environment maps. This improves on existing prefiltered environment map techniques by decoupling the BRDF from the environment maps, so a single set of maps may be used to illuminate the unique BRDFs at each surface point. I demonstrate my results using measured surfaces including gilded wallpaper, plant leaves, upholstery fabrics, wrinkled gift-wrapping paper and glossy book covers. iv To Tiffany, who has worked harder and sacrificed more for this than have I. ACKNOWLEDGMENTS I appreciate the time, guidance and example of Anselmo Lastra, my advisor. I'm grateful to Steve Molnar for being my mentor throughout graduate school. I'm grateful to the other members of my committee, Henry Fuchs, Gary Bishop, and Lars Nyland for helping and teaching me and creating an environment that allows research to be done successfully and pleasantly. I am grateful for the effort and collaboration of Ben Cloward, who masterfully modeled the Carolina Inn lobby, patiently worked with my software, and taught me much of how artists use computer graphics. I appreciate the collaboration of Wolfgang Heidrich, who worked hard on this project and helped me get up to speed on shading with graphics hardware. I'm thankful to Steve Westin, for patiently teaching me a great deal about surface appearance and light measurement. I'm grateful for …", "title": "" }, { "docid": "d5ddc141311afb6050a58be88303b577", "text": "Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In this work, we propose ShapeShifter, an attack that tackles the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. 
Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes with different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. ShapeShifter can generate adversarially perturbed stop signs that are consistently mis-detected by Faster RCNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.", "title": "" }, { "docid": "e632895c1ab1b994f64ef03260b91acb", "text": "The modified Brostrom procedure is commonly recommended for reconstruction of the anterior talofibular ligament (ATF) and calcaneofibular ligament (CF) with an advancement of the inferior retinaculum. However, some surgeons perform the modified Bostrom procedure with an semi-single ATF ligament reconstruction and advancement of the inferior retinaculum for simplicity. This study evaluated the initial stability of the modified Brostrom procedure and compared a two ligaments (ATF + CF) reconstruction group with a semi-single ligament (ATF) reconstruction group. Sixteen paired fresh frozen cadaveric ankle joints were used in this study. The ankle joint laxity was measured on the plane radiographs with 150 N anterior drawer force and 150 N varus stress force. The anterior displacement distances and varus tilt angles were measured before and after cutting the ATF and CF ligaments. A two ligaments (ATF + CF) reconstruction with an advancement of the inferior retinaculum was performed on eight left cadaveric ankles, and an semi-single ligament (ATF) reconstruction with an advancement of the inferior retinaculum was performed on eight right cadaveric ankles. The ankle instability was rechecked after surgery. The decreases in instability of the ankle after surgery were measured and the difference in the decrease was compared using a Mann–Whitney U test. The mean decreases in anterior displacement were 3.4 and 4.0 mm in the two ligaments reconstruction and semi-single ligament reconstruction groups, respectively. There was no significant difference between the two groups (P = 0.489). The mean decreases in the varus tilt angle in the two ligaments reconstruction and semi-single ligament reconstruction groups were 12.6° and 12.2°, respectively. There was no significant difference between the two groups (P = 0.399). In this cadaveric study, a substantial level of initial stability can be obtained using an anatomical reconstruction of the anterior talofibular ligament only and reinforcement with the inferior retinaculum. 
The modified Brostrom procedure with a semi-single ligament (Anterior talofibular ligament) reconstruction with an advancement of the inferior retinaculum can provide as much initial stability as the two ligaments (Anterior talofibular ligament and calcaneofibular ligament) reconstruction procedure.", "title": "" }, { "docid": "eaf08b7ea5592617fe88bc713c3e874b", "text": "In this paper we propose, implement and evaluate OpenSample: a low-latency, sampling-based network measurement platform targeted at building faster control loops for software-defined networks. OpenSample leverages sFlow packet sampling to provide near-real-time measurements of both network load and individual flows. While OpenSample is useful in any context, it is particularly useful in an SDN environment where a network controller can quickly take action based on the data it provides. Using sampling for network monitoring allows OpenSample to have a 100 millisecond control loop rather than the 1-5 second control loop of prior polling-based approaches. We implement OpenSample in the Floodlight Open Flow controller and evaluate it both in simulation and on a test bed comprised of commodity switches. When used to inform traffic engineering, OpenSample provides up to a 150% throughput improvement over both static equal-cost multi-path routing and a polling-based solution with a one second control loop.", "title": "" }, { "docid": "e3546095a5d0bb39755355c7a3acc875", "text": "We propose to achieve explainable neural machine translation (NMT) by changing the output representation to explain itself. We present a novel approach to NMT which generates the target sentence by monotonically walking through the source sentence. Word reordering is modeled by operations which allow setting markers in the target sentence and move a target-side write head between those markers. In contrast to many modern neural models, our system emits explicit word alignment information which is often crucial to practical machine translation as it improves explainability. Our technique can outperform a plain text system in terms of BLEU score under the recent Transformer architecture on JapaneseEnglish and Portuguese-English, and is within 0.5 BLEU difference on Spanish-English.", "title": "" }, { "docid": "818db2be19d63a64856909dee5d76081", "text": "Generating captions for images is a task that has recently received considerable attention. In this work we focus on caption generation for abstract scenes, or object layouts where the only information provided is a set of objects and their locations. We propose OBJ2TEXT, a sequence-tosequence model that encodes a set of objects and their locations as an input sequence using an LSTM network, and decodes this representation using an LSTM language model. We show that our model, despite encoding object layouts as a sequence, can represent spatial relationships between objects, and generate descriptions that are globally coherent and semantically relevant. We test our approach in a task of object-layout captioning by using only object annotations as inputs. We additionally show that our model, combined with a state-of-the-art object detector, improves an image captioning model from 0.863 to 0.950 (CIDEr score) in the test benchmark of the standard MS-COCO Captioning task.", "title": "" } ]
scidocsrr
15d945238eaeba580d8063e3075ce2d4
A Cognitive Model for the Representation and Acquisition of Verb Selectional Preferences
[ { "docid": "66451aa5a41ec7f9246d749c0983fa60", "text": "A new method for automatically acquiring case frame patterns from large corpora is proposed. In particular, the problem of generalizing values of a case frame slot for a verb is viewed as that of estimating a conditional probability distribution over a partition of words, and a new generalization method based on the Minimum Description Length (MDL) principle is proposed. In order to assist with efficiency, the proposed method makes use of an existing thesaurus and restricts its attention to those partitions that are present as \"cuts\" in the thesaurus tree, thus reducing the generalization problem to that of estimating a \"tree cut model\" of the thesaurus tree. An efficient algorithm is given, which provably obtains the optimal tree cut model for the given frequency data of a case slot, in the sense of MDL. Case frame patterns obtained by the method were used to resolve PP-attachment ambiguity. Experimental results indicate that the proposed method improves upon or is at least comparable with existing methods.", "title": "" } ]
[ { "docid": "66cd5501be682957a2ee10ce91136c01", "text": "The use of inaccurate or outdated database statistics by the query optimizer in a relational DBMS often results in a poor choice of query execution plans and hence unacceptably long query processing times. Configuration and maintenance of these statistics has traditionally been a time-consuming manual operation, requiring that the database administrator (DBA) continually monitor query performance and data changes in order to determine when to refresh the statistics values and when and how to adjust the set of statistics that the DBMS maintains. In this paper we describe the new Automated Statistics Collection (ASC) component of IBM® DB2® Universal DatabaseTM (DB2 UDB). This autonomic technology frees the DBA from the tedious task of manually supervising the collection and maintenance of database statistics. ASC monitors both the update-delete-insert (UDI) activities on the data as well as query feedback (QF), i.e., the results of the queries that are executed on the data. ASC uses these two sources of information to automatically decide which statistics to collect and when to collect them. This combination of UDI-driven and QF-driven autonomic processes ensures that the system can handle unforeseen queries while also ensuring good performance for frequent and important queries. We present the basic concepts, architecture, and key implementation details of ASC in DB2 UDB, and present a case study showing how the use of ASC can speed up a query workload by orders of magnitude without requiring any DBA intervention.", "title": "" }, { "docid": "eb30c6946e802086ac6de5848897a648", "text": "To determine how age of acquisition influences perception of second-language speech, the Speech Perception in Noise (SPIN) test was administered to native Mexican-Spanish-speaking listeners who learned fluent English before age 6 (early bilinguals) or after age 14 (late bilinguals) and monolingual American-English speakers (monolinguals). Results show that the levels of noise at which the speech was intelligible were significantly higher and the benefit from context was significantly greater for monolinguals and early bilinguals than for late bilinguals. These findings indicate that learning a second language at an early age is important for the acquisition of efficient high-level processing of it, at least in the presence of noise.", "title": "" }, { "docid": "3bd94d483a4d3934982d60284a90f4c5", "text": "Internet addiction is an increasing concern among young adults. Self-presentational theory posits that the Internet offers a context in which individuals are able to control their image. Little is known about body image and eating concerns among pathological Internet users. The aim of this study was to explore the association between Internet addiction symptoms, body image esteem, body image avoidance, and disordered eating. A sample of 392 French young adults (68 percent women) completed an online questionnaire assessing time spent online, Internet addiction symptoms, disordered eating, and body image avoidance. Fourteen men (11 percent) and 26 women (9.7 percent) reported Internet addiction. Body image avoidance was associated with Internet addiction symptoms among both genders. Controlling for body-mass index, Internet addiction symptoms, and body image avoidance were both significant predictors of disordered eating among women. 
These findings support the self-presentational theory of Internet addiction and suggest that body image avoidance is an important factor.", "title": "" }, { "docid": "f4db297c70b1aba64ce3ed17b0837859", "text": "Despite the success of the automatic speech recognition framework in its own application field, its adaptation to the problem of acoustic event detection has resulted in limited success. In this paper, instead of treating the problem similar to the segmentation and classification tasks in speech recognition, we pose it as a regression task and propose an approach based on random forest regression. Furthermore, event localization in time can be efficiently handled as a joint problem. We first decompose the training audio signals into multiple interleaved superframes which are annotated with the corresponding event class labels and their displacements to the temporal onsets and offsets of the events. For a specific event category, a random-forest regression model is learned using the displacement information. Given an unseen superframe, the learned regressor will output the continuous estimates of the onset and offset locations of the events. To deal with multiple event categories, prior to the category-specific regression phase, a superframe-wise recognition phase is performed to reject the background superframes and to classify the event superframes into different event categories. While jointly posing event detection and localization as a regression problem is novel, the superior performance on two databases ITC-Irst and UPC-TALP demonstrates the efficiency and potential of the proposed approach.", "title": "" }, { "docid": "47785d2cbbc5456c0a2c32c329498425", "text": "Are there important cyclical fluctuations in bond market premiums and, if so, with what macroeconomic aggregates do these premiums vary? We use the methodology of dynamic factor analysis for large datasets to investigate possible empirical linkages between forecastable variation in excess bond returns and macroeconomic fundamentals. We find that “real” and “inflation” factors have important forecasting power for future excess returns on U.S. government bonds, above and beyond the predictive power contained in forward rates and yield spreads. This behavior is ruled out by commonly employed affine term structure models where the forecastability of bond returns and bond yields is completely summarized by the cross-section of yields or forward rates. An important implication of these findings is that the cyclical behavior of estimated risk premia in both returns and long-term yields depends importantly on whether the information in macroeconomic factors is included in forecasts of excess bond returns. Without the macro factors, risk premia appear virtually acyclical, whereas with the estimated factors risk premia have a marked countercyclical component, consistent with theories that imply investors must be compensated for risks associated with macroeconomic activity. ( JEL E0, E4, G10, G12)", "title": "" }, { "docid": "8dbb1906440f8a2a2a0ddf51527bb891", "text": "Recent studies have shown that people prefer to age in their familiar environments, thus guiding designers to provide a safe and functionally appropriate environment for ageing people, regardless of their physical conditions or limitations. Therefore, a participatory design model is proposed where human beings can improve their quality of life by promoting independence, as well as safety, useability and attractiveness of the residence. 
Brainstorming, scenario building, unstructured interviews, sketching and videotaping are used as techniques in the participatory design sessions. Quality deployment matrices are employed to find the relationships between the elderly user's requirements and design specifications. A case study was devised to apply and test the conceptual model phase of the proposed model.", "title": "" }, { "docid": "caf866341ad9f74b1ac1dc8572f6e95c", "text": "One important but often overlooked aspect of human contexts of ubiquitous computing environment is human’s emotional status. And, there are no realistic and robust humancentric contents services so far, because there are few considers about combining context awareness computing with wearable computing for improving suitability of contents to each user’s needs. In this paper, we discuss combining context awareness computing with wearable computing to develop more effective personalized services. And we propose new algorithms to develop efficiently personalized emotion based content service system.", "title": "" }, { "docid": "078843d5dbede66b6148c4d0d269b176", "text": "A randomized control trial was performed to evaluate the effectiveness and safety of absorbable polymeric clips for appendicular stump closure in laparoscopic appendectomy (LA). Patients were randomly enrolled into an experimental group (ligation of the appendicular base with Lapro-Clips, L-C group) or control group (ligation of the appendicular base with Hem-o-lok Clips, H-C group). We identified 1,100 patients who underwent LA between April 1, 2012 and February 3, 2015. Overall, 99 patients (9.0%, 99/1,100) developed a complication following LA (47 [8.5%] in the L-C group and 52 [9.5%] in the H-C group (P = 0.598). No statistically significant differences were observed in intra-abdominal abscesses, stump leakage, superficial wound infections, post-operative abdominal pain, overall adverse events, or the duration of the operations and hospital stays between the groups (all p > 0.05). Adverse risk factors associated with the use of absorbable clips in LA included body mass index ≥ 27.5 kg/m2, diabetes, American Society of Anesthesiologists degree ≥ III, gangrenous appendicitis, severe inflammation of the appendix base, appendix perforation, and the absence of peritoneal drainage. The results indicate that the Lapro-Clip is a safe and effective device for closing the appendicular stump in LA in select patients with appendicitis.", "title": "" }, { "docid": "86d1b98d64037a2ce992cdbfa4b908b4", "text": "This letter studies the transmission characteristics of coplanar waveguides (CPWs) loaded with single-layer S-shaped split-ring resonators (S-SRRs) for the first time. Two structures are analyzed: 1) a CPW simply loaded with an S-SRR, and 2) a CPW loaded with an S-SRR and a series gap. The former exhibits a stopband functionality related to the resonance of the S-SRR excited by the contra-directional magnetic fluxes through the two connected resonator loops; the latter is useful for the implementation of compact bandpass filters. In both cases, a lumped-element equivalent circuit model is proposed with an unequivocal physical interpretation of the circuit elements. These circuits are then validated by comparing the circuit response with extracted parameters to full-wave electromagnetic simulations. The last part of the letter illustrates application of the S-SRR/gap-loaded CPW unit cell to the design of a bandpass filter. 
The resulting filter is very compact and exhibits competitive performance.", "title": "" }, { "docid": "c8e83a1eb803d9e091c2cb3418577aa7", "text": "We review the literature on pathological narcissism and narcissistic personality disorder (NPD) and describe a significant criterion problem related to four inconsistencies in phenotypic descriptions and taxonomic models across clinical theory, research, and practice; psychiatric diagnosis; and social/personality psychology. This impedes scientific synthesis, weakens narcissism's nomological net, and contributes to a discrepancy between low prevalence rates of NPD and higher rates of practitioner-diagnosed pathological narcissism, along with an enormous clinical literature on narcissistic disturbances. Criterion issues must be resolved, including clarification of the nature of normal and pathological narcissism, incorporation of the two broad phenotypic themes of narcissistic grandiosity and narcissistic vulnerability into revised diagnostic criteria and assessment instruments, elimination of references to overt and covert narcissism that reify these modes of expression as distinct narcissistic types, and determination of the appropriate structure for pathological narcissism. Implications for the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders and the science of personality disorders are presented.", "title": "" }, { "docid": "ab4abd9033f87e08656f4363499bc09c", "text": "It is well known that, for most datasets, the use of large-size minibatches for Stochastic Gradient Descent (SGD) typically leads to slow convergence and poor generalization. On the other hand, large minibatches are of great practical interest as they allow for a better exploitation of modern GPUs. Previous literature on the subject concentrated on how to adjust the main SGD parameters (in particular, the learning rate) when using large minibatches. In this work we introduce an additional feature, that we call minibatch persistency, that consists in reusing the same minibatch for K consecutive SGD iterations. The computational conjecture here is that a large minibatch contains a significant sample of the training set, so one can afford to slightly overfitting it without worsening generalization too much. The approach is intended to speedup SGD convergence, and also has the advantage of reducing the overhead related to data loading on the internal GPU memory. We present computational results on CIFAR-10 with an AlexNet architecture, showing that even small persistency values (K = 2 or 5) already lead to a significantly faster convergence and to a comparable (or even better) generalization than the standard “disposable minibatch” approach (K = 1), in particular when large minibatches are used. The lesson learned is that minibatch persistency can be a simple yet effective way to deal with large minibatches.", "title": "" }, { "docid": "b80df19e67d2bbaabf4da18d7b5af4e2", "text": "This paper presents a data-driven approach for automatically generating cartoon faces in different styles from a given portrait image. Our stylization pipeline consists of two steps: an offline analysis step to learn about how to select and compose facial components from the databases; a runtime synthesis step to generate the cartoon face by assembling parts from a database of stylized facial components. 
We propose an optimization framework that, for a given artistic style, simultaneously considers the desired image-cartoon relationships of the facial components and a proper adjustment of the image composition. We measure the similarity between facial components of the input image and our cartoon database via image feature matching, and introduce a probabilistic framework for modeling the relationships between cartoon facial components. We incorporate prior knowledge about image-cartoon relationships and the optimal composition of facial components extracted from a set of cartoon faces to maintain a natural, consistent, and attractive look of the results. We demonstrate generality and robustness of our approach by applying it to a variety of portrait images and compare our output with stylized results created by artists via a comprehensive user study.", "title": "" }, { "docid": "da979af4e34855b9be1ce906acdd16e9", "text": "Learning analytics is the analysis of electronic learning data which allows teachers, course designers and administrators of virtual learning environments to search for unobserved patterns and underlying information in learning processes. The main aim of learning analytics is to improve learning outcomes and the overall learning process in electronic learning virtual classrooms and computer-supported education. The most basic unit of learning data in virtual learning environments for learning analytics is the interaction, but there is no consensus yet on which interactions are relevant for effective learning. Drawing upon extant literature, this research defines three system-independent classifications of interactions and evaluates the relation of their components with academic performance across two different learning modalities: virtual learning environment (VLE) supported face-to-face (F2F) and online learning. In order to do so, we performed an empirical study with data from six online and two VLE-supported F2F courses. Data extraction and analysis required the development of an ad hoc tool based on the proposed interaction classification. The main finding from this research is that, for each classification, there is a relation between some type of interactions and academic performance in online courses, whereas this relation is non-significant in the case of VLE-supported F2F courses. Implications for theory and practice are discussed.", "title": "" }, { "docid": "564ec6a2d5748afc83592ac0371a3ead", "text": "Fine-grained vehicle classification is a challenging task due to the subtle differences between vehicle classes. Several successful approaches to fine-grained image classification rely on part-based models, where the image is classified according to discriminative object parts. Such approaches require however that parts in the training images be manually annotated, a labor-intensive process. We propose a convolutional architecture realizing a transform network capable of discovering the most discriminative parts of a vehicle at multiple scales. We experimentally show that our architecture outperforms a baseline reference if trained on class labels only, and performs closely to a reference based on a part-model if trained on loose vehicle localization bounding boxes.", "title": "" }, { "docid": "714843ca4a3c99bfc95e89e4ff82aeb1", "text": "The development of new technologies for mapping structural and functional brain connectivity has led to the creation of comprehensive network maps of neuronal circuits and systems. 
The architecture of these brain networks can be examined and analyzed with a large variety of graph theory tools. Methods for detecting modules, or network communities, are of particular interest because they uncover major building blocks or subnetworks that are particularly densely connected, often corresponding to specialized functional components. A large number of methods for community detection have become available and are now widely applied in network neuroscience. This article first surveys a number of these methods, with an emphasis on their advantages and shortcomings; then it summarizes major findings on the existence of modules in both structural and functional brain networks and briefly considers their potential functional roles in brain evolution, wiring minimization, and the emergence of functional specialization and complex dynamics.", "title": "" }, { "docid": "90897878038ac7cd3a51fdfa3397ce9f", "text": "A fundamental operation in many vision tasks, including motion understanding, stereopsis, visual odometry, or invariant recognition, is establishing correspondences between images or between images and data from other modalities. We present an analysis of the role that multiplicative interactions play in learning such correspondences, and we show how learning and inferring relationships between images can be viewed as detecting rotations in the eigenspaces shared among a set of orthogonal matrices. We review a variety of recent multiplicative sparse coding methods in light of this observation. We also review how the squaring operation performed by energy models and by models of complex cells can be thought of as a way to implement multiplicative interactions. This suggests that the main utility of including complex cells in computational models of vision may be that they can encode relations not invariances.", "title": "" }, { "docid": "34d8b9fa5159e161ee0050900be4fa62", "text": "Singular value decomposition (SVD), together with the expectation-maximization (EM) procedure, can be used to find a low-dimension model that maximizes the log-likelihood of observed ratings in recommendation systems. However, the computational cost of this approach is a major concern, since each iteration of the EM algorithm requires a new SVD computation. We present a novel algorithm that incorporates SVD approximation into the EM procedure to reduce the overall computational cost while maintaining accurate predictions. Furthermore, we propose a new framework for collaborating filtering in distributed recommendation systems that allows users to maintain their own rating profiles for privacy. A server periodically collects aggregate information from those users that are online to provide predictions for all users. Both theoretical analysis and experimental results show that this framework is effective and achieves almost the same prediction performance as that of centralized systems.", "title": "" }, { "docid": "d161ab557edb4268a0ebc606bb9dbcb6", "text": "Recommender systems play an important role in reducing the negative impact of information overload on those websites where users have the possibility of voting for their preferences on Ítems. The most normal technique for dealing with the recommendation mechanism is to use collaborative filtering, in which it is essential to discover the most similar users to whom you desire to make recommendations. 
The hypothesis of this paper is that the results obtained by applying traditional similarity measures can be improved by taking contextual information, drawn from the entire body of users, and using it to calculate the singularity which exists, for each item, in the votes cast by each pair of users that you wish to compare. As such, the greater the measure of singularity result between the votes cast by two given users, the greater the impact this will have on the similarity. The results, tested on the Movielens, Netflix and FilmAffinity databases, corroborate the excellent behaviour of the singularity measure proposed.", "title": "" }, { "docid": "83060ef5605b19c14d8b0f41cbd61de5", "text": "We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speed up. In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time. Specifically, we show that algorithms that fit the Statistical Query model [15] can be written in a certain \"summation form,\" which allows them to be easily parallelized on multicore computers. We adapt Google's map-reduce [7] paradigm to demonstrate this parallel speed up technique on a variety of learning algorithms including locally weighted linear regression (LWLR), k-means, logistic regression (LR), naive Bayes (NB), SVM, ICA, PCA, gaussian discriminant analysis (GDA), EM, and backpropagation (NN). Our experimental results show basically linear speedup with an increasing number of processors.", "title": "" } ]
scidocsrr
155777b9568aa560cf4167a14c89cb13
Probabilistic Relations between Words: Evidence from Reduction in Lexical Production
[ { "docid": "187595fb12a5ca3bd665ffbbc9f47465", "text": "In order to acquire a lexicon, young children must segment speech into words, even though most words are unfamiliar to them. This is a non-trivial task because speech lacks any acoustic analog of the blank spaces between printed words. Two sources of information that might be useful for this task are distributional regularity and phonotactic constraints. Informally, distributional regularity refers to the intuition that sound sequences that occur frequently and in a variety of contexts are better candidates for the lexicon than those that occur rarely or in few contexts. We express that intuition formally by a class of functions called DR functions. We then put forth three hypotheses: First, that children segment using DR functions. Second, that they exploit phonotactic constraints on the possible pronunciations of words in their language. Specifically, they exploit both the requirement that every word must have a vowel and the constraints that languages impose on word-initial and word-final consonant clusters. Third, that children learn which word-boundary clusters are permitted in their language by assuming that all permissible word-boundary clusters will eventually occur at utterance boundaries. Using computational simulation, we investigate the effectiveness of these strategies for segmenting broad phonetic transcripts of child-directed English. The results show that DR functions and phonotactic constraints can be used to significantly improve segmentation. Further, the contributions of DR functions and phonotactic constraints are largely independent, so using both yields better segmentation than using either one alone. Finally, learning the permissible word-boundary clusters from utterance boundaries does not degrade segmentation performance.", "title": "" } ]
[ { "docid": "ca7269b97464c9b78aa0cb6727926e28", "text": "This paper argues that there has not been enough discussion in the field of applications of Gaussian Process for the fast moving consumer goods industry. Yet, this technique can be important as it e.g., can provide automatic feature relevance determination and the posterior mean can unlock insights on the data. Significant challenges are the large size and high dimensionality of commercial data at a point of sale. The study reviews approaches in the Gaussian Processes modeling for large data sets, evaluates their performance on commercial sales and shows value of this type of models as a decision-making tool for management.", "title": "" }, { "docid": "def621d47a8ead24754b1eebe590314a", "text": "Existing social-aware routing protocols for packet switched networks make use of the information about the social structure of the network deduced by state information of nodes (e.g., history of past encounters) to optimize routing. Although these approaches are shown to have superior performance to social-oblivious, stateless routing protocols (BinarySW, Epidemic), the improvement comes at the cost of considerable storage overhead required on the nodes. In this paper we present SANE, the first routing mechanism that combines the advantages of both social-aware and stateless approaches. SANE is based on the observation - that we validate on a real-world trace - that individuals with similar interests tend to meet more often. In SANE, individuals (network members) are characterized by their interest profile, a compact representation of their interests. By implementing a simple routing rule based on interest profile similarity, SANE is free of network state information, thus overcoming the storage capacity problem with existing social-aware approaches. Through thorough experiments, we show the superiority of SANE over existing approaches, both stateful, social-aware and stateless, social-oblivious. We discuss the statelessness of our approach in the supplementary file, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TPDS.2014.2307857, of this manuscript. Our interest-based approach easily enables innovative networking services, such as interest-casting. An interest-casting protocol is also introduced in this paper, and evaluated through experiments based on both real-world and synthetic mobility traces.", "title": "" }, { "docid": "ebaedd43e151f13d1d4d779284af389d", "text": "This paper presents the state of art techniques in recommender systems (RS). The various techniques are diagrammatically illustrated which on one hand helps a naïve researcher in this field to accommodate the on-going researches and establish a strong base, on the other hand it focuses on different categories of the recommender systems with deep technical discussions. The review studies on RS are highlighted which helps in understanding the previous review works and their directions. 8 different main categories of recommender techniques and 19 sub categories have been identified and stated. Further, soft computing approach for recommendation is emphasized which have not been well studied earlier. The major problems of the existing area is reviewed and presented from different perspectives. 
However, solutions to these issues are rarely discussed in the previous works, in this study future direction for possible solutions are also addressed.", "title": "" }, { "docid": "43ee3d818b528081aadf6abdc23650fa", "text": "Cloud computing has become an increasingly important research topic given the strong evolution and migration of many network services to such computational environment. The problem that arises is related with efficiency management and utilization of the large amounts of computing resources. This paper begins with a brief retrospect of traditional scheduling, followed by a detailed review of metaheuristic algorithms for solving the scheduling problems by placing them in a unified framework. Armed with these two technologies, this paper surveys the most recent literature about metaheuristic scheduling solutions for cloud. In addition to applications using metaheuristics, some important issues and open questions are presented for the reference of future researches on scheduling for cloud.", "title": "" }, { "docid": "8b1b0ee79538a1f445636b0798a0c7ca", "text": "Much of the current activity in the area of intelligent vehicle-highway systems (IVHS) focuses on one simple objective: to collect more data. Clearly, improvements in sensor technology and communication systems will allow transportation agencies to more closely monitor the condition of the surface transportation system. However, monitoring alone cannot improve the safety or efficiency of the system. It is imperative that surveillance data be used to manage the system in a proactive rather than a reactive manner. 'Proactive traffic management will require the ability to predict traffic conditions. Previous predictive modeling approaches can be grouped into three categories: (a) historical, data-based algorithms; (b) time-series models; and (c) simulations. A relatively new mathematical model, the neural network, offers an attractive alternative because neural networks can model undefined, complex nonlinear surfaces. In a comparison of a backpropagation neural network model with the more traditional approaches of an historical, data-based algorithm and a time-series model, the backpropagation model· was clearly superior, although all three models did an adequate job of predicting future traffic volumes. The backpropagation model was more responsive to dynamic conditions than the historical, data-based algorithm, and it did not experience the lag and overprediction characteristics of the time-series model. Given these advantages and the backpropagation model's ability to run in a parallel computing environment, it appears that such neural network prediction models hold considerable potential for use in real-time IVHS applications.", "title": "" }, { "docid": "4688caf6a80463579f293b2b762da5b5", "text": "To accelerate the implementation of network functions/middle boxes and reduce the deployment cost, recently, the concept of network function virtualization (NFV) has emerged and become a topic of much interest attracting the attention of researchers from both industry and academia. Unlike the traditional implementation of network functions, a software-oriented approach for virtual network functions (VNFs) creates more flexible and dynamic network services to meet a more diversified demand. 
Software-oriented network functions bring along a series of research challenges, such as VNF management and orchestration, service chaining, VNF scheduling for low latency and efficient virtual network resource allocation with NFV infrastructure, among others. In this paper, we study the VNF scheduling problem and the corresponding resource optimization solutions. Here, the VNF scheduling problem is defined as a series of scheduling decisions for network services on network functions and activating the various VNFs to process the arriving traffic. We consider VNF transmission and processing delays and formulate the joint problem of VNF scheduling and traffic steering as a mixed integer linear program. Our objective is to minimize the makespan/latency of the overall VNFs' schedule. Reducing the scheduling latency enables cloud operators to service (and admit) more customers, and cater to services with stringent delay requirements, thereby increasing operators' revenues. Owing to the complexity of the problem, we develop a genetic algorithm-based method for solving the problem efficiently. Finally, the effectiveness of our heuristic algorithm is verified through numerical evaluation. We show that dynamically adjusting the bandwidths on virtual links connecting virtual machines, hosting the network functions, reduces the schedule makespan by 15%-20% in the simulated scenarios.", "title": "" }, { "docid": "6bc2837d4d1da3344f901a6d7d8502b5", "text": "Many researchers and professionals have reported nonsubstance addiction to online entertainments in adolescents. However, very few scales have been designed to assess problem Internet use in this population, in spite of their high exposure and obvious vulnerability. The aim of this study was to review the currently available scales for assessing problematic Internet use and to validate a new scale of this kind for use, specifically in this age group, the Problematic Internet Entertainment Use Scale for Adolescents. The research was carried out in Spain in a gender-balanced sample of 1131 high school students aged between 12 and 18 years. Psychometric analyses showed the scale to be unidimensional, with excellent internal consistency (Cronbach's alpha of 0.92), good construct validity, and positive associations with alternative measures of maladaptive Internet use. This self-administered scale can rapidly measure the presence of symptoms of behavioral addiction to online videogames and social networking sites, as well as their degree of severity. The results estimate the prevalence of this problematic behavior in Spanish adolescents to be around 5 percent.", "title": "" }, { "docid": "95afd1d83b5641a7dff782588348d2ec", "text": "Intensive repetitive therapy improves function and quality of life for stroke patients. Intense therapies to overcome upper extremity impairment are beneficial, however, they are expensive because, in part, they rely on individualized interaction between the patient and rehabilitation specialist. The development of a pneumatic muscle driven hand therapy device, the Mentor/spl trade/, reinforces the need for volitional activation of joint movement while concurrently offering knowledge of results about range of motion, muscle activity or resistance to movement. 
The device is well tolerated and has received favorable comments from stroke survivors, their caregivers, and therapists.", "title": "" }, { "docid": "f767e0a9711522b06b8d023453f42f3a", "text": "A novel low-cost method for generating circular polarization in a dielectric resonator antenna is proposed. The antenna comprises four rectangular dielectric layers, each one being rotated by an angle of 30 ° relative to its adjacent layers. Utilizing such an approach has provided a circular polarization over a bandwidth of 6% from 9.55 to 10.15 GHz. This has been achieved in conjunction with a 21% impedance-matching bandwidth over the same frequency range. Also, the radiation efficiency of the proposed circularly polarized dielectric resonator antenna is 93% in this frequency band of operation", "title": "" }, { "docid": "54a06cb39007b18833f191aeb7c600d7", "text": "Mobile ad-hoc networks (MANETs) and wireless sensor networks (WSNs) have gained remarkable appreciation and technological development over the last few years. Despite ease of deployment, tremendous applications and significant advantages, security has always been a challenging issue due to the nature of environments in which nodes operate. Nodes’ physical capture, malicious or selfish behavior cannot be detected by traditional security schemes. Trust and reputation based approaches have gained global recognition in providing additional means of security for decision making in sensor and ad-hoc networks. This paper provides an extensive literature review of trust and reputation based models both in sensor and ad-hoc networks. Based on the mechanism of trust establishment, we categorize the state-of-the-art into two groups namely node-centric trust models and system-centric trust models. Based on trust evidence, initialization, computation, propagation and weight assignments, we evaluate the efficacy of the existing schemes. Finally, we conclude our discussion with identification of some unresolved issues in pursuit of trust and reputation management.", "title": "" }, { "docid": "81919bc432dd70ed3e48a0122d91b9e4", "text": "Artemisinin resistance in Plasmodium falciparum has emerged as a major threat for malaria control and elimination worldwide. Mutations in the Kelch propeller domain of PfK13 are the only known molecular markers for artemisinin resistance in this parasite. Over 100 non-synonymous mutations have been identified in PfK13 from various malaria endemic regions. This study aimed to investigate the genetic diversity of PvK12, the Plasmodium vivax ortholog of PfK13, in parasite populations from Southeast Asia, where artemisinin resistance in P. falciparum has emerged. The PvK12 sequences in 120 P. vivax isolates collected from Thailand (22), Myanmar (32) and China (66) between 2004 and 2008 were obtained and 353 PvK12 sequences from worldwide populations were retrieved for further analysis. These PvK12 sequences revealed a very low level of genetic diversity (π = 0.00003) with only three single nucleotide polymorphisms (SNPs). Of these three SNPs, only G581R is nonsynonymous. The synonymous mutation S88S is present in 3% (1/32) of the Myanmar samples, while G704G and G581R are present in 1.5% (1/66) and 3% (2/66) of the samples from China, respectively. None of the mutations observed in the P. vivax samples were associated with artemisinin resistance in P. falciparum. Furthermore, analysis of 473 PvK12 sequences from twelve worldwide P. 
vivax populations confirmed the very limited polymorphism in this gene and detected only five distinct haplotypes. The PvK12 sequences from global P. vivax populations displayed very limited genetic diversity indicating low levels of baseline polymorphisms of PvK12 in these areas.", "title": "" }, { "docid": "8582c4a040e4dec8fd141b00eaa45898", "text": "Emerging airborne networks require domainspecific routing protocols to cope with the challenges faced by the highly-dynamic aeronautical environment. We present an ns-3 based performance comparison of the AeroRP protocol with conventional MANET routing protocols. To simulate a highly-dynamic airborne network, accurate mobility models are needed for the physical movement of nodes. The fundamental problem with many synthetic mobility models is their random, memoryless behavior. Airborne ad hoc networks require a flexible memory-based 3-dimensional mobility model. Therefore, we have implemented a 3-dimensional Gauss-Markov mobility model in ns-3 that appears to be more realistic than memoryless models such as random waypoint and random walk. Using this model, we are able to simulate the airborne networking environment with greater realism than was previously possible and show that AeroRP has several advantages over other MANET routing protocols.", "title": "" }, { "docid": "cc58f5adcf4cb0aa1feac0ef96c452b5", "text": "Machine-learning algorithms have shown outstanding image recognition/classification performance for computer vision applications. However, the compute and energy requirement for implementing such classifier models for large-scale problems is quite high. In this paper, we propose feature driven selective classification (FALCON) inspired by the biological visual attention mechanism in the brain to optimize the energy-efficiency of machine-learning classifiers. We use the consensus in the characteristic features (color/texture) across images in a dataset to decompose the original classification problem and construct a tree of classifiers (nodes) with a generic-to-specific transition in the classification hierarchy. The initial nodes of the tree separate the instances based on feature information and selectively enable the latter nodes to perform object specific classification. The proposed methodology allows selective activation of only those branches and nodes of the classification tree that are relevant to the input while keeping the remaining nodes idle. Additionally, we propose a programmable and scalable neuromorphic engine (NeuE) that utilizes arrays of specialized neural computational elements to execute the FALCON-based classifier models for diverse datasets. The structure of FALCON facilitates the reuse of nodes while scaling up from small classification problems to larger ones thus allowing us to construct classifier implementations that are significantly more efficient. We evaluate our approach for a 12-object classification task on the Caltech101 dataset and ten-object task on CIFAR-10 dataset by constructing FALCON models on the NeuE platform in 45-nm technology. Our results demonstrate up to $3.66\\boldsymbol \\times $ improvement in energy-efficiency for no loss in output quality, and even higher improvements of up to $5.91\\boldsymbol \\times $ with 3.9% accuracy loss compared to an optimized baseline network. 
In addition, FALCON shows an improvement in training time of up to $1.96\\boldsymbol \\times $ as compared to the traditional classification approach.", "title": "" }, { "docid": "78f2e1fc79a9c774e92452631d6bce7a", "text": "Adders are basic integral part of arithmetic circuits. The adders have been realized with two styles: fixed stage size and variable stage size. In this paper, fixed stage and variable stage carry skip adder configurations have been analyzed and then a new 16-bit high speed variable stage carry skip adder is proposed by modifying the existing structure. The proposed adder has seven stages where first and last stage are of 1 bit each, it keeps increasing steadily till the middle stage which is the bulkiest and hence is the nucleus stage. The delay and power consumption in the proposed adder is reduced by 61.75% and 8% respectively. The proposed adder is implemented and simulated using 90 nm CMOS technology in Cadence Virtuoso. It is pertinent to mention that the delay improvement in the proposed adder has been achieved without increase in any power consumption and circuit complexity. The adder proposed in this work is suitable for high speed and low power VLSI based arithmetic circuits.", "title": "" }, { "docid": "b8fa0ff5dc0b700c1f7dd334639572ec", "text": "This paper discusses about an ongoing project that serves the needs of people with physical disabilities at home. It uses the Bluetooth technology to establish communication between user's Smartphone and controller board. The prototype support manual controlling and microcontroller controlling to lock and unlock home door. By connecting the circuit with a relay board and connection to the Arduino controller board it can be controlled by a Bluetooth available to provide remote access from tablet or smartphone. This paper addresses the development and the functionality of the Android-based application (Android app) to assist disabled people gain control of their living area.", "title": "" }, { "docid": "5bece01bed7c5a9a2433d95379882a37", "text": "n The polarization of electromagnetic signals is an important feature in the design of modern radar and telecommunications. Standard electromagnetic theory readily shows that a linearly polarized plane wave propagating in free space consists of two equal but counter-rotating components of circular polarization. In magnetized media, these circular modes can be arranged to produce the nonreciprocal propagation effects that are the basic properties of isolator and circulator devices. Independent phase control of right-hand (+) and left-hand (–) circular waves is accomplished by splitting their propagation velocities through differences in the e ± m ± parameter. A phenomenological analysis of the permeability m and permittivity e in dispersive media serves to introduce the corresponding magneticand electric-dipole mechanisms of interaction length with the propagating signal. As an example of permeability dispersion, a Lincoln Laboratory quasi-optical Faradayrotation isolator circulator at 35 GHz (l ~ 1 cm) with a garnet-ferrite rotator element is described. At infrared wavelengths (l = 1.55 mm), where fiber-optic laser sources also require protection by passive isolation of the Faraday-rotation principle, e rather than m provides the dispersion, and the frequency is limited to the quantum energies of the electric-dipole atomic transitions peculiar to the molecular structure of the magnetic garnet. 
For optimum performance, bismuth additions to the garnet chemical formula are usually necessary. Spectroscopic and molecular theory models developed at Lincoln Laboratory to explain the bismuth effects are reviewed. In a concluding section, proposed advances in present technology are discussed in the context of future radar and telecommunications challenges.", "title": "" }, { "docid": "1ff5526e4a18c1e59b63a3de17101b11", "text": "Plug-in electric vehicles (PEVs) are equipped with onboard level-1 or level-2 chargers for home overnight or office daytime charging. In addition, off-board chargers can provide fast charging for traveling long distances. However, off-board high-power chargers are bulky, expensive, and require comprehensive evolution of charging infrastructures. An integrated onboard charger capable of fast charging of PEVs will combine the benefits of both the conventional onboard and off-board chargers, without additional weight, volume, and cost. In this paper, an innovative single-phase integrated charger, using the PEV propulsion machine and its traction converter, is introduced. The charger topology is capable of power factor correction and battery voltage/current regulation without any bulky add-on components. Ac machine windings are utilized as mutually coupled inductors, to construct a two-channel interleaved boost converter. The circuit analyses of the proposed technology, based on a permanent magnet synchronous machine (PMSM), are discussed in details. Experimental results of a 3-kW proof-of-concept prototype are carried out using a ${\\textrm{220-V}}_{{\\rm{rms}}}$, 3-phase, 8-pole PMSM. A nearly unity power factor and 3.96% total harmonic distortion of input ac current are acquired with a maximum efficiency of 93.1%.", "title": "" }, { "docid": "47db0fdd482014068538a00f7dc826a9", "text": "Importance\nThe use of palliative care programs and the number of trials assessing their effectiveness have increased.\n\n\nObjective\nTo determine the association of palliative care with quality of life (QOL), symptom burden, survival, and other outcomes for people with life-limiting illness and for their caregivers.\n\n\nData Sources\nMEDLINE, EMBASE, CINAHL, and Cochrane CENTRAL to July 2016.\n\n\nStudy Selection\nRandomized clinical trials of palliative care interventions in adults with life-limiting illness.\n\n\nData Extraction and Synthesis\nTwo reviewers independently extracted data. Narrative synthesis was conducted for all trials. Quality of life, symptom burden, and survival were analyzed using random-effects meta-analysis, with estimates of QOL translated to units of the Functional Assessment of Chronic Illness Therapy-palliative care scale (FACIT-Pal) instrument (range, 0-184 [worst-best]; minimal clinically important difference [MCID], 9 points); and symptom burden translated to the Edmonton Symptom Assessment Scale (ESAS) (range, 0-90 [best-worst]; MCID, 5.7 points).\n\n\nMain Outcomes and Measures\nQuality of life, symptom burden, survival, mood, advance care planning, site of death, health care satisfaction, resource utilization, and health care expenditures.\n\n\nResults\nForty-three RCTs provided data on 12 731 patients (mean age, 67 years) and 2479 caregivers. Thirty-five trials used usual care as the control, and 14 took place in the ambulatory setting. 
In the meta-analysis, palliative care was associated with statistically and clinically significant improvements in patient QOL at the 1- to 3-month follow-up (standardized mean difference, 0.46; 95% CI, 0.08 to 0.83; FACIT-Pal mean difference, 11.36] and symptom burden at the 1- to 3-month follow-up (standardized mean difference, -0.66; 95% CI, -1.25 to -0.07; ESAS mean difference, -10.30). When analyses were limited to trials at low risk of bias (n = 5), the association between palliative care and QOL was attenuated but remained statistically significant (standardized mean difference, 0.20; 95% CI, 0.06 to 0.34; FACIT-Pal mean difference, 4.94), whereas the association with symptom burden was not statistically significant (standardized mean difference, -0.21; 95% CI, -0.42 to 0.00; ESAS mean difference, -3.28). There was no association between palliative care and survival (hazard ratio, 0.90; 95% CI, 0.69 to 1.17). Palliative care was associated consistently with improvements in advance care planning, patient and caregiver satisfaction, and lower health care utilization. Evidence of associations with other outcomes was mixed.\n\n\nConclusions and Relevance\nIn this meta-analysis, palliative care interventions were associated with improvements in patient QOL and symptom burden. Findings for caregiver outcomes were inconsistent. However, many associations were no longer significant when limited to trials at low risk of bias, and there was no significant association between palliative care and survival.", "title": "" }, { "docid": "3509f90848c45ad34ebbd30b9d357c29", "text": "Explaining underlying causes or effects about events is a challenging but valuable task. We define a novel problem of generating explanations of a time series event by (1) searching cause and effect relationships of the time series with textual data and (2) constructing a connecting chain between them to generate an explanation. To detect causal features from text, we propose a novel method based on the Granger causality of time series between features extracted from text such as N-grams, topics, sentiments, and their composition. The generation of the sequence of causal entities requires a commonsense causative knowledge base with efficient reasoning. To ensure good interpretability and appropriate lexical usage we combine symbolic and neural representations, using a neural reasoning algorithm trained on commonsense causal tuples to predict the next cause step. Our quantitative and human analysis show empirical evidence that our method successfully extracts meaningful causality relationships between time series with textual features and generates appropriate explanation between them.", "title": "" }, { "docid": "93d40aa40a32edab611b6e8c4a652dbb", "text": "In this paper, we present a detailed design of dynamic video segmentation network (DVSNet) for fast and efficient semantic video segmentation. DVSNet consists of two convolutional neural networks: a segmentation network and a flow network. The former generates highly accurate semantic segmentations, but is deeper and slower. The latter is much faster than the former, but its output requires further processing to generate less accurate semantic segmentations. We explore the use of a decision network to adaptively assign different frame regions to different networks based on a metric called expected confidence score. Frame regions with a higher expected confidence score traverse the flow network. 
Frame regions with a lower expected confidence score have to pass through the segmentation network. We have performed extensive experiments on various configurations of DVSNet, and investigated a number of variants for the proposed decision network. The experimental results show that our DVSNet is able to achieve up to 70.4% mIoU at 19.8 fps on the Cityscapes dataset. A high-speed version of DVSNet is able to deliver an fps of 30.4 with 63.2% mIoU on the same dataset. DVSNet is also able to reduce up to 95% of the computational workloads.", "title": "" } ]
scidocsrr
b472806a09f6771505be8e7f72361802
Polynomial texture maps
[ { "docid": "5f89fb0df61770e83ca451900b947d43", "text": "We consider the rendering of diffuse objects under distant illumination, as specified by an environment map. Using an analytic expression for the irradiance in terms of spherical harmonic coefficients of the lighting, we show that one needs to compute and use only 9 coefficients, corresponding to the lowest-frequency modes of the illumination, in order to achieve average errors of only 1%. In other words, the irradiance is insensitive to high frequencies in the lighting, and is well approximated using only 9 parameters. In fact, we show that the irradiance can be procedurally represented simply as a quadratic polynomial in the cartesian components of the surface normal, and give explicit formulae. These observations lead to a simple and efficient procedural rendering algorithm amenable to hardware implementation, a prefiltering method up to three orders of magnitude faster than previous techniques, and new representations for lighting design and image-based rendering.", "title": "" } ]
[ { "docid": "8ce3fa727ff12f742727d5b80d8611b9", "text": "Algorithmic approaches endow deep learning systems with implicit bias that helps them generalize even in over-parametrized settings. In this paper, we focus on understanding such a bias induced in learning through dropout, a popular technique to avoid overfitting in deep learning. For single hidden-layer linear neural networks, we show that dropout tends to make the norm of incoming/outgoing weight vectors of all the hidden nodes equal. In addition, we provide a complete characterization of the optimization landscape induced by dropout.", "title": "" }, { "docid": "4d3ed5dd5d4f08c9ddd6c9b8032a77fd", "text": "The purpose of this study was to clarify the efficacy of stress radiography (stress X-P), ultrasonography (US), and magnetic resonance (MR) imaging in the detection of the anterior talofibular ligament (ATFL) injury. Thirty-four patients with ankle sprain were involved. In all patients, Stress X-P, US, MR imaging, and arthroscopy were performed. The arthroscopic results were considered to be the gold standard. The imaging results were compared with the arthroscopic results, and the accuracy calculated. Arthroscopic findings showed ATFL injury in 30 out of 34 cases. The diagnosis of ATFL injury with stress X-P, US, MR imaging were made with an accuracy of 67, 91 and 97%. US and MR imaging demonstrated the same location of the injury as arthroscopy in 63 and 93%. We have clarified the diagnostic value of stress X-P, US, and MR imaging in diagnosis of ATFL injury. We obtained satisfactory results with US and MR imaging.", "title": "" }, { "docid": "86dc000d7e78092a03d03ccd8cb670a0", "text": "Weighted deduction with aggregation is a powerful theoretical formalism that encompasses many NLP algorithms. This paper proposes a declarative specification language, Dyna; gives generalagenda-basedalgorithms for computing weights and gradients; briefly discusses Dyna-to-Dyna program transformations; and shows that a first implementation of a Dyna-to-C++ compiler produces code that is efficient enough for real NLP research, though still several times slower than hand-crafted code.", "title": "" }, { "docid": "3fce18c6e1f909b91f95667a563aa194", "text": "In this paper, we describe an approach to content-based retrieval of medical images from a database, and provide a preliminary demonstration of our approach as applied to retrieval of digital mammograms. Content-based image retrieval (CBIR) refers to the retrieval of images from a database using information derived from the images themselves, rather than solely from accompanying text indices. In the medical-imaging context, the ultimate aim of CBIR is to provide radiologists with a diagnostic aid in the form of a display of relevant past cases, along with proven pathology and other suitable information. CBIR may also be useful as a training tool for medical students and residents. The goal of information retrieval is to recall from a database information that is relevant to the user's query. The most challenging aspect of CBIR is the definition of relevance (similarity), which is used to guide the retrieval machine. In this paper, we pursue a new approach, in which similarity is learned from training examples provided by human observers. Specifically, we explore the use of neural networks and support vector machines to predict the user's notion of similarity. 
Within this framework we propose using a hierarchal learning approach, which consists of a cascade of a binary classifier and a regression module to optimize retrieval effectiveness and efficiency. We also explore how to incorporate online human interaction to achieve relevance feedback in this learning framework. Our experiments are based on a database consisting of 76 mammograms, all of which contain clustered microcalcifications (MCs). Our goal is to retrieve mammogram images containing similar MC clusters to that in a query. The performance of the retrieval system is evaluated using precision-recall curves computed using a cross-validation procedure. Our experimental results demonstrate that: 1) the learning framework can accurately predict the perceptual similarity reported by human observers, thereby serving as a basis for CBIR; 2) the learning-based framework can significantly outperform a simple distance-based similarity metric; 3) the use of the hierarchical two-stage network can improve retrieval performance; and 4) relevance feedback can be effectively incorporated into this learning framework to achieve improvement in retrieval precision based on online interaction with users; and 5) the retrieved images by the network can have predicting value for the disease condition of the query.", "title": "" }, { "docid": "327d071f71bf39bcd171f85746047a02", "text": "Advances in information and communication technologies have led to the emergence of Internet of Things (IoT). In the healthcare environment, the use of IoT technologies brings convenience to physicians and patients as they can be applied to various medical areas (such as constant real-time monitoring, patient information management, medical emergency management, blood information management, and health management). The radio-frequency identification (RFID) technology is one of the core technologies of IoT deployments in the healthcare environment. To satisfy the various security requirements of RFID technology in IoT, many RFID authentication schemes have been proposed in the past decade. Recently, elliptic curve cryptography (ECC)-based RFID authentication schemes have attracted a lot of attention and have been used in the healthcare environment. In this paper, we discuss the security requirements of RFID authentication schemes, and in particular, we present a review of ECC-based RFID authentication schemes in terms of performance and security. Although most of them cannot satisfy all security requirements and have satisfactory performance, we found that there are three recently proposed ECC-based authentication schemes suitable for the healthcare environment in terms of their performance and security.", "title": "" }, { "docid": "cb0803dfd3763199519ff3f4427e1298", "text": "Motion deblurring is a long standing problem in computer vision and image processing. In most previous approaches, the blurred image is modeled as the convolution of a latent intensity image with a blur kernel. However, for images captured by a real camera, the blur convolution should be applied to scene irradiance instead of image intensity and the blurred results need to be mapped back to image intensity via the camera’s response function (CRF). In this paper, we present a comprehensive study to analyze the effects of CRFs on motion deblurring. We prove that the intensity-based model closely approximates the irradiance model at low frequency regions. 
However, at high frequency regions such as edges, the intensity-based approximation introduces large errors and directly applying deconvolution on the intensity image will produce strong ringing artifacts even if the blur kernel is invertible. Based on the approximation error analysis, we further develop a dualimage based solution that captures a pair of sharp/blurred images for both CRF estimation and motion deblurring. Experiments on synthetic and real images validate our theories and demonstrate the robustness and accuracy of our approach.", "title": "" }, { "docid": "3731d6d00291c02913fa102292bf3cad", "text": "Real-world applications of text categorization often require a system to deal with tens of thousands of categories defined over a large taxonomy. This paper addresses the problem with respect to a set of popular algorithms in text categorization, including Support Vector Machines, k-nearest neighbor, ridge regression, linear least square fit and logistic regression. By providing a formal analysis of the computational complexity of each classification method, followed by an investigation on the usage of different classifiers in a hierarchical setting of categorization, we show how the scalability of a method depends on the topology of the hierarchy and the category distributions. In addition, we are able to obtain tight bounds for the complexities by using the power law to approximate category distributions over a hierarchy. Experiments with kNN and SVM classifiers on the OHSUMED corpus are reported on, as concrete examples.", "title": "" }, { "docid": "dbd06c81892bc0535e2648ee21cb00b4", "text": "This paper examines the causes of conflict in Burundi and discusses strategies for building peace. The analysis of the complex relationships between distribution and group dynamics reveals that these relationships are reciprocal, implying that distribution and group dynamics are endogenous. The nature of endogenously generated group dynamics determines the type of preferences (altruistic or exclusionist), which in turn determines the type of allocative institutions and policies that prevail in the political and economic system. While unequal distribution of resources may be socially inefficient, it nonetheless can be rational from the perspective of the ruling elite, especially because inequality perpetuates dominance. However, as the unequal distribution of resources generates conflict, maintaining a system based on inequality is difficult because it requires ever increasing investments in repression. It is therefore clear that if the new Burundian leadership is serious about building peace, it must engineer institutions that uproot the legacy of discrimination and promote equal opportunity for social mobility for all members of ethnic groups and regions.", "title": "" }, { "docid": "d7a9465ac031cf7be6f3e74276805f0f", "text": "Half of American workers have a level of education that does not match the level of education required for their job. Of these, a majority are overeducated, i.e. have more schooling than necessary to perform their job (see, e.g., Leuven & Oosterbeek, 2011). In this paper, we use data from the National Longitudinal Survey of Youth 1979 (NLSY79) combined with the pooled 1989-1991 waves of the CPS to provide some of the first evidence regarding the dynamics of overeducation over the life cyle. 
Shedding light on this question is key to disentangle the role played by labor market frictions versus other factors such as selection on unobservables, compensating differentials or career mobility prospects. Overall, our results suggest that overeducation is a fairly persistent phenomenon, with 79% of workers remaining overeducated after one year. Initial overeducation also has an impact on wages much later in the career, which points to the existence of scarring effects. Finally, we find some evidence of duration dependence, with a 6.5 point decrease in the exit rate from overeducation after having spent five years overeducated. JEL Classification: J24; I21 ∗Duke University †University of North Carolina at Chapel Hill and IZA ‡Duke University and IZA.", "title": "" }, { "docid": "de59e5e248b5df0f92d7fed8c699d68a", "text": "Most modern devices and cryptoalgorithms are vulnerable to a new class of attack called side-channel attack. It analyses physical parameters of the system in order to get secret key. Most spread techniques are simple and differential power attacks with combination of statistical tools. Few studies cover using machine learning methods for pre-processing and key classification tasks. In this paper, we investigate applicability of machine learning methods and their characteristic. Following theoretical results, we examine power traces of AES encryption with Support Vector Machines algorithm and decision trees and provide roadmap for further research.", "title": "" }, { "docid": "c1d8848317256214b76be3adb87a7d49", "text": "We are interested in estimating the average effect of a binary treatment on a scalar outcome. If assignment to the treatment is exogenous or unconfounded, that is, independent of the potential outcomes given covariates, biases associated with simple treatment-control average comparisons can be removed by adjusting for differences in the covariates. Rosenbaum and Rubin (1983) show that adjusting solely for differences between treated and control units in the propensity score removes all biases associated with differences in covariates. Although adjusting for differences in the propensity score removes all the bias, this can come at the expense of efficiency, as shown by Hahn (1998), Heckman, Ichimura and Todd (1998), and Robins, Mark and Newey (1992). We show that weighting by the inverse of a nonparametric estimate of the propensity score, rather than the true propensity score, leads to an efficient estimate of the average treatment effect. We provide intuition for this result by showing that this estimator can be interpreted as an empirical likelihood estimator that efficiently incorporates the information about the propensity score.", "title": "" }, { "docid": "b395aa3ae750ddfd508877c30bae3a38", "text": "This paper presents a technology review of voltage-source-converter topologies for industrial medium-voltage drives. In this highly active area, different converter topologies and circuits have found their application in the market. This paper covers the high-power voltage-source inverter and the most used multilevel-inverter topologies, including the neutral-point-clamped, cascaded H-bridge, and flying-capacitor converters. This paper presents the operating principle of each topology and a review of the most relevant modulation methods, focused mainly on those used by industry. In addition, the latest advances and future trends of the technology are discussed. 
It is concluded that the topology and modulation-method selection are closely related to each particular application, leaving a space on the market for all the different solutions, depending on their unique features and limitations like power or voltage level, dynamic performance, reliability, costs, and other technical specifications.", "title": "" }, { "docid": "48432393e1c320c051b59427db0620b5", "text": "The design of removable partial dentures (RPDs) is an important factor for good prognostication. The purpose of this study was to clarify the effectiveness of denture designs and to clarify the component that had high rates of failure and complications. A total of 91 RPDs, worn by 65 patients for 2-10 years, were assessed. Removable partial dentures were classified into four groups: telescopic dentures (TDs), ordinary clasp dentures (ODs), modified clasp dentures (MDs) and combination dentures (CDs). The failure rates of abutment teeth were the highest and those of retainers were the second highest. The failure rates of connectors were generally low, but they increased suddenly after 6 years. Complication and failure rates of denture bases and artificial teeth were generally low. Complication and failure rates of TDs were high at abutment teeth and low level at retainers. Complication and failure rates of ODs were high at retainers.", "title": "" }, { "docid": "660e8d6847d06970e37455b60198c6b6", "text": "Usually, if researchers want to understand research status of any field, they need to browse a great number of related academic literatures. Luckily, in order to work more efficiently, automatic documents summarization can be applied for taking a glance at specific scientific topics. In this paper, we focus on summary generation of citation content. An automatic tool named CitationAS is built, whose three core components are clustering algorithms, label generation and important sentences extraction methods. In experiments, we use bisecting Kmeans, Lingo and STC to cluster retrieved citation content. Then Word2Vec, WordNet and combination of them are applied to generate cluster label. Next, we employ two methods, TF-IDF and MMR, to extract important sentences, which are used to generate summaries. Finally, we adopt gold standard to evaluate summaries obtained from CitationAS. According to evaluations, we find the best label generation method for each clustering algorithm. We also discover that combination of Word2Vec and WordNet doesn’t have good performance compared with using them separately on three clustering algorithms. Combination of Ling algorithm, Word2Vec label generation method and TF-IDF sentences extraction approach will acquire the highest summary quality. Conference Topic Text mining and information extraction", "title": "" }, { "docid": "1c117c63455c2b674798af0e25e3947c", "text": "We are studying the manufacturing performance of semiconductor wafer fabrication plants in the US, Asia, and Europe. There are great similarities in production equipment, manufacturing processes, and products produced at semiconductor fabs around the world. However, detailed comparisons over multi-year intervals show that important quantitative indicators of productivity, including defect density (yield), major equipment production rates, wafer throughput time, and effective new process introduction to manufacturing, vary by factors of 3 to as much as 5 across an international sample of 28 fabs. 
We conduct on-site observations, and interviews with manufacturing personnel at all levels from operator to general manager, to better understand reasons for the observed wide variations in performance. We have identified important factors in the areas of information systems, organizational practices, process and technology improvements, and production control that correlate strongly with high productivity. Optimum manufacturing strategy is different for commodity products, high-value proprietary products, and foundry business.", "title": "" }, { "docid": "570e03101ae116e2f20ab6337061ec3f", "text": "This study explored the potential for using seed cake from hemp (Cannabis sativa L.) as a protein feed for dairy cows. The aim was to evaluate the effects of increasing the proportion of hempseed cake (HC) in the diet on milk production and milk composition. Forty Swedish Red dairy cows were involved in a 5-week dose-response feeding trial. The cows were allocated randomly to one of four experimental diets containing on average 494 g/kg of grass silage and 506 g/kg of concentrate on a dry matter (DM) basis. Diets containing 0 g (HC0), 143 g (HC14), 233 g (HC23) or 318 g (HC32) HC/kg DM were achieved by replacing an increasing proportion of compound pellets with cold-pressed HC. Increasing the proportion of HC resulted in dietary crude protein (CP) concentrations ranging from 126 for HC0 to 195 g CP/kg DM for HC32. Further effects on the composition of the diet with increasing proportions of HC were higher fat and NDF and lower starch concentrations. There were no linear or quadratic effects on DM intake, but increasing the proportion of HC in the diet resulted in linear increases in fat and NDF intake, as well as CP intake (P < 0.001), and a linear decrease in starch intake (P < 0.001). The proportion of HC had significant quadratic effects on the yields of milk, energy-corrected milk (ECM) and milk protein, fat and lactose. The curvilinear response of all yield parameters indicated maximum production from cows fed diet HC14. Increasing the proportion of HC resulted in linear decreases in both milk protein and milk fat concentration (P = 0.005 and P = 0.017, respectively), a linear increase in milk urea (P < 0.001), and a linear decrease in CP efficiency (milk protein/CP intake; P < 0.001). In conclusion, the HC14 diet, corresponding to a dietary CP concentration of 157 g/kg DM, resulted in the maximum yields of milk and ECM by dairy cows in this study.", "title": "" }, { "docid": "b206a5f5459924381ef6c46f692c7052", "text": "The Konstanz Information Miner is a modular environment, which enables easy visual assembly and interactive execution of a data pipeline. It is designed as a teaching, research and collaboration platform, which enables simple integration of new algorithms and tools as well as data manipulation or visualization methods in the form of new modules or nodes. In this paper we describe some of the design aspects of the underlying architecture, briey sketch how new nodes can be incorporated, and highlight some of the new features of version 2.0.", "title": "" }, { "docid": "f83ca1c2732011e9a661f8cf9a0516ac", "text": "We provide a characterization of pseudoentropy in terms of hardness of sampling: Let (X,B) be jointly distributed random variables such that B takes values in a polynomial-sized set. 
We show that B is computationally indistinguishable from a random variable of higher Shannon entropy given X if and only if there is no probabilistic polynomial-time S such that (X,S(X)) has small KL divergence from (X,B). This can be viewed as an analogue of the Impagliazzo Hardcore Theorem (FOCS '95) for Shannon entropy (rather than min-entropy).\n Using this characterization, we show that if f is a one-way function, then (f(Un),Un) has \"next-bit pseudoentropy\" at least n+log n, establishing a conjecture of Haitner, Reingold, and Vadhan (STOC '10). Plugging this into the construction of Haitner et al., this yields a simpler construction of pseudorandom generators from one-way functions. In particular, the construction only performs hashing once, and only needs the hash functions that are randomness extractors (e.g. universal hash functions) rather than needing them to support \"local list-decoding\" (as in the Goldreich--Levin hardcore predicate, STOC '89).\n With an additional idea, we also show how to improve the seed length of the pseudorandom generator to ~{O}(n3), compared to O(n4) in the construction of Haitner et al.", "title": "" }, { "docid": "3b06bc2d72e0ae7fa75873ed70e23fc3", "text": "Transaction traces analysis is a key utility for marketing, trend monitoring, and fraud detection purposes. However, they can also be used for designing and verification of contextual risk management systems for card-present transactions. In this paper, we presented a novel approach to collect detailed transaction traces directly from payment terminal. Thanks to that, it is possible to analyze each transaction step precisely, including its frequency and timing. We also demonstrated our approach to analyze such data based on real-life experiment. Finally, we concluded this paper with important findings for designers of such a system.", "title": "" }, { "docid": "d37f648a06d6418a0e816ce000056136", "text": "Recently it has been shown that policy-gradient methods for reinforcement learning can be utilized to train deep end-to-end systems directly on non-differentiable metrics for the task at hand. In this paper we consider the problem of optimizing image captioning systems using reinforcement learning, and show that by carefully optimizing our systems using the test metrics of the MSCOCO task, significant gains in performance can be realized. Our systems are built using a new optimization approach that we call self-critical sequence training (SCST). SCST is a form of the popular REINFORCE algorithm that, rather than estimating a baseline to normalize the rewards and reduce variance, utilizes the output of its own test-time inference algorithm to normalize the rewards it experiences. Using this approach, estimating the reward signal (as actor-critic methods must do) and estimating normalization (as REINFORCE algorithms typically do) is avoided, while at the same time harmonizing the model with respect to its test-time inference procedure. Empirically we find that directly optimizing the CIDEr metric with SCST and greedy decoding at test-time is highly effective. Our results on the MSCOCO evaluation sever establish a new state-of-the-art on the task, improving the best result in terms of CIDEr from 104.9 to 114.7.", "title": "" } ]
scidocsrr
a97ec1a51d722e5244bfbe62b0e94e28
A guide to convolution arithmetic for deep learning
[ { "docid": "2a56702663e6e52a40052a5f9b79a243", "text": "Many successful models for scene or object recognition transform low-level descriptors (such as Gabor filter responses, or SIFT descriptors) into richer representations of intermediate complexity. This process can often be broken down into two steps: (1) a coding step, which performs a pointwise transformation of the descriptors into a representation better adapted to the task, and (2) a pooling step, which summarizes the coded features over larger neighborhoods. Several combinations of coding and pooling schemes have been proposed in the literature. The goal of this paper is threefold. We seek to establish the relative importance of each step of mid-level feature extraction through a comprehensive cross evaluation of several types of coding modules (hard and soft vector quantization, sparse coding) and pooling schemes (by taking the average, or the maximum), which obtains state-of-the-art performance or better on several recognition benchmarks. We show how to improve the best performing coding scheme by learning a supervised discriminative dictionary for sparse coding. We provide theoretical and empirical insight into the remarkable performance of max pooling. By teasing apart components shared by modern mid-level feature extractors, our approach aims to facilitate the design of better recognition architectures.", "title": "" }, { "docid": "4f75a5ef8e4fdbff79c78a4bee8e1a4c", "text": "Recently, convolutional neural networks (CNN) have been successfully applied to view synthesis problems. However, such CNN-based methods can suffer from lack of texture details, shape distortions, or high computational complexity. In this paper, we propose a novel CNN architecture for view synthesis called Deep View Morphing that does not suffer from these issues. To synthesize a middle view of two input images, a rectification network first rectifies the two input images. An encoder-decoder network then generates dense correspondences between the rectified images and blending masks to predict the visibility of pixels of the rectified images in the middle view. A view morphing network finally synthesizes the middle view using the dense correspondences and blending masks. We experimentally show the proposed method significantly outperforms the state-of-the-art CNN-based view synthesis method.", "title": "" } ]
[ { "docid": "012b42c01cebf0840a429ab0e7db2914", "text": "Silicon single-photon avalanche diodes (SPADs) are nowadays a solid-state alternative to photomultiplier tubes (PMTs) in single-photon counting (SPC) and time-correlated single-photon counting (TCSPC) over the visible spectral range up to 1-mum wavelength. SPADs implemented in planar technology compatible with CMOS circuits offer typical advantages of microelectronic devices (small size, ruggedness, low voltage, low power, etc.). Furthermore, they have inherently higher photon detection efficiency, since they do not rely on electron emission in vacuum from a photocathode as do PMTs, but instead on the internal photoelectric effect. However, PMTs offer much wider sensitive area, which greatly simplifies the design of optical systems; they also attain remarkable performance at high counting rate, and offer picosecond timing resolution with microchannel plate models. In order to make SPAD detectors more competitive in a broader range of SPC and TCSPC applications, it is necessary to face several issues in the semiconductor device design and technology. Such issues will be discussed in the context of the two possible approaches to such a challenge: employing a standard industrial high-voltage CMOS technology or developing a dedicated CMOS-compatible technology. Advances recently attained in the development of SPAD detectors will be outlined and discussed with reference to both single-element detectors and integrated detector arrays.", "title": "" }, { "docid": "2b52e1c05bf02919501c2eb50e6cf457", "text": "The operational challenges posed in enterprise networks present an appealing opportunity for automated orchestration by way of Software-Defined Networking (SDN). The primary challenge to SDN adoption in the enterprise is the deployment problem: How to deploy and operate a network consisting of both legacy and SDN switches, while benefiting from simplified management and enhanced flexibility of SDN. This paper presents the design and implementation of Panopticon, an architecture for operating networks that combine legacy and SDN switches. Panopticon exposes an abstraction of a logical SDN in a partially upgraded legacy network, where SDN benefits can extend over the entire network. We demonstrate the feasibility and evaluate the efficiency of our approach through both testbed experiments with hardware switches and through simulation on real enterprise campus network topologies entailing over 1500 devices. Our results suggest that when as few as 10% of distribution switches support SDN, most of an enterprise network can be operated as a single SDN while meeting key resource constraints.", "title": "" }, { "docid": "674339928a16b372fb13395f920561e5", "text": "High-speed, high-efficiency photodetectors play an important role in optical communication links that are increasingly being used in data centres to handle higher volumes of data traffic and higher bandwidths, as big data and cloud computing continue to grow exponentially. Monolithic integration of optical components with signal-processing electronics on a single silicon chip is of paramount importance in the drive to reduce cost and improve performance. We report the first demonstration of microand nanoscale holes enabling light trapping in a silicon photodiode, which exhibits an ultrafast impulse response (full-width at half-maximum) of 30 ps and a high efficiency of more than 50%, for use in data-centre optical communications. 
The photodiode uses microand nanostructured holes to enhance, by an order of magnitude, the absorption efficiency of a thin intrinsic layer of less than 2 μm thickness and is designed for a data rate of 20 gigabits per second or higher at a wavelength of 850 nm. Further optimization can improve the efficiency to more than 70%.", "title": "" }, { "docid": "0db39aada8be41ef1172248e0a80ad53", "text": "Urinary retention is relatively rare in infants,especially in girls. Imperforate hymen is the most frequent congenital malformation of the female genital tract and is usually asymptomatic until puberty. Mucocolpos with an abdominal mass in neonatal age is extremely rare. We report a case of a 20-day-old newborn girl with acute urinary retention due to isolated imperforate hymen and mucocolpos.", "title": "" }, { "docid": "5ce6bac4ec1f916c1ebab9da09816c0e", "text": "High-performance parallel computing architectures are increasingly based on multi-core processors. While current commercially available processors are at 8 and 16 cores, technological and power constraints are limiting the performance growth of the cores and are resulting in architectures with much higher core counts, such as the experimental many-core Intel Single-chip Cloud Computer (SCC) platform. These trends are presenting new sets of challenges to HPC applications including programming complexity and the need for extreme energy efficiency.\n In this paper, we first investigate the power behavior of scientific Partitioned Global Address Space (PGAS) application kernels on the SCC platform, and explore opportunities and challenges for power management within the PGAS framework. Results obtained via empirical evaluation of Unified Parallel C (UPC) applications on the SCC platform under different constraints, show that, for specific operations, the potential for energy savings in PGAS is large; and power/performance trade-offs can be effectively managed using a cross-layer approach. We investigate cross-layer power management using PGAS language extensions and runtime mechanisms that manipulate power/performance tradeoffs. Specifically, we present the design, implementation and evaluation of such a middleware for application-aware cross-layer power management of UPC applications on the SCC platform. Finally, based on our observations, we provide a set of insights that can be used to support similar power management for PGAS applications on other many-core platforms.", "title": "" }, { "docid": "8d292592202c948c439f055ca5df9d56", "text": "This paper provides an overview of the current state of the art in persuasive systems design. All peer-reviewed full papers published at the first three International Conferences on Persuasive Technology were analyzed employing a literature review framework. Results from this analysis are discussed and directions for future research are suggested. Most research papers so far have been experimental. Five out of six of these papers (84.4%) have addressed behavioral change rather than an attitude change. Tailoring, tunneling, reduction and social comparison have been the most studied methods for persuasion. Quite, surprisingly ethical considerations have remained largely unaddressed in these papers. 
In general, many of the research papers seem to describe the investigated persuasive systems in a relatively vague manner leaving room for some improvement.", "title": "" }, { "docid": "477af6326b8d51afcb15ef6107fe3cd7", "text": "BACKGROUND\nThe few studies that have investigated the relationship between mobile phone use and sleep have mainly been conducted among children and adolescents. In adults, very little is known about mobile phone usage in bed our after lights out. This cross-sectional study set out to examine the association between bedtime mobile phone use and sleep among adults.\n\n\nMETHODS\nA sample of 844 Flemish adults (18-94 years old) participated in a survey about electronic media use and sleep habits. Self-reported sleep quality, daytime fatigue and insomnia were measured using the Pittsburgh Sleep Quality Index (PSQI), the Fatigue Assessment Scale (FAS) and the Bergen Insomnia Scale (BIS), respectively. Data were analyzed using hierarchical and multinomial regression analyses.\n\n\nRESULTS\nHalf of the respondents owned a smartphone, and six out of ten took their mobile phone with them to the bedroom. Sending/receiving text messages and/or phone calls after lights out significantly predicted respondents' scores on the PSQI, particularly longer sleep latency, worse sleep efficiency, more sleep disturbance and more daytime dysfunction. Bedtime mobile phone use predicted respondents' later self-reported rise time, higher insomnia score and increased fatigue. Age significantly moderated the relationship between bedtime mobile phone use and fatigue, rise time, and sleep duration. An increase in bedtime mobile phone use was associated with more fatigue and later rise times among younger respondents (≤ 41.5 years old and ≤ 40.8 years old respectively); but it was related to an earlier rise time and shorter sleep duration among older respondents (≥ 60.15 years old and ≥ 66.4 years old respectively).\n\n\nCONCLUSION\nFindings suggest that bedtime mobile phone use is negatively related to sleep outcomes in adults, too. It warrants continued scholarly attention as the functionalities of mobile phones evolve rapidly and exponentially.", "title": "" }, { "docid": "34557bc145ccd6d83edfc80da088f690", "text": "This thesis is dedicated to my mother, who taught me that success is not the key to happiness. Happiness is the key to success. If we love what we are doing, we will be successful. This thesis is dedicated to my father, who taught me that luck is not something that is given to us at random and should be waited for. Luck is the sense to recognize an opportunity and the ability to take advantage of it. iii ACKNOWLEDGEMENTS I would like to thank my thesis committee –", "title": "" }, { "docid": "533ff26e626de25ca336d66f9bc5c635", "text": "Neural information processing models largely assume that the patterns for training a neural network are sufficient. Otherwise, there must exist a non-negligible error between the real function and the estimated function from a trained network. To reduce the error, in this paper, we suggest a diffusion-neural-network (DNN) to learn from a small sample consisting of only a few patterns. A DNN with more nodes in the input and layers is trained by using the deriving patterns instead of original patterns. In this paper, we give an example to show how to construct a DNN for recognizing a non-linear function. In our case, the DNN’s error is less than the error of the conventional BP network, about 48%. 
To substantiate the special case arguments, we also study other two non-linear functions with simulation technology. The results show that the DNN model is very effective in the case where the target function has a strong non-linearity or a given sample is very small. 2003 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "5bdf4585df04c00ebcf00ce94a86ab38", "text": "High-voltage pulse-generators can be used effectively for bacterial decontamination in water treatment applications. Applying a pulsed electric field to the infected water sample guarantees killing of harmful germs and bacteria. In this paper, a modular high-voltage pulse-generator with sequential charging is proposed for water treatment via underwater pulsed streamer corona discharge. The proposed generator consists of series-connected modules similar to an arm of a modular multilevel converter. The modules' capacitors are charged sequentially from a relatively low-voltage dc supply, then they are connected in series and discharged into the load. Two configurations are proposed in this paper, one for low repetitive pulse rate applications, and the other for high repetitive pulse rates. In the first topology, the equivalent resistance of the infected water sample is used as a charging resistance for the generator's capacitors during the charging process. While in the second topology, the water resistance is bypassed during the charging process, and an external charging resistance with proper value is used instead. In this paper, detailed designs for the proposed pulse-generators are presented and validated by simulation results using MATLAB. A scaled down experimental setup has been built to show the viability of the proposed concept.", "title": "" }, { "docid": "e28b0ab1bedd60ba83b8a575431ad549", "text": "The Decision Model and Notation (DMN) is a standard notation to specify decision logic in business applications. A central construct in DMN is a decision table. The rising use of DMN decision tables to capture and to automate everyday business decisions fuels the need to support analysis tasks on decision tables. This paper presents an opensource DMN editor to tackle three analysis tasks: detection of overlapping rules, detection of missing rules and simplification of decision tables via rule merging. The tool has been tested on large decision tables derived from a credit lending data-set.", "title": "" }, { "docid": "92ac3bfdcf5e554152c4ce2e26b77315", "text": "How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions is two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. 
Theoretical advantages are reflected in experimental results.", "title": "" }, { "docid": "e3c436c0feaf37f260067792e196a59a", "text": "For about 10 years, detecting the presence of a secret message hidden in an image was performed with an Ensemble Classifier trained with Rich features. In recent years, studies such as Xu et al. have indicated that well-designed convolutional Neural Networks (CNN) can achieve comparable performance to the two-step machine learning approaches. In this paper, we propose a CNN that outperforms the state-of-the-art in terms of error probability. The proposition is in the continuity of what has been recently proposed and it is a clever fusion of important bricks used in various papers. Among the essential parts of the CNN, one can cite the use of a pre-processing filterbank and a Truncation activation function, five convolutional layers with a Batch Normalization associated with a Scale Layer, as well as the use of a sufficiently sized fully connected section. An augmented database has also been used to improve the training of the CNN. Our CNN was experimentally evaluated against S-UNIWARD and WOW embedding algorithms and its performances were compared with those of three other methods: an Ensemble Classifier plus a Rich Model, and two other CNN steganalyzers.", "title": "" }, { "docid": "f4b7b9747c0ba994b60326a568aa4173", "text": "Unmanned Aerial Vehicles (UAV) facilitate the development of Internet of Things (IoT) ecosystems for smart city and smart environment applications. This paper proposes the adoption of Edge and Fog computing principles to the UAV based forest fire detection application domain through a hierarchical architecture. This three-layer ecosystem combines the powerful resources of cloud computing, the rich resources of fog computing and the sensing capabilities of the UAVs. These layers efficiently cooperate to address the key challenges imposed by the early forest fire detection use case. Initial experimental evaluations measuring crucial performance metrics indicate that critical resources, such as CPU/RAM, battery life and network resources, can be efficiently managed and dynamically allocated by the proposed approach.", "title": "" }, { "docid": "412b616f4fcb9399c8220c542ecac83e", "text": "Image cropping aims at improving the aesthetic quality of images by adjusting their composition. Most weakly supervised cropping methods (without bounding box supervision) rely on the sliding window mechanism. The sliding window mechanism requires fixed aspect ratios and limits the cropping region with arbitrary size. Moreover, the sliding window method usually produces tens of thousands of windows on the input image which is very time-consuming. Motivated by these challenges, we firstly formulate the aesthetic image cropping as a sequential decision-making process and propose a weakly supervised Aesthetics Aware Reinforcement Learning (A2-RL) framework to address this problem. Particularly, the proposed method develops an aesthetics aware reward function which especially benefits image cropping. Similar to human's decision making, we use a comprehensive state representation including both the current observation and the historical experience. We train the agent using the actor-critic architecture in an end-to-end manner. The agent is evaluated on several popular unseen cropping datasets. 
Experiment results show that our method achieves the state-of-the-art performance with much fewer candidate windows and much less time compared with previous weakly supervised methods.", "title": "" }, { "docid": "b9e9f33e177b7b116629e85dd3370d17", "text": "Two studies are reported in which younger and older monolingual and bilingual adults performed executive function tasks. In Study 1, 130 participants performed a Stroop task and bilinguals in both age groups showed less interference than monolinguals with a greater benefit for older adults. In Study 2, 108 participants performed a complex working memory task based on verbal or nonverbal stimuli. Bilinguals showed less interference than monolinguals, with a larger bilingual advantage in the older adult group and in the nonverbal task. Together, these results show that bilingual advantages in executive function depend on characteristics of the participants and features of the tasks, with larger effects found for older than younger adults and for complex tasks using nonverbal material.", "title": "" }, { "docid": "9b71c5bd7314e793757776c6e54f03bb", "text": "This paper evaluates the application of Bronfenbrenner’s bioecological theory as it is represented in empirical work on families and their relationships. We describe the “mature” form of bioecological theory of the mid-1990s and beyond, with its focus on proximal processes at the center of the Process-Person-Context-Time model. We then examine 25 papers published since 2001, all explicitly described as being based on Bronfenbrenner’s theory, and show that all but 4 rely on outmoded versions of the theory, resulting in conceptual confusion and inadequate testing of the theory.", "title": "" }, { "docid": "ae218abd859370a093faf83d6d81599d", "text": "In this letter, we present an autofocus routine for backprojection imagery from spotlight-mode synthetic aperture radar data. The approach is based on maximizing image sharpness and supports the flexible collection and imaging geometries of BP, including wide-angle apertures and the ability to image directly onto a digital elevation map. While image-quality-based autofocus approaches can be computationally intensive, in the backprojection setting, we demonstrate a natural geometric interpretation that allows for optimal single-pulse phase corrections to be derived in closed form as the solution of a quartic polynomial. The approach is applicable to focusing standard backprojection imagery, as well as providing incremental focusing in sequential imaging applications based on autoregressive backprojection. An example demonstrates the efficacy of the approach applied to real data for a wide-aperture backprojection image.", "title": "" }, { "docid": "e168be167244e5788ec76e84b08311ca", "text": "Recently, speech recognition systems based on articulatory features such as “voicing” or the position of lips and tongue have gained interest, because they promise advantages with respect to robustness and permit new adaptation methods to compensate for channel, noise, and speaker variability. These approaches are also interesting from a general point of view, because their models use phonological and phonetic concepts, which allow for a richer description of a speech act than the sequence of HMM-states, which is the prevalent ASR architecture today.
In this work, we present a multi-stream architecture, in which CD-HMMs are supported by detectors for articulatory features, using a linear combination of log-likelihood scores. This multi-stream approach results in a 15% reduction of WER on a read Broadcast News (BN) task and improves performance on a spontaneous scheduling task (ESST) by 7%. The proposed architecture potentially allows for new speaker and channel adaptation schemes, including stream asynchronicity.", "title": "" } ]
scidocsrr
245988ae1d9ae4110048135ec0581fb2
Multimethod Longitudinal HIV Drug Resistance Analysis in Antiretroviral-Therapy-Naive Patients.
[ { "docid": "7fe1cea4990acabf7bc3c199d3c071ce", "text": "Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), which is a user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven to make it easier for the use of both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net.", "title": "" } ]
[ { "docid": "1390f0c41895ecabbb16c54684b88ca1", "text": "Deep learning has proven to be a powerful tool for computer vision and has seen widespread adoption for numerous tasks. However, deep learning algorithms are known to be vulnerable to adversarial examples. These adversarial inputs are created such that, when provided to a deep learning algorithm, they are very likely to be mislabeled. This can be problematic when deep learning is used to assist in safety critical decisions. Recent research has shown that classifiers can be attacked by physical adversarial examples under various physical conditions. Given the fact that stateof-the-art objection detection algorithms are harder to be fooled by the same set of adversarial examples, here we show that these detectors can also be attacked by physical adversarial examples. In this note, we briefly show both static and dynamic test results. We design an algorithm that produces physical adversarial inputs, which can fool the YOLO object detector and can also attack Faster-RCNN with relatively high success rate based on transferability. Furthermore, our algorithm can compress the size of the adversarial inputs to stickers that, when attached to the targeted object, result in the detector either mislabeling or not detecting the object a high percentage of the time. This note provides a small set of results. Our upcoming paper will contain a thorough evaluation on other object detectors, and will present the algorithm.", "title": "" }, { "docid": "d6dba7a89bc123bc9bb616df6faee2bc", "text": "Continuing interest in digital games indicated that it would be useful to update [Authors’, 2012] systematic literature review of empirical evidence about the positive impacts an d outcomes of games. Since a large number of papers was identified in th e period from 2009 to 2014, the current review focused on 143 papers that provided higher quality evidence about the positive outcomes of games. [Authors’] multidimensional analysis of games and t heir outcomes provided a useful framework for organising the varied research in this area. The mo st frequently occurring outcome reported for games for learning was knowledge acquisition, while entertain me t games addressed a broader range of affective, behaviour change, perceptual and cognitive and phys iological outcomes. Games for learning were found across varied topics with STEM subjects and health the most popular. Future research on digital games would benefit from a systematic programme of experi m ntal work, examining in detail which game features are most effective in promoting engagement and supporting learning.", "title": "" }, { "docid": "7064d73864a64e2b76827e3252390659", "text": "Abstmct-In his original paper on the subject, Shannon found upper and lower bounds for the entropy of printed English based on the number of trials required for a subject to guess subsequent symbols in a given text. The guessing approach precludes asymptotic consistency of either the upper or lower bounds except for degenerate ergodic processes. Shannon’s technique of guessing the next symbol is altered by having the subject place sequential bets on the next symbol of text. lf S,, denotes the subject’s capital after n bets at 27 for 1 odds, and lf it is assumed that the subject hnows the underlying prpbabillty distribution for the process X, then the entropy estimate ls H,(X) =(l -(l/n) log,, S,) log, 27 bits/symbol. If the subject does npt hnow the true probabllty distribution for the stochastic process, then Z&(X! 
ls an asymptotic upper bound for the true entropy. ff X is stationary, EH,,(X)+H(X), H(X) bell the true entropy of the process. Moreovzr, lf X is ergodic, then by the SLOW McMilhm-Brebnan theorem H,,(X)+H(X) with probability one. Preliminary indications are that English text has au entropy of approximately 1.3 bits/symbol, which agrees well with Shannon’s estimate.", "title": "" }, { "docid": "ac1f2a1a96ab424d9b69276efd4f1ed4", "text": "This paper describes various systems from the University of Minnesota, Duluth that participated in the CLPsych 2015 shared task. These systems learned decision lists based on lexical features found in training data. These systems typically had average precision in the range of .70 – .76, whereas a random baseline attained .47 – .49.", "title": "" }, { "docid": "cf131167592f02790a1b4e38ed3b5375", "text": "Monocular 3D facial shape reconstruction from a single 2D facial image has been an active research area due to its wide applications. Inspired by the success of deep neural networks (DNN), we propose a DNN-based approach for End-to-End 3D FAce Reconstruction (UH-E2FAR) from a single 2D image. Different from recent works that reconstruct and refine the 3D face in an iterative manner using both an RGB image and an initial 3D facial shape rendering, our DNN model is end-to-end, and thus the complicated 3D rendering process can be avoided. Moreover, we integrate in the DNN architecture two components, namely a multi-task loss function and a fusion convolutional neural network (CNN) to improve facial expression reconstruction. With the multi-task loss function, 3D face reconstruction is divided into neutral 3D facial shape reconstruction and expressive 3D facial shape reconstruction. The neutral 3D facial shape is class-specific. Therefore, higher layer features are useful. In comparison, the expressive 3D facial shape favors lower or intermediate layer features. With the fusion-CNN, features from different intermediate layers are fused and transformed for predicting the 3D expressive facial shape. Through extensive experiments, we demonstrate the superiority of our end-to-end framework in improving the accuracy of 3D face reconstruction.", "title": "" }, { "docid": "7cd091555dd870cc1a71a4318bb5ff8d", "text": "This paper presents the design and simulation of a wideband, medium gain, light weight, wide bandwidth pyramidal horn antenna feed for microwave applications. The horn was designed using approximation method to calculate the gain in mat lab and simulated using CST microwave studio. The proposed antenna operates within 1-2 GHz (L-band). The horn is supported by a rectangular wave guide. It is linearly polarized and shows wide bandwidth with a gain of 15.3dB. The horn is excited with the monopole which is loaded with various top hat loading such as rectangular disc, circular disc, annular disc, L-type, T-type, Cone shape, U-shaped plates etc. and checked their performances for return loss as well as bandwidth. The circular disc and annular ring gives the low return loss and wide bandwidth as well as low VSWR. The annular ring gave good VSWR and return loss compared to the circular disc. The far field radiation pattern is obtained as well as Efield & H-field analysis for L-band pyramidal horn has been observed, simulated and optimized using CST Microwave Studio. 
The simulation results show that the pyramidal horn structure exhibits low VSWR as well as good radiation pattern over L-band.", "title": "" }, { "docid": "55ada092fd628aead0fd64d20eff7b69", "text": "BER estimation from measured EVM values is shown experimentally for QPSK and 16QAM optical signals with 28 GBd. Various impairments, such as gain imbalance, quadrature error and timing skew, are introduced into the transmitted signal in order to evaluate the robustness of the method. The EVM was measured using two different real-time sampling systems and the EVM measurement accuracy is discussed.", "title": "" }, { "docid": "6d552edc0d60470ce942b9d57b6341e3", "text": "A rich element of cooperative games are mechanics that communicate. Unlike automated awareness cues and synchronous verbal communication, cooperative communication mechanics enable players to share information and direct action by engaging with game systems. These include both explicitly communicative mechanics, such as built-in pings that direct teammates' attention to specific locations, and emergent communicative mechanics, where players develop their own conventions about the meaning of in-game activities, like jumping to get attention. We use a grounded theory approach with 40 digital games to identify and classify the types of cooperative communication mechanics game designers might use to enable cooperative play. We provide details on the classification scheme and offer a discussion on the implications of cooperative communication mechanics.", "title": "" }, { "docid": "aa5daa83656a2265dc27ec6ee5e3c1cb", "text": "Firms traditionally rely on interviews and focus groups to identify customer needs for marketing strategy and product development. User-generated content (UGC) is a promising alternative source for identifying customer needs. However, established methods are neither efficient nor effective for large UGC corpora because much content is non-informative or repetitive. We propose a machine-learning approach to facilitate qualitative analysis by selecting content for efficient review. We use a convolutional neural network to filter out non-informative content and cluster dense sentence embeddings to avoid sampling repetitive content. We further address two key questions: Are UGCbased customer needs comparable to interview-based customer needs? Do the machine-learning methods improve customer-need identification? These comparisons are enabled by a custom dataset of customer needs for oral care products identified by professional analysts using industry-standard experiential interviews. The analysts also coded 12,000 UGC sentences to identify which previously identified customer needs and/or new customer needs were articulated in each sentence. We show that (1) UGC is at least as valuable as a source of customer needs for product development, likely morevaluable, than conventional methods, and (2) machine-learning methods improve efficiency of identifying customer needs from UGC (unique customer needs per unit of professional services cost).", "title": "" }, { "docid": "4d5119db64e4e0a31064bd22b47e2534", "text": "Reliability and scalability of an application is dependent on how its application state is managed. To run applications at massive scale requires one to operate datastores that can scale to operate seamlessly across thousands of servers and can deal with various failure modes such as server failures, datacenter failures and network partitions. 
The goal of Amazon DynamoDB is to eliminate this complexity and operational overhead for our customers by offering a seamlessly scalable database service. In this talk, I will talk about how developers can build applications on DynamoDB without having to deal with the complexity of operating a large scale database.", "title": "" }, { "docid": "6d3410de121ffe037eafd5f30daa7252", "text": "One of the more important issues in the development of larger scale complex systems (product development period of two or more years) is accommodating changes to requirements. Requirements gathered for larger scale systems evolve during lengthy development periods due to changes in software and business environments, new user needs and technological advancements. Agile methods, which focus on accommodating change even late in the development lifecycle, can be adopted for the development of larger scale systems. However, as currently applied, these practices are not always suitable for the development of such systems. We propose a soft-structured framework combining the principles of agile and conventional software development that addresses the issue of rapidly changing requirements for larger scale systems. The framework consists of two parts: (1) a soft-structured requirements gathering approach that reflects the agile philosophy i.e., the Agile Requirements Generation Model and (2) a tailored development process that can be applied to either small or larger scale systems.", "title": "" }, { "docid": "ce99ce3fb3860e140164e7971291f0fa", "text": "We describe the development and psychometric characteristics of the Generalized Workplace Harassment Questionnaire (GWHQ), a 29-item instrument developed to assess harassing experiences at work in five conceptual domains: verbal aggression, disrespect, isolation/exclusion, threats/bribes, and physical aggression. Over 1700 current and former university employees completed the GWHQ at three time points. Factor analytic results at each wave of data suggested a five-factor solution that did not correspond to the original five conceptual factors. We suggest a revised scoring scheme for the GWHQ utilizing four of the empirically extracted factors: covert hostility, verbal hostility, manipulation, and physical hostility. Covert hostility was the most frequently experienced type of harassment, followed by verbal hostility, manipulation, and physical hostility. Verbal hostility, covert hostility, and manipulation were found to be significant predictors of psychological distress.", "title": "" }, { "docid": "e06646b7d2bd6ee83c4d557f4215e143", "text": "Deep generative models have been praised for their ability to learn smooth latent representation of images, text, and audio, which can then be used to generate new, plausible data. However, current generative models are unable to work with graphs due to their unique characteristics—their underlying structure is not Euclidean or grid-like, they remain isomorphic under permutation of the nodes labels, and they come with a different number of nodes and edges. In this paper, we propose NeVAE, a novel variational autoencoder for graphs, whose encoder and decoder are specially designed to account for the above properties by means of several technical innovations. In addition, by using masking, the decoder is able to guarantee a set of local structural and functional properties in the generated graphs. 
Experiments reveal that our model is able to learn and mimic the generative process of several well-known random graph models and can be used to discover new molecules more effectively than several state of the art methods. Moreover, by utilizing Bayesian optimization over the continuous latent representation of molecules our model finds, we can also identify molecules that maximize certain desirable properties more effectively than alternatives.", "title": "" }, { "docid": "ddb0a3bc0a9367a592403d0fc0cec0a5", "text": "Fluorescence microscopy is a powerful quantitative tool for exploring regulatory networks in single cells. However, the number of molecular species that can be measured simultaneously is limited by the spectral overlap between fluorophores. Here we demonstrate a simple but general strategy to drastically increase the capacity for multiplex detection of molecules in single cells by using optical super-resolution microscopy (SRM) and combinatorial labeling. As a proof of principle, we labeled mRNAs with unique combinations of fluorophores using fluorescence in situ hybridization (FISH), and resolved the sequences and combinations of fluorophores with SRM. We measured mRNA levels of 32 genes simultaneously in single Saccharomyces cerevisiae cells. These experiments demonstrate that combinatorial labeling and super-resolution imaging of single cells is a natural approach to bring systems biology into single cells.", "title": "" }, { "docid": "7a67bccffa6222f8129a90933962e285", "text": "BACKGROUND\nPast research has found that playing a classic prosocial video game resulted in heightened prosocial behavior when compared to a control group, whereas playing a classic violent video game had no effect. Given purported links between violent video games and poor social behavior, this result is surprising. Here our aim was to assess whether this finding may be due to the specific games used. That is, modern games are experienced differently from classic games (more immersion in virtual environments, more connection with characters, etc.) and it may be that playing violent video games impacts prosocial behavior only when contemporary versions are used.\n\n\nMETHODS AND FINDINGS\nExperiments 1 and 2 explored the effects of playing contemporary violent, non-violent, and prosocial video games on prosocial behavior, as measured by the pen-drop task. We found that slight contextual changes in the delivery of the pen-drop task led to different rates of helping but that the type of game played had little effect. Experiment 3 explored this further by using classic games. Again, we found no effect.\n\n\nCONCLUSIONS\nWe failed to find evidence that playing video games affects prosocial behavior. Research on the effects of video game play is of significant public interest. It is therefore important that speculation be rigorously tested and findings replicated. Here we fail to substantiate conjecture that playing contemporary violent video games will lead to diminished prosocial behavior.", "title": "" }, { "docid": "25c41bdba8c710b663cb9ad634b7ae5d", "text": "Massive data streams are now fundamental to many data processing applications. For example, Internet routers produce large scale diagnostic data streams. Such streams are rarely stored in traditional databases, and instead must be processed “on the fly” as they are produced. Similarly, sensor networks produce multiple data streams of observations from their sensors. 
There is growing focus on manipulating data streams, and hence, there is a need to identify basic operations of interest in managing data streams, and to support them efficiently. We propose computation of the Hamming norm as a basic operation of interest. The Hamming norm formalises ideas that are used throughout data processing. When applied to a single stream, the Hamming norm gives the number of distinct items that are present in that data stream, which is a statistic of great interest in databases. When applied to a pair of streams, the Hamming norm gives an important measure of (dis)similarity: the number of unequal item counts in the two streams. Hamming norms have many uses in comparing data streams. We present a novel approximation technique for estimating the Hamming norm for massive data streams; this relies on what we call the “l0 sketch” and we prove its accuracy. We test our approximation method on a large quantity of synthetic and real stream data, and show that the estimation is accurate to within a few percentage points.", "title": "" }, { "docid": "fa63fbdfc0be5f2675c5f65ee0798f88", "text": "Twitter is a microblogging site where users post tweets expressing their opinions towards a service provider on its Twitter page, and it is useful to analyze the sentiment of these tweets. Analysis here means finding the attitude of users or customers, whether it is positive, negative, neutral, or in between positive-neutral or negative-neutral, and representing it. In such a system or tool, tweets are fetched from Twitter regarding shopping websites or any other Twitter pages, such as businesses, mobile brands, clothing brands, or live events like sports matches and elections, and their polarity is determined. These results will help the service provider find out about the customers' view toward their products.", "title": "" }, { "docid": "db806183810547435075eb6edd28d630", "text": "Bilinear models provide an appealing framework for mixing and merging information in Visual Question Answering (VQA) tasks. They help to learn high level associations between question meaning and visual concepts in the image, but they suffer from huge dimensionality issues. We introduce MUTAN, a multimodal tensor-based Tucker decomposition to efficiently parametrize bilinear interactions between visual and textual representations. Additionally to the Tucker framework, we design a low-rank matrix-based decomposition to explicitly constrain the interaction rank. With MUTAN, we control the complexity of the merging scheme while keeping nice interpretable fusion relations. We show how the Tucker decomposition framework generalizes some of the latest VQA architectures, providing state-of-the-art results.", "title": "" }, { "docid": "a2c7ee4e586bc456ad6bfcdf3b1cc84b", "text": "We present a taxonomy of the Artificial Intelligence (AI) methods currently applied for algorithmic music composition. The area known as algorithmic music composition concerns the research on processes of composing music pieces automatically by a computer system.
The use of AI for algorithmic music consists of the application of AI techniques as the main tools for composition generation. There are several models of AI used in music composition, such as heuristics in evolutionary algorithms, neural networks, stochastic methods, generative models, agents, decision trees, declarative programming and grammatical representation. In this survey we present the trends in techniques for automatic music composition. We summarize several research projects from the last seven years and highlight the directions of music composition based on AI techniques.", "title": "" }, { "docid": "7feea3bcba08a889ba779a23f79556d7", "text": "In this report, monodispersed ultra-small Gd2O3 nanoparticles capped with hydrophobic oleic acid (OA) were synthesized with an average particle size of 2.9 nm. Two methods were introduced to modify the surface coating to be hydrophilic for bio-applications. With a hydrophilic coating, the polyvinyl pyrrolidone (PVP) coated Gd2O3 nanoparticles (Gd2O3-PVP) showed a reduced longitudinal T1 relaxation time compared with OA and cetyltrimethylammonium bromide (CTAB) co-coated Gd2O3 (Gd2O3-OA-CTAB) in the relaxation study. The Gd2O3-PVP was thus chosen for further application study in MRI with an improved longitudinal relaxivity r1 of 12.1 mM(-1) s(-1) at 7 T, which is around 3 times that of the commercial contrast agent Magnevist(®). In vitro cell viability in HK-2 cells indicated negligible cytotoxicity of Gd2O3-PVP within the preclinical dosage. In vivo MR imaging study of Gd2O3-PVP nanoparticles demonstrated considerable signal enhancement in the liver and kidney with a long blood circulation time. Notably, the OA capping agent was replaced by PVP through ligand exchange on the Gd2O3 nanoparticle surface. The hydrophilic PVP provides the Gd2O3 nanoparticles with a polar surface for bio-application, and the obtained Gd2O3-PVP could be used as an in vivo indicator of reticuloendothelial activity.", "title": "" } ]
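The gambling-based entropy estimator described in the printed-English passage above reduces to a single formula, Ĥ_n(X) = (1 − (1/n) log_27 S_n) log_2 27 bits/symbol. The short Python sketch below only illustrates that arithmetic under stated assumptions (a 27-symbol alphabet of letters plus space, proportional bets paid at 27-for-1 odds); the betting policy and the toy data are hypothetical and not taken from the cited paper.

import math

ALPHABET = 27  # 26 letters plus space, as in the printed-English experiments

def entropy_estimate(bet_fractions, outcomes):
    """Gambling estimate of entropy in bits/symbol.

    bet_fractions: one dict per position, mapping each candidate symbol to the
        fraction of current capital wagered on it (fractions sum to 1).
    outcomes: the symbols that actually occurred.
    Capital starts at 1 and every bet pays 27-for-1 on the true symbol.
    """
    capital = 1.0
    for bets, symbol in zip(bet_fractions, outcomes):
        capital *= ALPHABET * bets.get(symbol, 0.0)  # wagering 0 on the truth wipes out capital
    n = len(outcomes)
    # H_hat = (1 - (1/n) * log_27(S_n)) * log2(27)
    return (1.0 - math.log(capital, ALPHABET) / n) * math.log2(ALPHABET)

# Toy run: betting 60% then 50% of capital on the symbols that occur.
bets = [{"e": 0.6, "t": 0.4}, {" ": 0.5, "a": 0.5}]
print(entropy_estimate(bets, ["e", " "]))  # about 0.87 bits/symbol

A bettor who always places a fraction p of capital on the symbol that actually occurs obtains an estimate of exactly −log2(p) bits/symbol, which is why a skilled gambler drives the estimate down toward the true entropy.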
scidocsrr
28ca600b750b41fb6cbc9233eafa950f
Hp-Apriori: Horizontal parallel-apriori algorithm for frequent itemset mining from big data
[ { "docid": "461ee7b6a61a6d375a3ea268081f80f5", "text": "In this paper, we review the background and state-of-the-art of big data. We first introduce the general background of big data and review related technologies, such as could computing, Internet of Things, data centers, and Hadoop. We then focus on the four phases of the value chain of big data, i.e., data generation, data acquisition, data storage, and data analysis. For each phase, we introduce the general background, discuss the technical challenges, and review the latest advances. We finally examine the several representative applications of big data, including enterprise management, Internet of Things, online social networks, medial applications, collective intelligence, and smart grid. These discussions aim to provide a comprehensive overview and big-picture to readers of this exciting area. This survey is concluded with a discussion of open problems and future directions.", "title": "" } ]
[ { "docid": "2d953dda47c80304f8b2fa0d6e08c2f8", "text": "A facial recognition system is an application which is used for identifying or verifying a person from a digital image or a video frame. One of the ways to do this is by comparing selected facial features from the image and a facial database. It is generally used in security systems and can be compared to other biometrics such as fingerprint or eye iris recognition systems. Areas such as network security, content indexing and retrieval, and video compression benefit from face recognition technology since people themselves are the main source of interest. Network access control via face recognition not only makes hackers virtually impossible to steal one's \"password\", but also increases the user friendliness in human-computer interaction. Although humans have always had the innate ability to recognize and distinguish between faces, yet computers only recently have shown the same ability. In the mid 1960s, scientists began work on using the computer to recognize human faces. Since then, facial recognition software has come a long way. In this article, I have explored the reasons behind using facial recognition, the products developed to implement this biometrics technique and also the criticisms and advantages that are bounded with it.", "title": "" }, { "docid": "12e088ccb86094d58c682e4071cce0a6", "text": "Are there systematic differences between people who use social network sites and those who stay away, despite a familiarity with them? Based on data from a survey administered to a diverse group of young adults, this article looks at the predictors of SNS usage, with particular focus on Facebook, MySpace, Xanga, and Friendster. Findings suggest that use of such sites is not randomly distributed across a group of highly wired users. A person's gender, race and ethnicity, and parental educational background are all associated with use, but in most cases only when the aggregate concept of social network sites is disaggregated by service. Additionally, people with more experience and autonomy of use are more likely to be users of such sites. Unequal participation based on user background suggests that differential adoption of such services may be contributing to digital inequality.", "title": "" }, { "docid": "a21d1956026b29bc67b92f8508a62e1c", "text": "We introduce several new formulations for sparse nonnegative matrix approximation. Subsequently, we solve these formulations by developing generic algorithms. Further, to help selecting a particular sparse formulation, we briefly discuss the interpretation of each formulation. Finally, preliminary experiments are presented to illustrate the behavior of our formulations and algorithms.", "title": "" }, { "docid": "3a80168bda1d5d92a5d767117581806a", "text": "During the last years a wide range of algorithms and devices have been made available to easily acquire range images. The increasing abundance of depth data boosts the need for reliable and unsupervised analysis techniques, spanning from part registration to automated segmentation. In this context, we focus on the recognition of known objects in cluttered and incomplete 3D scans. Locating and fitting a model to a scene are very important tasks in many scenarios such as industrial inspection, scene understanding, medical imaging and even gaming. For this reason, these problems have been addressed extensively in the literature. 
Several of the proposed methods adopt local descriptor-based approaches, while a number of hurdles still hinder the use of global techniques. In this paper we offer a different perspective on the topic: We adopt an evolutionary selection algorithm that seeks global agreement among surface points, while operating at a local level. The approach effectively extends the scope of local descriptors by actively selecting correspondences that satisfy global consistency constraints, allowing us to attack a more challenging scenario where model and scene have different, unknown scales. This leads to a novel and very effective pipeline for 3D object recognition, which is validated with an extensive set of experiments and comparisons with recent techniques at the state of the art.", "title": "" }, { "docid": "d0690dcac9bf28f1fe6e2153035f898c", "text": "The estimation of the homography between two views is a key step in many applications involving multiple view geometry. A homography exists between the projections of points on a 3D plane in two views, and between the projections of all points if the cameras undergo purely rotational motion. A number of algorithms have been proposed for the estimation of the homography relation between two images of a planar scene. They use features or primitives ranging from simple points to complex ones like non-parametric curves. Different algorithms make different assumptions on the imaging setup and what is known about it. This article surveys several homography estimation techniques from the literature. The essential theory behind each method is presented briefly and compared with the others. Experiments aimed at providing a representative analysis and comparison of the methods discussed are also presented in the paper.", "title": "" }, { "docid": "dba24c6bf3e04fc6d8b99a64b66cb464", "text": "Recommender systems have to serve in online environments which can be highly non-stationary. Traditional recommender algorithms may periodically rebuild their models, but they cannot adjust to quick changes in trends caused by timely information. In our experiments, we observe that even a simple but online-trained recommender model can perform significantly better than its batch version. We investigate online learning based recommender algorithms that can efficiently handle non-stationary data sets. We evaluate our models over seven publicly available data sets. Our experiments are available as an open source project.", "title": "" }, { "docid": "0a367b44bec664e284bff945c6f9f6e6", "text": "In the past decades, Model Order Reduction (MOR) has demonstrated its robustness and wide applicability for simulating large-scale mathematical models in engineering and the sciences. Recently, MOR has been intensively further developed for increasingly complex dynamical systems. Wide applications of MOR have been found not only in simulation, but also in optimization and control. In this survey paper, we review some popular MOR methods for linear and nonlinear large-scale dynamical systems, mainly used in electrical and control engineering, in computational electromagnetics, as well as in micro- and nano-electro-mechanical systems (NEMS/MEMS) design. This complements recent surveys on generating reduced-order models for parameter-dependent problems [37, 53, 169] which we do not consider here.
Besides reviewing existing methods and the computational techniques needed to implement them, open issues are discussed, and some new results are proposed.", "title": "" }, { "docid": "3803c39e0445fcddc7e6565ac7efd33b", "text": "The terms 'quality-of-life', 'wellbeing' and 'happiness' denote different meanings; sometimes they are used as an umbrella term for all of value, and the other times to denote special merits. This paper is about the specific meanings of the terms. It proposes a classification based on two bi-partitions; between life 'chances' and life 'results', and between 'outer' and 'inner' qualities. Together these dichotomies imply four qualities of life: 1) livability of the environment, 2) life-ability of the individual, 3) external utility of life and 4) inner appreciation of life. This fourfold matrix is applied in three ways: firstly to place related notions and alternative classifications, secondly to explore substantive meanings in various measures for quality of life and thirdly to find out whether quality-of-life can be measured comprehensively. This last question is answered in the negative. Current sum-scores make little sense. The most inclusive summary measure is still how long and happily people live. There are many words that are used to indicate how well we are doing. Some of these signify overall thriving; currently the terms 'quality of life' and 'wellbeing' are used for this purpose, and sometimes the word 'health' . In the past the terms 'happiness' and 'welfare' were more commonly used. There are several problems with these terms. One problem is that these terms do not have an unequivocal meaning. Sometimes they are used as an umbrella for all that is good, but on other occasions they denote specific merit. For instance: the term 'wellbeing' is used to denote the quality of life-as-a-whole and to evaluate lifeaspects such as dwelling conditions or employment chances. Likewise, the phrase 'quality-of-life' refers in some contexts to the quality of society and in other instances to the happiness of its citizens. There is little view on a consensus on the meaning of these words; the trend is rather to divergence. Over time, connotations tend to become more specific and manifold. Discursive communities tend to develop their own quality-of-life notions. The second problem is in the connotation of inclusiveness. The use of the words as an umbrella term suggests that there is something as 'overall' quality of life, and that specific merits can be meaningfully added in some wider worth; however that holistic assumption is dubious. Philosophers have never agreed on one final definition of quality-of-life, and in the practice of empirical quality-of-life measurement we see comparisons of apples and pears. The above problem of many meanings is partly caused by the suggestion of inclusiveness. One of the reasons why the meanings become more specific is that the rhetoric of encompassingnes crumbles when put to practice. The broad overall meaning appears typically unfeasible in measurement and decision making. Hence connotations tend to become more specific and diverse. As result, rhetoric denotation of the overall good requires new terms periodically. New expressions pop up as against more narrow meanings. For instance, in the field of healthcare the term 'quality of life' emerged to convey the idea that there is more than mere quantity of survival time. Likewise, the word 'wellbeing' came into use in contrast to sheer economic 'welfare' . 
Yet, in the long run these new terms fall victim to their success. Once they are adopted as a goal for policy, analysts and trend watchers start extracting palpable meanings and make the concepts ever more 'multi-dimensional'. Obviously, this communicative practice causes much confusion and impedes the development of knowledge in this field. In reaction, there have been many proposals for standard definitions. Unfortunately, this has not really helped. Firstly, such scientific definitions hardly affect the common use of language. Secondly, they add to the confusion, because scholars are not able to agree on one meaning either; for instance, McCall (1975) defines quality-of-life as 'necessary conditions for happiness', while Terhune (1973) defines it as subjective satisfaction itself. Likewise, Colby (1987) describes wellbeing as 'adaptive potential', whereas Jolles & Stalpers (1978: 31) define it as 'basic commend to life'. Elsewhere I have listed fifteen definitions of happiness (Veenhoven 1984:16-17). Recently Noll (1999) listed many meanings of quality of life in nations. Since we cannot really force the use of words, we can better try to clarify their use. We can elucidate the matter by distinguishing different meanings. An analytic tool for this purpose is proposed in this article. First a fourfold classification of qualities of life is presented (§ 1). By means of this taxonomy common terms and distinctions are placed (§ 2). The matrix is then used to chart substantive meanings in common measures of the good life (§ 3). Next the question is raised whether we can meaningfully speak about comprehensive quality of life (§ 4). 1. GROUPING QUALITIES OF LIFE Terms like 'quality of life', 'wellbeing' and 'happiness' denote evaluations. When sorting out what kind of evaluation they aim at, we must establish what thing is evaluated by what standard. 1.1 Quality of what life? In the case of 'quality of life' the object of evaluation is 'life'. Mostly that life is an individual life, the quality of life of a person. Yet the term is also used for aggregates, for instance when we speak about the quality-of-life of women. In that case the term usually refers to the average of individuals. Sometimes the term is used in reference to humanity as a whole. In this context the object of evaluation is mostly the average individual, and the long-term destiny of the species. The evaluation then concerns 'human life', rather than 'human lives'. The term 'quality of life' does not refer exclusively to human life. It is also used for animals, for instance in discussions about conditions of slaughter cattle. At a higher level of abstraction it is also used for all life. Quality of life is then the condition of the entire ecosystem. Ecological propagandists like this confusion of object matter, because it suggests that protection of endangered species is also good for the individual human. The terms 'wellbeing' and 'happiness' denote even more varied objects of evaluation, because they are also used in reference to social systems. When speaking about the 'public wellbeing' or the 'happiness of the nation' we often aim at the collective level, how well society functions and maintains itself. Propagandists also exploit this ambiguity, in this case as a means to disguise differences in interest between individuals and society.
In this paper I focus on the quality of individual human lives. As we will see, that is difficult enough.", "title": "" }, { "docid": "52f912cd5a8def1122d7ce6ba7f47271", "text": "System event logs have been frequently used as a valuable resource in data-driven approaches to enhance system health and stability. A typical procedure in system log analytics is to first parse unstructured logs, and then apply data analysis on the resulting structured data. Previous work on parsing system event logs focused on offline, batch processing of raw log files. But increasingly, applications demand online monitoring and processing. We propose an online streaming method Spell, which utilizes a longest common subsequence based approach, to parse system event logs. We show how to dynamically extract log patterns from incoming logs and how to maintain a set of discovered message types in streaming fashion. Evaluation results on large real system logs demonstrate that even compared with the offline alternatives, Spell shows its superiority in terms of both efficiency and effectiveness.", "title": "" }, { "docid": "6620c12ab567b90aec86b4d2f8532fe4", "text": "Canal of Nuck abnormalities are a rare but important cause of morbidity in girls, most often those younger than 5 years of age. The canal of Nuck, which is the female equivalent of the male processus vaginalis, is a protrusion of parietal peritoneum that extends through the inguinal canal and terminates in the labia majora. The canal typically obliterates early in life, but in some cases the canal can partially or completely fail to close, potentially resulting in a hydrocele or hernia of pelvic contents. Recognition of this entity is especially important in cases of ovarian hernia due to the risk of incarceration and torsion. We aim to increase awareness of this condition by reviewing the embryology, anatomy and diagnosis of canal of Nuck disorders with imaging findings on US, CT and MRI using several cases from a single institution.", "title": "" }, { "docid": "c9a18fc3919462cc232b0840a4844ae2", "text": "Systematic gene expression analyses provide comprehensive information about the transcriptional response to different environmental and developmental conditions. With enough gene expression data points, computational biologists may eventually generate predictive computer models of transcription regulation. Such models will require computational methodologies consistent with the behavior of known biological systems that remain tractable. We represent regulatory relationships between genes as linear coefficients or weights, with the \"net\" regulation influence on a gene's expression being the mathematical summation of the independent regulatory inputs.
Test regulatory networks generated with this approach display stable and cyclically stable gene expression levels, consistent with known biological systems. We include variables to model the effect of environmental conditions on transcription regulation and observed various alterations in gene expression patterns in response to environmental input. Finally, we use a derivation of this model system to predict the regulatory network from simulated input/output data sets and find that it accurately predicts all components of the model, even with noisy expression data.", "title": "" }, { "docid": "47866c8eb518f962213e3a2d8c3ab8d3", "text": "With the increasing fears of the impacts of the high penetration rates of Photovoltaic (PV) systems, a technical study about their effects on the power quality metrics of the utility grid is required. Since such study requires a complete modeling of the PV system in an electromagnetic transient software environment, PSCAD was chosen. This paper investigates a grid-tied PV system that is prepared in PSCAD. The model consists of PV array, DC link capacitor, DC-DC buck converter, three phase six-pulse inverter, AC inductive filter, transformer and a utility grid equivalent model. The paper starts with investigating the tasks of the different blocks of the grid-tied PV system model. It also investigates the effect of variable atmospheric conditions (irradiation and temperature) on the performance of the different components in the model. DC-DC converter and inverter in this model use PWM and SPWM switching techniques, respectively. Finally, total harmonic distortion (THD) analysis on the inverter output current at PCC will be applied and the obtained THD values will be compared with the limits specified by the regulating standards such as IEEE Std 519-1992.", "title": "" }, { "docid": "991b788240a35167964345b31da96ffb", "text": "Intimate partner violence (IPV) has been recognised as a significant problem amongst forcibly displaced communities, and great progress has been made by the United Nations High Commission for Refugees (UNHCR) in responding to IPV and other forms of sexual and gender based violence. However, they have not always effectively engaged refugee communities in these activities, with potentially negative consequences for the health and protection of women. This study was conducted in Kakuma refugee camp, north-west Kenya. Eighteen focus group discussions were conducted with 157 refugees from various nationalities, including Sudanese, Somali, Ethiopian, and Congolese. They focused on the nature and consequences of IPV in Kakuma. The aim of this paper is to explore how refugees in Kakuma talk about the ways that IPV is dealt with, focusing particularly on the ways that community responses are said to interact with formal response systems established by UNHCR and its implementing partners. Refugees talked about using a 'hierarchy of responses' to IPV, with only particularly serious or intransigent cases reaching UNHCR or its implementing agencies. Some male refugees described being mistrustful of agency responses, because agencies were believed to favour women and to prioritise protecting the woman at all costs, even if that means separating her from the family. Whilst community responses to IPV might often be appropriate and helpful, the findings of the current study suggest that in Kakuma they do not necessarily result in the protection of women. 
Yet women in Kakuma are reported to be reluctant to report their cases to UNHCR and its implementing agencies. A more effective protection response from UNHCR might involve closer co-operation with individuals and structures within the refugee communities to develop a co-ordinated response to IPV.", "title": "" }, { "docid": "e14b936ecee52765078d77088e76e643", "text": "In this paper, a novel code division multiplexing (CDM) algorithm-based reversible data hiding (RDH) scheme is presented. The covert data are denoted by different orthogonal spreading sequences and embedded into the cover image. The original image can be completely recovered after the data have been extracted exactly. The Walsh Hadamard matrix is employed to generate orthogonal spreading sequences, by which the data can be overlappingly embedded without interfering each other, and multilevel data embedding can be utilized to enlarge the embedding capacity. Furthermore, most elements of different spreading sequences are mutually cancelled when they are overlappingly embedded, which maintains the image in good quality even with a high embedding payload. A location-map free method is presented in this paper to save more space for data embedding, and the overflow/underflow problem is solved by shrinking the distribution of the image histogram on both the ends. This would further improve the embedding performance. Experimental results have demonstrated that the CDM-based RDH scheme can achieve the best performance at the moderate-to-high embedding capacity compared with other state-of-the-art schemes.", "title": "" }, { "docid": "9f25bc7a2dadb2b8c0d54ac6e70e92e5", "text": "Our research suggests that ML technologies will indeed grow more pervasive, but within job categories, what we define as the “suitability for machine learning” (SML) of work tasks varies greatly. We further propose that our SML rubric, illustrating the variability in task-level SML, can serve as an indicator for the potential reorganization of a job or an occupation because the set of tasks that form a job can be separated and re-bundled to redefine the job. Evaluating worker activities using our rubric, in fact, has the benefit of focusing on what ML can do instead of grouping all forms of automation together.", "title": "" }, { "docid": "af25bc1266003202d3448c098628aee8", "text": "Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks. However, due to the model capacity required to capture such representations, they are often susceptible to overfitting and therefore require proper regularization in order to generalize well. In this paper, we show that the simple regularization technique of randomly masking out square regions of input during training, which we call cutout, can be used to improve the robustness and overall performance of convolutional neural networks. Not only is this method extremely easy to implement, but we also demonstrate that it can be used in conjunction with existing forms of data augmentation and other regularizers to further improve model performance. We evaluate this method by applying it to current state-of-the-art architectures on the CIFAR10, CIFAR-100, and SVHN datasets, yielding new state-ofthe-art results of 2.56%, 15.20%, and 1.30% test error respectively. 
Code available at https://github.com/ uoguelph-mlrg/Cutout.", "title": "" }, { "docid": "6924a393f4c1b025ba949ea70ca1ba70", "text": "We present Project Zanzibar: a flexible mat that can locate, uniquely identify and communicate with tangible objects placed on its surface, as well as sense a user's touch and hover hand gestures. We describe the underlying technical contributions: efficient and localised Near Field Communication (NFC) over a large surface area; object tracking combining NFC signal strength and capacitive footprint detection, and manufacturing techniques for a rollable device form-factor that enables portability, while providing a sizable interaction area when unrolled. In addition, we detail design patterns for tangibles of varying complexity and interactive capabilities, including the ability to sense orientation on the mat, harvest power, provide additional input and output, stack, or extend sensing outside the bounds of the mat. Capabilities and interaction modalities are illustrated with self-generated applications. Finally, we report on the experience of professional game developers building novel physical/digital experiences using the platform.", "title": "" }, { "docid": "ede2ac0db923cf825853486f92ed19cf", "text": "Personalized recommendation has become increasingly pervasive nowadays. Users receive recommendations on products, movies, point-of-interests and other online services. Traditional collaborative filtering techniques have demonstrated effectiveness in a wide range of recommendation tasks, but they are unable to capture complex relationships between users and items. There is a surge of interest in applying deep learning to recommender systems due to its nonlinear modeling capacity and recent success in other domains such as computer vision and speech recognition. However, prior work does not incorporate contexual information, which is usually largely available in many recommendation tasks. In this paper, we propose a deep learning based model for contexual recommendation. Specifically, the model consists of a denoising autoencoder neural network architecture augmented with a context-driven attention mechanism, referred to as Attentive Contextual Denoising Autoencoder (ACDA). The attention mechanism is utilized to encode the contextual attributes into the hidden representation of the user's preference, which associates personalized context with each user's preference to provide recommendation targeted to that specific user. Experiments conducted on multiple real-world datasets from Meetup and Movielens on event and movie recommendations demonstrate the effectiveness of the proposed model over the state-of-the-art recommenders.", "title": "" }, { "docid": "35e377e94b9b23283eabf141bde029a2", "text": "We present a global optimization approach to optical flow estimation. The approach optimizes a classical optical flow objective over the full space of mappings between discrete grids. No descriptor matching is used. The highly regular structure of the space of mappings enables optimizations that reduce the computational complexity of the algorithm's inner loop from quadratic to linear and support efficient matching of tens of thousands of nodes to tens of thousands of displacements. We show that one-shot global optimization of a classical Horn-Schunck-type objective over regular grids at a single resolution is sufficient to initialize continuous interpolation and achieve state-of-the-art performance on challenging modern benchmarks.", "title": "" } ]
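The cutout passage above describes the augmentation fully enough to sketch: during training, zero out one randomly placed square region of each input image. The NumPy snippet below is a generic illustration of that idea; the square size, the zero fill value and the H x W x C image layout are assumptions rather than details taken from the authors' released implementation linked in the passage.

import numpy as np

def cutout(image, size=16, rng=np.random):
    """Zero out one randomly centred size x size square of an H x W (x C) image."""
    h, w = image.shape[:2]
    cy, cx = rng.randint(h), rng.randint(w)          # centre; the square may be clipped at borders
    y1, y2 = max(cy - size // 2, 0), min(cy + size // 2, h)
    x1, x2 = max(cx - size // 2, 0), min(cx + size // 2, w)
    out = image.copy()
    out[y1:y2, x1:x2] = 0                            # masked region set to zero
    return out

img = np.random.rand(32, 32, 3).astype(np.float32)   # e.g. a CIFAR-10 sized image
augmented = cutout(img, size=16)                      # applied per sample at training time only

Because the mask is applied only during training, evaluation uses untouched images, and the technique composes with the standard flips and crops mentioned in the passage as other forms of data augmentation.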
scidocsrr
903867c61520437eae8cc588e0312739
New Flexible Silicone-Based EEG Dry Sensor Material Compositions Exhibiting Improvements in Lifespan, Conductivity, and Reliability
[ { "docid": "8415585161d51b500f99aa36650a67d9", "text": "A brain-computer interface (BCI) is a communication system that can help users interact with the outside environment by translating brain signals into machine commands. The use of electroencephalographic (EEG) signals has become the most common approach for a BCI because of their usability and strong reliability. Many EEG-based BCI devices have been developed with traditional wet- or micro-electro-mechanical-system (MEMS)-type EEG sensors. However, those traditional sensors have uncomfortable disadvantage and require conductive gel and skin preparation on the part of the user. Therefore, acquiring the EEG signals in a comfortable and convenient manner is an important factor that should be incorporated into a novel BCI device. In the present study, a wearable, wireless and portable EEG-based BCI device with dry foam-based EEG sensors was developed and was demonstrated using a gaming control application. The dry EEG sensors operated without conductive gel; however, they were able to provide good conductivity and were able to acquire EEG signals effectively by adapting to irregular skin surfaces and by maintaining proper skin-sensor impedance on the forehead site. We have also demonstrated a real-time cognitive stage detection application of gaming control using the proposed portable device. The results of the present study indicate that using this portable EEG-based BCI device to conveniently and effectively control the outside world provides an approach for researching rehabilitation engineering.", "title": "" } ]
[ { "docid": "8acf348ea6019eac856b01b0f4012f9c", "text": "Advanced high-voltage (10 kV-15 kV) silicon carbide (SiC) power MOSFETs described in this paper have the potential to significantly impact the system performance, size, weight, high-temperature reliability, and cost of next-generation energy conversion and transmission systems. In this paper, we report our recently developed 10 kV/20 A SiC MOSFETs with a chip size of 8.1 × 8.1 mm2 and a specific on-resistance (RON, SP) of 100 MΩ-cm2 at 25 °C. We also developed 15 kV/10 A SiC power MOSFETs with a chip size of 8 × 8 mm2 and a RON, SP of 204 mQ cm2 at 25 °C. To our knowledge, this 15 kV SiC MOSFET is the highest voltage rated unipolar power switch. Compared to the commercial 6.5 kV Silicon (Si) IGBTs, these 10 kV and 15 kV SiC MOSFETs exhibit extremely low switching losses even when they are switched at 2-3× higher voltage. The benefits of using these 10 kV and 15 kV SiC MOSFETs include simplifying from multilevel to two-level topology and removing the need for time-interleaving by improving the switching frequency from a few hundred Hz for Si based systems to ≥ 10 kHz for hard-switched SiC based systems.", "title": "" }, { "docid": "becbcb6ca7ac87a3e43dbc65748b258a", "text": "We present Mean Box Pooling, a novel visual representation that pools over CNN representations of a large number, highly overlapping object proposals. We show that such representation together with nCCA, a successful multimodal embedding technique, achieves state-of-the-art performance on the Visual Madlibs task. Moreover, inspired by the nCCA’s objective function, we extend classical CNN+LSTM approach to train the network by directly maximizing the similarity between the internal representation of the deep learning architecture and candidate answers. Again, such approach achieves a significant improvement over the prior work that also uses CNN+LSTM approach on Visual Madlibs.", "title": "" }, { "docid": "c3b1ad57bab87d796562a771d469b18d", "text": "The focus of this paper is on one diode photovoltaic cell model. The theory as well as the construction and working of photovoltaic cells using single diode method is also presented. Simulation studies are carried out with different temperatures. Based on this study a conclusion is drawn with comparison with ideal diode. General TermssIn recent years, significant photovoltaic (PV) deployment has occurred, particularly in Germany, Spain and Japan [1]. Also, PV energy is going to become an important source in coming years in Portugal, as it has highest source of sunshine radiation in Europe. Presently the tenth largest PV power plant in the world is in Moura, Portugal, which has an installed capacity of 46 MW and aims to reach 1500 MW of installed capacity by 2020, as stated by the Portuguese National Strategy ENE 2020, multiplying tenfold the existing capacity [2]. The solar cells are basically made of semiconductors which are manufactured using different process. These semiconductors [4]. The intrinsic properties and the incoming solar radiation are responsible for the type of electric energy produced [5]. The solar radiation is composed of photons of different energies, and some are absorbed at the p-n junction. Photons with energies lower than the bandgap of the solar cell are useless and generate no voltage or electric current. Photons with energy superior to the band gap generate electricity, but only the energy corresponding to the band gap is used. 
The remainder of energy is dissipated as heat in the body of the solar cell [6]. KeywordssPV cell, solar cell, one diode model", "title": "" }, { "docid": "b137e24f41def95c5bb4776de48804ef", "text": "Adequate sleep is essential for general healthy functioning. This paper reviews recent research on the effects of chronic sleep restriction on neurobehavioral and physiological functioning and discusses implications for health and lifestyle. Restricting sleep below an individual's optimal time in bed (TIB) can cause a range of neurobehavioral deficits, including lapses of attention, slowed working memory, reduced cognitive throughput, depressed mood, and perseveration of thought. Neurobehavioral deficits accumulate across days of partial sleep loss to levels equivalent to those found after 1 to 3 nights of total sleep loss. Recent experiments reveal that following days of chronic restriction of sleep duration below 7 hours per night, significant daytime cognitive dysfunction accumulates to levels comparable to that found after severe acute total sleep deprivation. Additionally, individual variability in neurobehavioral responses to sleep restriction appears to be stable, suggesting a trait-like (possibly genetic) differential vulnerability or compensatory changes in the neurobiological systems involved in cognition. A causal role for reduced sleep duration in adverse health outcomes remains unclear, but laboratory studies of healthy adults subjected to sleep restriction have found adverse effects on endocrine functions, metabolic and inflammatory responses, suggesting that sleep restriction produces physiological consequences that may be unhealthy.", "title": "" }, { "docid": "21324c71d70ca79d2f2c7117c759c915", "text": "The wide-spread of social media provides unprecedented sources of written language that can be used to model and infer online demographics. In this paper, we introduce a novel visual text analytics system, DemographicVis, to aid interactive analysis of such demographic information based on user-generated content. Our approach connects categorical data (demographic information) with textual data, allowing users to understand the characteristics of different demographic groups in a transparent and exploratory manner. The modeling and visualization are based on ground truth demographic information collected via a survey conducted on Reddit.com. Detailed user information is taken into our modeling process that connects the demographic groups with features that best describe the distinguishing characteristics of each group. Features including topical and linguistic are generated from the user-generated contents. Such features are then analyzed and ranked based on their ability to predict the users' demographic information. To enable interactive demographic analysis, we introduce a web-based visual interface that presents the relationship of the demographic groups, their topic interests, as well as the predictive power of various features. We present multiple case studies to showcase the utility of our visual analytics approach in exploring and understanding the interests of different demographic groups. 
We also report results from a comparative evaluation, showing that the DemographicVis is quantitatively superior or competitive and subjectively preferred when compared to a commercial text analysis tool.", "title": "" }, { "docid": "feeb51ad0c491c86a6018e92e728c3f0", "text": "This paper discusses why traditional reinforcement learning methods, and algorithms applied to those models, result in poor performance in situated domains characterized by multiple goals, noisy state, and inconsistent reinforcement. We propose a methodology for designing reinforcement functions that take advantage of implicit domain knowledge in order to accelerate learning in such domains. The methodology involves the use of heterogeneous reinforcement functions and progress estimators, and applies to learning in domains with a single agent or with multiple agents. The methodology is experimentally validated on a group of mobile robots learning a foraging task.", "title": "" }, { "docid": "10abe464698cf38cce7df46718dfa83c", "text": "We have developed an approach using Bayesian networks to predict protein-protein interactions genome-wide in yeast. Our method naturally weights and combines into reliable predictions genomic features only weakly associated with interaction (e.g., messenger RNAcoexpression, coessentiality, and colocalization). In addition to de novo predictions, it can integrate often noisy, experimental interaction data sets. We observe that at given levels of sensitivity, our predictions are more accurate than the existing high-throughput experimental data sets. We validate our predictions with TAP (tandem affinity purification) tagging experiments. Our analysis, which gives a comprehensive view of yeast interactions, is available at genecensus.org/intint.", "title": "" }, { "docid": "5dddbc2b2c53436c9d2176045118dce5", "text": "This work introduces a method to tune a sequence-based generative model for molecular de novo design that through augmented episodic likelihood can learn to generate structures with certain specified desirable properties. We demonstrate how this model can execute a range of tasks such as generating analogues to a query structure and generating compounds predicted to be active against a biological target. As a proof of principle, the model is first trained to generate molecules that do not contain sulphur. As a second example, the model is trained to generate analogues to the drug Celecoxib, a technique that could be used for scaffold hopping or library expansion starting from a single molecule. Finally, when tuning the model towards generating compounds predicted to be active against the dopamine receptor type 2, the model generates structures of which more than 95% are predicted to be active, including experimentally confirmed actives that have not been included in either the generative model nor the activity prediction model. Graphical abstract .", "title": "" }, { "docid": "73b150681d7de50ada8e046a3027085f", "text": "We introduce a new model, the Recurrent Entity Network (EntNet). It is equipped with a dynamic long-term memory which allows it to maintain and update a representation of the state of the world as it receives new data. For language understanding tasks, it can reason on-the-fly as it reads text, not just when it is required to answer a question or respond as is the case for a Memory Network (Sukhbaatar et al., 2015). 
Like a Neural Turing Machine or Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed size memory and can learn to perform location and content-based read and write operations. However, unlike those models it has a simple parallel architecture in which several memory locations can be updated simultaneously. The EntNet sets a new state-of-the-art on the bAbI tasks, and is the first method to solve all the tasks in the 10k training examples setting. We also demonstrate that it can solve a reasoning task which requires a large number of supporting facts, which other methods are not able to solve, and can generalize past its training horizon. It can also be practically used on large scale datasets such as Children’s Book Test, where it obtains competitive performance, reading the story in a single pass.", "title": "" }, { "docid": "091eedcd69373f99419a745f2215e345", "text": "Society is increasingly reliant upon complex and interconnected cyber systems to conduct daily life activities. From personal finance to managing defense capabilities to controlling a vast web of aircraft traffic, digitized information systems and software packages have become integrated at virtually all levels of individual and collective activity. While such integration has been met with immense increases in efficiency of service delivery, it has also been subject to a diverse body of threats from nefarious hackers, groups, and even state government bodies. Such cyber threats have shifted over time to affect various cyber functionalities, such as with Direct Denial of Service (DDoS), data theft, changes to data code, infection via computer virus, and many others.", "title": "" }, { "docid": "923b4025d22bc146c53fb4c90f43ef72", "text": "In this paper we describe preliminary approaches for contentbased recommendation of Pinterest boards to users. We describe our representation and features for Pinterest boards and users, together with a supervised recommendation model. We observe that features based on latent topics lead to better performance than features based on userassigned Pinterest categories. We also find that using social signals (repins, likes, etc.) can improve recommendation quality.", "title": "" }, { "docid": "885764d7e71711b8f9a086d43c6e4f9a", "text": "In Indian economy, Agriculture is the most important branch and 70 percentage of rural population livelihood depends on agricultural work. Farming is the one of the important part of Agriculture. Crop yield depends on environment’s factors like precipitation, temperature, evapotranspiration, etc. Generally farmers cultivate crop, based on previous experience. But nowadays, the uncertainty increased in environment. So, accurate analysis of historic data of environment parameters should be done for successful farming. To get more harvest, we should also do the analysis of previous cultivation data. The Prediction of crop yield can be done based on historic crop cultivation data and weather data using data mining methods. This paper describes the role of data mining in Agriculture and crop yield prediction. This paper also describes Groundnut crop yield prediction analysis and Naive Bayes Method.", "title": "" }, { "docid": "49802c20c3912143ab371caca7b5c9d5", "text": "Control theory has recently started to be applied to software engineering domain, mostly for managing the behavior of adaptive software systems under external disturbances. 
In general terms, the main advantage of control theory is that it can be formally proven that controllers achieve their goals (with certain characteristics), whereas the price to pay is that controllers and system-to-be-controlled have to be modeled by equations. The investigation of how suited are control theory techniques to address performance problems is, however, still at the beginning. In this paper we devise the main challenges behind the adoption of control theory in the context of Software Performance Engineering applied to adaptive software systems.", "title": "" }, { "docid": "0b087e7e36bef7a6d92b8e44bd22047a", "text": "We investigated whether the dynamics of head and facial movements apart from specific facial expressions communicate affect in infants. Age-appropriate tasks were used to elicit positive and negative affect in 28 ethnically diverse 12-month-old infants. 3D head and facial movements were tracked from 2D video. Strong effects were found for both head and facial movements. For head movement, angular velocity and angular acceleration of pitch, yaw, and roll were higher during negative relative to positive affect. For facial movement, displacement, velocity, and acceleration also increased during negative relative to positive affect. Our results suggest that the dynamics of head and facial movements communicate affect at ages as young as 12 months. These findings deepen our understanding of emotion communication and provide a basis for studying individual differences in emotion in socio-emotional development.", "title": "" }, { "docid": "7b1f880c76d50f9bdec264ac589424c0", "text": "In the software design, protecting a computer system from a plethora of software attacks or malware in the wild has been increasingly important. One branch of research to detect the existence of attacks or malware, there has been much work focused on modeling the runtime behavior of a program. Stemming from the seminal work of Forrest et al., one of the main tools to model program behavior is system call sequences. Unfortunately, however, since mimicry attacks were proposed, program behavior models based solely on system call sequences could no longer ensure the security of systems and require additional information that comes with its own drawbacks. In this paper, we report our preliminary findings in our research to build a mimicry resilient program behavior model that has lesser drawbacks. We employ branch sequences to harden our program behavior model against mimicry attacks while employing hardware features for efficient extraction of such branch information during program runtime. In order to handle the large scale of branch sequences, we also employ LSTM, the de facto standard in deep learning based sequence modeling and report our preliminary experiments on its interaction with program branch sequences.", "title": "" }, { "docid": "99d57cef03e21531be9f9663ec023987", "text": "Anton Schwartz Dept. of Computer Science Stanford University Stanford, CA 94305 Email: schwartz@cs.stanford.edu Reinforcement learning addresses the problem of learning to select actions in order to maximize one's performance in unknown environments. To scale reinforcement learning to complex real-world tasks, such as typically studied in AI, one must ultimately be able to discover the structure in the world, in order to abstract away the myriad of details and to operate in more tractable problem spaces. This paper presents the SKILLS algorithm. 
SKILLS discovers skills, which are partially defined action policies that arise in the context of multiple, related tasks. Skills collapse whole action sequences into single operators. They are learned by minimizing the compactness of action policies, using a description length argument on their representation. Empirical results in simple grid navigation tasks illustrate the successful discovery of structure in reinforcement learning.", "title": "" }, { "docid": "62132ea78d0b5aa844ff25647159eedb", "text": "Gate turn offs (GTOs) have an inherent minimum ON-OFF time, which is needed for their safe operation. For GTO-based three-level or neutral-point-clamped (NPC) inverters, this minimum ON-OFF pulsewidth limitation results in a distortion of the output voltage and current waveforms, especially in the low index modulation region. Some approaches have been previously proposed to compensate for the minimum ON pulse. However, these methods increase the inverter switching losses. Two new methods of pulsewidth-modulation (PWM) control based on: 1) adding a bias to the reference voltage of the inverter and 2) switching patterns are presented. The former method improves the output waveforms, but increases the switching losses; while the latter improves the output waveforms without increasing the switching losses. The fluctuations of the neutral-point voltage are also reduced using this method. The theoretical and practical aspects as well as the experimental results are presented in this paper.", "title": "" }, { "docid": "ed34383cada585951e1dcc62445d08c2", "text": "The increasing volume of e-mail and other technologically enabled communications are widely regarded as a growing source of stress in people’s lives. Yet research also suggests that new media afford people additional flexibility and control by enabling them to communicate from anywhere at any time. Using a combination of quantitative and qualitative data, this paper builds theory that unravels this apparent contradiction. As the literature would predict, we found that the more time people spent handling e-mail, the greater was their sense of being overloaded, and the more e-mail they processed, the greater their perceived ability to cope. Contrary to assumptions of prior studies, we found no evidence that time spent working mediates e-mail-related overload. Instead, e-mail’s material properties entwined with social norms and interpretations in a way that led informants to single out e-mail as a cultural symbol of the overload they experience in their lives. Moreover, by serving as a symbol, e-mail distracted people from recognizing other sources of overload in their work lives. Our study deepens our understanding of the impact of communication technologies on people’s lives and helps untangle those technologies’ seemingly contradictory influences.", "title": "" }, { "docid": "ef39b902bb50be657b3b9626298da567", "text": "We consider the problem of node positioning in ad hoc networks. We propose a distributed, infrastructure-free positioning algorithm that does not rely on GPS (Global Positioning System). Instead, the algorithm uses the distances between the nodes to build a relative coordinate system in which the node positions are computed in two dimensions. Despite the distance measurement errors and the motion of the nodes, the algorithm provides sufficient location information and accuracy to support basic network functions. 
Examples of applications where this algorithm can be used include Location Aided Routing [10] and Geodesic Packet Forwarding [2]. Another example is sensor networks, where mobility is less of a problem. The main contribution of this work is to define and compute relative positions of the nodes in an ad hoc network without using GPS. We further explain how the proposed approach can be applied to wide area ad hoc networks.", "title": "" } ]
scidocsrr
1eeab6430a581042b5b4026277bd91bd
A Novel Robotic Platform for Aerial Manipulation Using Quadrotors as Rotating Thrust Generators
[ { "docid": "c711fa74e32891553404b989c1ee1b44", "text": "This paper presents a fully actuated UAV platform with a nonparallel design. Standard multirotor UAVs equipped with a number of parallel thrusters would result in underactuation. Fighting horizontal wind would require the robot to tilt its whole body toward the direction of the wind. We propose a hexrotor UAV with nonparallel thrusters which results in faster response to disturbances for precision position keeping. A case study is presented to show that hexrotor with a nonparallel design takes less time to resist wind gust than a standard design. We also give the results of a staged peg-in-hole task that measures the rising time of exerting forces using different actuation mechanisms.", "title": "" } ]
[ { "docid": "1197bc22d825a53c2b9e6ff068e10353", "text": "CONTEXT\nPermanent evaluation of end-user satisfaction and continuance intention is a critical issue at each phase of a clinical information system (CIS) project, but most validation studies are concerned with the pre- or early post-adoption phases.\n\n\nOBJECTIVE\nThe purpose of this study was twofold: to validate at the Pompidou University Hospital (HEGP) an information technology late post-adoption model built from four validated models and to propose a unified metamodel of evaluation that could be adapted to each context or deployment phase of a CIS project.\n\n\nMETHODS\nFive dimensions, i.e., CIS quality (CISQ), perceived usefulness (PU), confirmation of expectations (CE), user satisfaction (SAT), and continuance intention (CI) were selected to constitute the CI evaluation model. The validity of the model was tested using the combined answers to four surveys performed between 2011 and 2015, i.e., more than ten years after the opening of HEGP in July 2000. Structural equation modeling was used to test the eight model-associated hypotheses.\n\n\nRESULTS\nThe multi-professional study group of 571 responders consisted of 158 doctors, 282 nurses, and 131 secretaries. The evaluation model accounted for 84% of variance of satisfaction and 53% of CI variance for the period 2011-2015 and for 92% and 69% for the period 2014-2015. In very late post adoption, CISQ appears to be the major determinant of satisfaction and CI. Combining the results obtained at various phases of CIS deployment, a Unified Model of Information System Continuance (UMISC) is proposed.\n\n\nCONCLUSION\nIn a meaningful CIS use situation at HEGP, this study confirms the importance of CISQ in explaining satisfaction and CI. The proposed UMISC model that can be adapted to each phase of CIS deployment could facilitate the necessary efforts of permanent CIS acceptance and continuance evaluation.", "title": "" }, { "docid": "9dbd988e0e7510ddf4fce9d5a216f9d6", "text": "Tooth abutments can be prepared to receive fixed dental prostheses with different types of finish lines. The literature reports different complications arising from tooth preparation techniques, including gingival recession. Vertical preparation without a finish line is a technique whereby the abutments are prepared by introducing a diamond rotary instrument into the sulcus to eliminate the cementoenamel junction and to create a new prosthetic cementoenamel junction determined by the prosthetic margin. This article describes 2 patients whose dental abutments were prepared to receive ceramic restorations using vertical preparation without a finish line.", "title": "" }, { "docid": "4e73acdb2458cbcb30b5ec173d88a1f9", "text": "The research objective of this work was to understand pedestrians’ behavior and interaction with vehicles during pre-crash scenarios that provides critical information on how to improve pedestrian safety. In this study, we recruited 110 cars and their drivers in the greater Indianapolis area for a one year naturalistic driving study starting in March 2012. The drivers were selected based on their geographic, demographic, and driving route representativeness. We used off-the-shelf vehicle black boxes for data recording, which are installed at the front windshield behind the rear-view mirrors. It records highresolution forward-view videos (recording driving views outside of front windshield), GPS information, and G-sensor information. 
We developed category-based multi-stage pedestrian detection and behavior analysis tools to efficiently process this large scale driving dataset. To ensure the accuracy, we incorporated the human-in-loop process to verify the automatic pedestrian detection results. For each pedestrian event, we generate a 5-second video to further study potential conflicts between pedestrians and vehicle. For each detected potential conflict event, we generate a 15second video to analyze pedestrian behavior. We conduct in-depth analysis of pedestrian behavior in regular and near-miss scenarios using the naturalistic data. We observed pedestrian and vehicle interaction videos and studied what scenarios might be more dangerous and could more likely to result in potential conflicts. We observed: 1) Children alone as pedestrians is an elevated risk; 2) three or more adults may be more likely to result in potential conflicts with vehicles than one or two adults; 3) parking lots, communities, school areas, shopping malls, etc. could have more potential conflicts than regular urban/rural driving environments; 4) when pedestrian is crossing road, there is much higher potential conflict than pedestrian walking along/against traffic; 5) There is an elevated risk for pedestrians walking in road (where vehicles can drive by); 6) when pedestrians are jogging, it is much more likely to have potential conflict than walking or standing.; and 7) it is more likely to have potential conflict at cross walk and junction than other road types. Furthermore, we estimated the pedestrian appearance points of all potential conflict events and time to collision (TTC). Most potential conflict events have a TTC value ranging from 1 second to 6 seconds, with the range of 2 seconds to 4 seconds being associated with highest percentages of all the cases. The mean value of TTC is 3.84 seconds with standard deviation of 1.74 seconds. To date, we have collected about 65TB of driving data with about 1.1 million miles. We have processed about 50% of the data. We are continuously working on the data collection and processing. There could be some changes in our observation results after including all data. But the existing analysis is based on a quite large-scale data and would provide a good estimation.", "title": "" }, { "docid": "d0370d33988698cf69e3b032aff53f49", "text": "The abundance of discussion forums, Weblogs, e-commerce portals, social networking, product review sites and content sharing sites has facilitated flow of ideas and expression of opinions. The user-generated text content on Internet and Web 2.0 social media can be a rich source of sentiments, opinions, evaluations, and reviews. Sentiment analysis or opinion mining has become an open research domain that involves classifying text documents based on the opinion expressed, about a given topic, being positive or negative. This paper proposes a sentiment classification model using back-propagation artificial neural network (BPANN). Information Gain, and three popular sentiment lexicons are used to extract sentiment representing features that are then used to train and test the BPANN. This novel approach combines the strength of BPANN in classification accuracy with intrinsic subjectivity knowledge available in the sentiment lexicons. 
The results obtained from experiments on the movie and hotel review corpora have shown that the proposed approach has been able to reduce dimensionality, while producing accurate results for sentiment-based classification of text.", "title": "" }, { "docid": "b6a8abc8946f8b13a22e3bacd2a6caa5", "text": "The aim of this research was to determine the sun protection factor (SPF) of sunscreen emulsions containing chemical and physical sunscreens by ultraviolet spectrophotometry. Ten different commercially available samples of sunscreen emulsions of various manufacturers were evaluated. The SPF labeled values were in the range of 8 to 30. The SPF values of 30% of the analyzed samples are in close agreement with the labeled SPF, 30% presented SPF values above the labeled amount and 40% presented SPF values under the labeled amount. The proposed spectrophotometric method is simple and rapid for the in vitro determination of SPF values of sunscreen emulsions.", "title": "" }, { "docid": "6c0a6c095516829189a51ac3dc49619f", "text": "We discuss non-volatile SRAM cells capable of storing multiple bits and their applications as multi-context configuration memory. The cells are based on the standard 6T SRAM with multiple pairs of programmable resistors such as magnetic tunnel junction or resistive memory elements. In one of the cell designs the active state of the SRAM can be switched in one clock cycle by the use of an additional equalizer transistor, without the need to turn off the power to the cell, allowing real-time and low energy switching between different contexts in reconfigurable circuits. Other variations of the multistate non-volatile SRAM cells are also discussed.", "title": "" }, { "docid": "9df6a4c0143cfc3a0b1263b1fa07e810", "text": "In this paper, we propose a new fast dehazing method from single image based on filtering. The basic idea is to compute an accurate atmosphere veil that is not only smoother, but also respects the depth information of the underlying image. We first obtain an initial atmosphere scattering light through median filtering, then refine it by guided joint bilateral filtering to generate a new atmosphere veil which removes the abundant texture information and recovers the depth edge information. Finally, we solve the scene radiance using the atmosphere attenuation model. Compared with existing state-of-the-art dehazing methods, our method achieves a better dehazing effect in distant scenes and in places where depth changes abruptly. Our method is fast, with linear complexity in the number of pixels of the input image; furthermore, as our method can be performed in parallel, it can be further accelerated using a GPU, which makes it applicable for real-time requirements.", "title": "" }, { "docid": "8777063bfba463c05e46704f0ad2c672", "text": "Amazon's Mechanical Turk is an online labor market where requesters post jobs and workers choose which jobs to do for pay. The central purpose of this article is to demonstrate how to use this Web site for conducting behavioral research and to lower the barrier to entry for researchers who could benefit from this platform. We describe general techniques that apply to a variety of types of research and experiments across disciplines. We begin by discussing some of the advantages of doing experiments on Mechanical Turk, such as easy access to a large, stable, and diverse subject pool, the low cost of doing experiments, and faster iteration between developing theory and executing experiments.
While other methods of conducting behavioral research may be comparable to or even better than Mechanical Turk on one or more of the axes outlined above, we will show that when taken as a whole Mechanical Turk can be a useful tool for many researchers. We will discuss how the behavior of workers compares with that of experts and laboratory subjects. Then we will illustrate the mechanics of putting a task on Mechanical Turk, including recruiting subjects, executing the task, and reviewing the work that was submitted. We also provide solutions to common problems that a researcher might face when executing their research on this platform, including techniques for conducting synchronous experiments, methods for ensuring high-quality work, how to keep data private, and how to maintain code security.", "title": "" }, { "docid": "1ce7ad15a98b7074ad55a6d7889368b4", "text": "Recognizing and extracting various emotions from facial expressions, and then validating those emotions, has become important for improving overall human-computer interaction, so this paper also describes emotion recognition techniques. Emotion recognition has become a progressive research area and plays a major role in human-computer interaction. For any facial expression recognition, it is necessary to extract the facial features that can be used to detect the expression. Principal Component Analysis is used for feature extraction. A survey of various techniques used in emotion recognition, such as PCA and LBP, is presented in this paper along with their performance. The goal of this paper is to compare these recognition techniques across different approaches and to describe the general steps of recognizing emotion from various facial expressions. Keywords: Emotion Recognition, Facial Expression", "title": "" }, { "docid": "6c76fcf20405c6826060821ac7c662e8", "text": "A perception system for pedestrian detection in urban scenarios using information from a LIDAR and a single camera is presented. Two sensor fusion architectures are described, a centralized and a decentralized one. In the former, the fusion process occurs at the feature level, i.e., features from LIDAR and vision spaces are combined in a single vector for posterior classification using a single classifier. In the latter, two classifiers are employed, one per sensor-feature space, which were offline selected based on information theory and fused by a trainable fusion method applied over the likelihoods provided by the component classifiers. The proposed schemes for sensor combination, and more specifically the trainable fusion method, lead to enhanced detection performance and, in addition, maintenance of false-alarms under tolerable values in comparison with single-based classifiers. Experimental results highlight the performance and effectiveness of the proposed pedestrian detection system and the related sensor data combination strategies.", "title": "" }, { "docid": "a8d78c6fd0f2f5792d5eaab3ddd577dc", "text": "This paper describes the new Java memory model, which has been revised as part of Java 5.0. The model specifies the legal behaviors for a multithreaded program; it defines the semantics of multithreaded Java programs and partially determines legal implementations of Java virtual machines and compilers. The new Java model provides a simple interface for correctly synchronized programs -- it guarantees sequential consistency to data-race-free programs.
Its novel contribution is requiring that the behavior of incorrectly synchronized programs be bounded by a well defined notion of causality. The causality requirement is strong enough to respect the safety and security properties of Java and weak enough to allow standard compiler and hardware optimizations. To our knowledge, other models are either too weak because they do not provide for sufficient safety/security, or are too strong because they rely on a strong notion of data and control dependences that precludes some standard compiler transformations.Although the majority of what is currently done in compilers is legal, the new model introduces significant differences, and clearly defines the boundaries of legal transformations. For example, the commonly accepted definition for control dependence is incorrect for Java, and transformations based on it may be invalid.In addition to providing the official memory model for Java, we believe the model described here could prove to be a useful basis for other programming languages that currently lack well-defined models, such as C++ and C#.", "title": "" }, { "docid": "6b16d2baafca3b8e479d9b34bd3f1ea7", "text": "Companies are increasingly allocating more of their marketing spending to social media programs. Yet there is little research about how social media use is associated with consumer–brand relationships. We conducted three studies to explore how individual and national differences influence the relationship between social media use and customer brand relationships. The first study surveyed customers in France, the U.K. and U.S. and compared those who engage with their favorite brands via social media with those who do not. The findings indicated that social media use was positively related with brand relationship quality and the effect was more pronounced with high anthropomorphism perceptions (the extent to which consumers' associate human characteristics with brands). Two subsequent experiments further validated these findings and confirmed that cultural differences, specifically uncertainty avoidance, moderated these results. We obtained robust and convergent results from survey and experimental data using both student and adult consumer samples and testing across three product categories (athletic shoes, notebook computers, and automobiles). The results offer cross-national support for the proposition that engaging customers via social media is associated with higher consumer–brand relationships and word of mouth communications when consumers anthropomorphize the brand and they avoid uncertainty. © 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "2a7406d0b2ce795bb09b042497680b33", "text": "In-memory databases require careful tuning and many engineering tricks to achieve good performance. Such database performance engineering is hard: a plethora of data and hardware-dependent optimization techniques form a design space that is difficult to navigate for a skilled engineer – even more so for a query compiler. To facilitate performanceoriented design exploration and query plan compilation, we present Voodoo, a declarative intermediate algebra that abstracts the detailed architectural properties of the hardware, such as multior many-core architectures, caches and SIMD registers, without losing the ability to generate highly tuned code. 
Because it consists of a collection of declarative, vector-oriented operations, Voodoo is easier to reason about and tune than low-level C and related hardware-focused extensions (Intrinsics, OpenCL, CUDA, etc.). This enables our Voodoo compiler to produce (OpenCL) code that rivals and even outperforms the fastest state-of-the-art in memory databases for both GPUs and CPUs. In addition, Voodoo makes it possible to express techniques as diverse as cacheconscious processing, predication and vectorization (again on both GPUs and CPUs) with just a few lines of code. Central to our approach is a novel idea we termed control vectors, which allows a code generating frontend to expose parallelism to the Voodoo compiler in a abstract manner, enabling portable performance across hardware platforms. We used Voodoo to build an alternative backend for MonetDB, a popular open-source in-memory database. Our backend allows MonetDB to perform at the same level as highly tuned in-memory databases, including HyPeR and Ocelot. We also demonstrate Voodoo’s usefulness when investigating hardware conscious tuning techniques, assessing their performance on different queries, devices and data.", "title": "" }, { "docid": "5f51d56c23a89d514294e4708c9f4445", "text": "Defect prediction techniques could potentially help us to focus quality-assurance efforts on the most defect-prone files. Modern statistical tools make it very easy to quickly build and deploy prediction models. Software metrics are at the heart of prediction models; understanding how and especially why different types of metrics are effective is very important for successful model deployment. In this paper we analyze the applicability and efficacy of process and code metrics from several different perspectives. We build many prediction models across 85 releases of 12 large open source projects to address the performance, stability, portability and stasis of different sets of metrics. Our results suggest that code metrics, despite widespread use in the defect prediction literature, are generally less useful than process metrics for prediction. Second, we find that code metrics have high stasis; they dont change very much from release to release. This leads to stagnation in the prediction models, leading to the same files being repeatedly predicted as defective; unfortunately, these recurringly defective files turn out to be comparatively less defect-dense.", "title": "" }, { "docid": "78fa87e54c9f6c49101e0079013792e2", "text": "The NCSM Journal of Mathematics Education Leadership is published at least twice yearly, in the spring and fall. Permission to photocopy material from the NCSM Journal of Mathematics Education Leadership is granted for instructional use when the material is to be distributed free of charge (or at cost only), provided that it is duplicated with the full credit given to the authors of the materials and the NCSM Journal of Mathematics Education Leadership. This permission does not apply to copyrighted articles reprinted in the NCSM Journal of Mathematics Education Leadership. The editors of the NCSM Journal of Mathematics Education Leadership are interested in manuscripts that address concerns of leadership in mathematics rather than those of content or delivery. Editors are interested in publishing articles from a broad spectrum of formal and informal leaders who practice at local, regional, national, and international levels. 
Categories for submittal include: Note: The last two categories are intended for short pieces of 2 to 3 pages in length. Submittal of items should be done electronically to the Journal editor. Do not put any author identification in the body of the item being submitted, but do include author information as you would like to see it in the Journal. Items submitted for publication will be reviewed by two members of the NCSM Review Panel and one editor with comments and suggested revisions sent back to the author at least six weeks before publication. Final copy must be agreed to at least three weeks before publication. Cover image: A spiral vortex generated with fractal algorithms • Strengthening mathematics education leadership through the dissemination of knowledge related to research, issues, trends, programs, policy, and practice in mathematics education • Fostering inquiry into key challenges of mathematics education leadership • Raising awareness about key challenges of mathematics education leadership, in order to influence research, programs, policy, and practice • Engaging the attention and support of other education stakeholders, and business and government, in order to broaden as well as strengthen mathematics education leadership E arlier this year, NCSM released a new mission and vision statement. Our mission speaks to our commitment to \" support and sustain improved student achievement through the development of leadership skills and relationships among current and future mathematics leaders. \" Our vision statement challenges us as the leaders in mathematics education to collaborate with all stakeholders and develop leadership skills that will lead to improved …", "title": "" }, { "docid": "f19057578e0fce86e57d762d5805e676", "text": "A polymer network of intranuclear lamin filaments underlies the nuclear envelope and provides mechanical stability to the nucleus in metazoans. Recent work demonstrates that the expression of A-type lamins scales positively with the stiffness of the cellular environment, thereby coupling nuclear and extracellular mechanics. Using the spectrin-actin network at the erythrocyte plasma membrane as a model, we contemplate how the relative stiffness of the nuclear scaffold impinges on the growing number of interphase-specific nuclear envelope remodeling events, including recently discovered, nuclear envelope-specialized quality control mechanisms. We suggest that a stiffer lamina impedes these remodeling events, necessitating local lamina remodeling and/or concomitant scaling of the efficacy of membrane-remodeling machineries that act at the nuclear envelope.", "title": "" }, { "docid": "d2a04795fa95d2534b000dbf211cd4b9", "text": "Tracking multiple targets is a challenging problem, especially when the targets are “identical”, in the sense that the same model is used to describe each target. In this case, simply instantiating several independent 1-body trackers is not an adequate solution, because the independent trackers tend to coalesce onto the best-fitting target. This paper presents an observation density for tracking which solves this problem by exhibiting a probabilistic exclusion principle. Exclusion arises naturally from a systematic derivation of the observation density, without relying on heuristics. Another important contribution of the paper is the presentation of partitioned sampling, a new sampling method for multiple object tracking. 
Partitioned sampling avoids the high computational load associated with fully coupled trackers, while retaining the desirable properties of coupling.", "title": "" }, { "docid": "f1a5c64dae0b41324ffeef568769e6e5", "text": "Media content has become the major traffic of Internet and will keep on increasing rapidly. Various innovative media applications, services, devices have emerged and people tend to consume more media contents. We are meeting a media revolution. But media processing requires great capacity and capability of computing resources. Meanwhile cloud computing has emerged as a prosperous technology and the cloud computing platform has become a fundamental facility providing various services, great computing power, massive storage and bandwidth with modest cost. The integration of cloud computing and media processing is therefore a natural choice for both of them, and hence comes forth the media cloud. In this paper we make a comprehensive overview on the recent media cloud research work. We first discuss the challenges of the media cloud, and then summarize its architecture, the processing, and its storage and delivery mechanisms. As the result, we propose a new architecture for the media cloud. At the end of this paper, we make suggestions on how to build a media cloud and propose several future research topics as the conclusion.", "title": "" }, { "docid": "2997fc35a86646d8a43c16217fc8079b", "text": "During sudden onset crisis events, the presence of spam, rumors and fake content on Twitter reduces the value of information contained on its messages (or “tweets”). A possible solution to this problem is to use machine learning to automatically evaluate the credibility of a tweet, i.e. whether a person would deem the tweet believable or trustworthy. This has been often framed and studied as a supervised classification problem in an off-line (post-hoc) setting. In this paper, we present a semi-supervised ranking model for scoring tweets according to their credibility. This model is used in TweetCred , a real-time system that assigns a credibility score to tweets in a user’s timeline. TweetCred , available as a browser plug-in, was installed and used by 1,127 Twitter users within a span of three months. During this period, the credibility score for about 5.4 million tweets was computed, allowing us to evaluate TweetCred in terms of response time, effectiveness and usability. To the best of our knowledge, this is the first research work to develop a real-time system for credibility on Twitter, and to evaluate it on a user base of this size.", "title": "" }, { "docid": "b48d9e46a22fce04dac6949b08a7673c", "text": "Khadtare Y, Chaudhari A, Waghmare P, Prashant S. (laser-assisted new attachment procedure) The LANAP Protocol A Minimally Invasive Bladeless Procedure. J Periodontol Med Clin Pract 2014;01: 264-271 1 2 2 3 Dr. Yogesh Khadtare , Dr. Amit Chaudhari , Dr. Pramod Waghmare , Dr. Shekhar Prashant Review Article Journal of Periodontal Medicine & Clinical Practice JPMCP Journal of Periodontal Medicine & Clinical Practice", "title": "" } ]
scidocsrr
85a324047d1cc06703dadf472ae81b2b
Valence, arousal and dominance in the EEG during game play
[ { "docid": "b0a1cdf37eb1d78262ed663974a36793", "text": "OBJECTIVE\nThe present study aimed at examining the time course and topography of oscillatory brain activity and event-related potentials (ERPs) in response to laterally presented affective pictures.\n\n\nMETHODS\nElectroencephalography was recorded from 129 electrodes in 10 healthy university students during presentation of pictures from the international affective picture system. Frequency measures and ERPs were obtained for pleasant, neutral, and unpleasant pictures.\n\n\nRESULTS\nIn accordance with previous reports, a modulation of the late positive ERP wave at parietal recording sites was found as a function of emotional arousal. Early mid gamma band activity (GBA; 30-45 Hz) at 80 ms post-stimulus was enhanced in response to aversive stimuli only, whereas the higher GBA (46-65 Hz) at 500 ms showed an enhancement of arousing, compared to neutral pictures. ERP and late gamma effects showed a pronounced right-hemisphere preponderance, but differed in terms of topographical distribution.\n\n\nCONCLUSIONS\nLate gamma activity may represent a correlate of widespread cortical networks processing different aspects of emotionally arousing visual objects. In contrast, differences between affective categories in early gamma activity might reflect fast detection of aversive stimulus features.", "title": "" }, { "docid": "a13bba294d712bbf5d99a7db5369cb0c", "text": "The present study was designed to test differential hemispheric activation induced by emotional stimuli in the gamma band range (30-90 Hz). Subjects viewed slides with differing emotional content (from the International Affective Picture System). A significant valence by hemisphere interaction emerged in the gamma band from 30-50 Hz. Other bands, including alpha and beta, did not show such an interaction. Previous hypotheses suggested that the left hemisphere is more involved in positive affective processing as compared to the right hemisphere, while the latter dominates during negative emotions. Contrary to this expectation, the 30-50 Hz band showed relatively more power for negative valence over the left temporal region as compared to the right and a laterality shift towards the right hemisphere for positive valence. In addition, emotional processing enhanced gamma band power at right frontal electrodes regardless of the particular valence as compared to processing neutral pictures. The extended distribution of specific activity in the gamma band may be the signature of cell assemblies with members in limbic, temporal and frontal neocortical structures that differ in spatial distribution depending on the particular type of emotional processing.", "title": "" } ]
[ { "docid": "bf1f9f28d7077909851c41eaed31e0db", "text": "Often the best performing supervised learning models are ensembles of hundreds or thousands of base-level classifiers. Unfortunately, the space required to store this many classifiers, and the time required to execute them at run-time, prohibits their use in applications where test sets are large (e.g. Google), where storage space is at a premium (e.g. PDAs), and where computational power is limited (e.g. hea-ring aids). We present a method for \"compressing\" large, complex ensembles into smaller, faster models, usually without significant loss in performance.", "title": "" }, { "docid": "0eb659fd66ad677f90019f7214aae7e8", "text": "In this article a relational database schema for a bibliometric database is developed. After the introduction explaining the motivation to use relational databases in bibliometrics, an overview of the related literature is given. A review of typical bibliometric questions serves as an informal requirement analysis. The database schema is developed as an entity-relationship diagram using the structural information typically found in scientific articles. Several SQL queries for the tasks presented in the requirement analysis show the usefulness of the developed database schema.", "title": "" }, { "docid": "6f3938e2951996d4f41a5fa6e8c71aad", "text": "Online Social Networks (OSNs), such as Facebook and Twitter, have become an integral part of our daily lives. There are hundreds of OSNs, each with its own focus in that each offers particular services and functionalities. Recent studies show that many OSN users create several accounts on multiple OSNs using the same or different personal information. Collecting all the available data of an individual from several OSNs and fusing it into a single profile can be useful for many purposes. In this paper, we introduce novel machine learning based methods for solving Entity Resolution (ER), a problem for matching user profiles across multiple OSNs. The presented methods are able to match between two user profiles from two different OSNs based on supervised learning techniques, which use features extracted from each one of the user profiles. By using the extracted features and supervised learning techniques, we developed classifiers which can perform entity matching between two profiles for the following scenarios: (a) matching entities across two OSNs; (b) searching for a user by similar name; and (c) de-anonymizing a user’s identity. The constructed classifiers were tested by using data collected from two popular OSNs, Facebook and Xing. We then evaluated the classifiers’ performances using various evaluation measures, such as true and false positive rates, accuracy, and the Area Under the receiver operator Curve (AUC). The constructed classifiers were evaluated and their classification performance measured by AUC was quite remarkable, with an AUC of up to 0.982 and an accuracy of up to 95.9% in identifying user profiles across two OSNs.", "title": "" }, { "docid": "f5964bb7a6bca95fae0c3f923d3165fc", "text": "The growing number of storage security breaches as well as the need to adhere to government regulations is driving the need for greater storage protection. However, there is the lack of a comprehensive process to designing storage protection solutions. Designing protection for storage systems is best done by utilizing proactive system engineering rather than reacting with ad hoc countermeasures to the latest attack du jour. 
The purpose of threat modeling is to organize system threats and vulnerabilities into general classes to be addressed with known storage protection techniques. Although there has been prior work on threat modeling primarily for software applications, to our knowledge this is the first attempt at domain-specific threat modeling for storage systems. We discuss protection challenges unique to storage systems and propose two different processes to creating a threat model for storage systems: one based on classical security principles Confidentiality, Integrity, Availability, Authentication, or CIAA) and another based on the Data Lifecycle Model. It is our hope that this initial work will start a discussion on how to better design and implement storage protection solutions against storage threats.", "title": "" }, { "docid": "a9d94467bbcb01a84c84fa5c8981076f", "text": "Gavilea australis is a terrestrial orchid endemic from insular south Argentina and Chile. Meeting aspects of mycorrhizal fungi identity and compatibility in this orchid species is essential for propagation and conservation purposes. These knowledge represent also a first approach to elucidate the mycorrhizal specificity of this species. In order to evaluate both the mycorrhizal compatibility and the symbiotic seed germination of G. australis, we isolated and identified its root endophytic fungal strains as well as those from two sympatric species: Gavilea lutea and Codonorchis lessonii. In addition, we tested two other strains isolated from allopatric terrestrial orchid species from central Argentina. All fungal strains formed coilings and pelotons inside protocorms and promoted, at varying degrees, seed germination, and protocorm development until seedlings had two to three leaves. These results suggest a low mycorrhizal specificity of G. australis and contribute to a better knowledge of the biology of this orchid as well as of other sympatric Patagonian orchid species, all of them currently under serious risk of extinction.", "title": "" }, { "docid": "4545a74d04769f6b251da9da7b357d09", "text": "Despite a long history of research and debate, there is still no standard definition of intelligence. This has lead some to believe that intelligence may be approximately described, but cannot be fully defined. We believe that this degree of pessimism is too strong. Although there is no single standard definition, if one surveys the many definitions that have been proposed, strong similarities between many of the definitions quickly become obvious. In many cases different definitions, suitably interpreted, actually say the same thing but in different words. This observation lead us to believe that a single general and encompassing definition for arbitrary systems was possible. Indeed we have constructed a formal definition of intelligence, called universal intelligence [21], which has strong connections to the theory of optimal learning agents [19]. Rather than exploring very general formal definitions of intelligence, here we will instead take the opportunity to present the many informal definitions that we have collected over the years. Naturally, compiling a complete list would be impossible as many definitions of intelligence are buried deep inside articles and books. Nevertheless, the 70 odd definitions presented below are, to the best of our knowledge, the largest and most well referenced collection there is. 
We continue to add to this collection as we discover further definitions, and keep the most up-to-date version of the collection available online [22]. If you know of additional definitions that we could add, please send us an email.", "title": "" }, { "docid": "74dd6f8fbc0469757d00e95b0aeeab65", "text": "To date, no short scale exists with strong psychometric properties that can assess problematic pornography consumption based on an overarching theoretical background. The goal of the present study was to develop a brief scale, the Problematic Pornography Consumption Scale (PPCS), based on Griffiths's (2005) six-component addiction model that can distinguish between nonproblematic and problematic pornography use. The PPCS was developed using an online sample of 772 respondents (390 females, 382 males; Mage = 22.56, SD = 4.98 years). Creation of items was based on previous problematic pornography use instruments and on the definitions of factors in Griffiths's model. A confirmatory factor analysis (CFA) was carried out (because the scale is based on a well-established theoretical model), leading to an 18-item second-order factor structure. The reliability of the PPCS was excellent, and measurement invariance was established. In the current sample, 3.6% of the users belonged to the at-risk group. Based on sensitivity and specificity analyses, we identified an optimal cutoff to distinguish between problematic and nonproblematic pornography users. The PPCS is a multidimensional scale of problematic pornography use with a strong theoretical basis that also has strong psychometric properties in terms of factor structure and reliability.", "title": "" }, { "docid": "c021904cff1cbef8ab62cc3fe0502a7e", "text": "Light-emitting diodes (LEDs), which will be increasingly used in lighting technology, will also allow for distribution of broadband optical wireless signals. Visible-light communication (VLC) using white LEDs offers several advantages over the RF-based wireless systems, i.e., license-free spectrum, low power consumption, and higher privacy. Mostly, optical wireless can provide much higher data rates. In this paper, we demonstrate a VLC system based on a white LED for indoor broadband wireless access. After investigating the nonlinear effects of the LED and the power amplifier, a data rate of 1 Gb/s has been achieved at the standard illuminance level, by using an optimized discrete multitone modulation technique and adaptive bit- and power-loading algorithms. The bit-error ratio of the received data was $1.5\cdot 10^{-3}$, which is within the limit of common forward error correction (FEC) coding. These results represent twice the highest capacity that had been previously obtained.", "title": "" }, { "docid": "3041c6026ea9e6bd0d7b80e99d925e31", "text": "Against the background of cross-border e-commerce, this article analyzes the operation of cross-border e-commerce logistics in China. Firstly, this paper illustrates the operational characteristics of cross-border e-commerce logistics and then analyzes some aspects of it, such as operations and logistics cost management. Secondly, this paper analyzes existing problems in cross-border e-commerce logistics based on the development of cross-border e-commerce logistics in China.
Finally, some suggestions on cross-border e-commerce logistics operation are put forward from two aspects: the macro level of cross-border e-commerce and the micro level of the cross-border e-commerce enterprise.", "title": "" }, { "docid": "f3bc3e8c34574be5db727acc1aa72e64", "text": "In this paper we investigate possible ways to improve the energy efficiency of a general purpose microprocessor. We show that the energy of a processor depends on its performance, so we chose the energy-delay product to compare different processors. To improve the energy-delay product we explore methods of reducing energy consumption that do not lead to performance loss (i.e., wasted energy), and explore methods to reduce delay by exploiting instruction level parallelism. We found that careful design reduced the energy dissipation by almost 25%. Pipelining can give approximately a 2x improvement in energy-delay product. Superscalar issue, however, does not improve the energy-delay product any further since the overhead required offsets the gains in performance. Further improvements will be hard to come by since a large fraction of the energy (5040%) is dissipated in the clock network and the on-chip memories. Thus, the efficiency of processors will depend more on the technology being used and the algorithm chosen by the programmer than the micro-architecture.", "title": "" }, { "docid": "15b05bdc1310d038110b545686082c98", "text": "The class of materials combining high electrical or thermal conductivity, optical transparency and flexibility is crucial for the development of many future electronic and optoelectronic devices. Silver nanowire networks show very promising results and represent a viable alternative to the commonly used, scarce and brittle indium tin oxide. The science and technology research of such networks is reviewed to provide a better understanding of the physical and chemical properties of this nanowire-based material while opening attractive new applications.", "title": "" }, { "docid": "ae73bdfbfe949201036f00820f20a086", "text": "Increasing efficiency by improving locomotion methods is a key issue for underwater robots. Moreover, a number of different control design challenges must be solved to realize operational swimming robots for underwater tasks. This article proposes and experimentally validates a straightline-path-following controller for biologically inspired swimming snake robots. In particular, a line-of-sight (LOS) guidance law is presented, which is combined with a sinusoidal gait pattern and a directional controller that steers the robot toward and along the desired path. The performance of the path-following controller is investigated through experiments with a physical underwater snake robot for both lateral undulation and eel-like motion. In addition, fluid parameter identification is performed, and simulation results based on the identified fluid coefficients are presented to obtain a back-to-back comparison with the motion of the physical robot during the experiments. The experimental results show that the proposed control strategy successfully steers the robot toward and along the desired path for both lateral undulation and eel-like motion patterns.", "title": "" }, { "docid": "ef74392a9681d16b14970740cbf85191", "text": "We propose an efficient physics-based method for dexterous ‘real hand’ - ‘virtual object’ interaction in Virtual Reality environments.
Our method is based on the Coulomb friction model, and we show how to efficiently implement it in a commodity VR engine for realtime performance. This model enables very convincing simulations of many types of actions such as pushing, pulling, grasping, or even dexterous manipulations such as spinning objects between fingers without restrictions on the objects' shapes or hand poses. Because it is an analytic model, we do not require any prerecorded data, in contrast to previous methods. For the evaluation of our method, we conducted a pilot study that shows that our method is perceived as more realistic and natural, and allows for more diverse interactions. Further, we evaluate the computational complexity of our method to show real-time performance in VR environments.", "title": "" }, { "docid": "d76b4c234b72e0bf8615f224d5281e66", "text": "Data centers are the heart of the global economy. In the mid-1990s, the costs of these large computing facilities were dominated by the costs of the information technology (IT) equipment that they housed, but no longer. As the electrical power used by IT equipment per dollar of equipment cost has increased, the annualized facility costs associated with powering and cooling IT equipment have in some cases grown to equal the annualized capital costs of the IT equipment itself. The trend towards ever more electricity-intensive IT equipment continues, which means that direct IT equipment acquisition costs will be a less important determinant of the economics of computing services in the future. Consider Figure ES-1, which shows the importance of different data center cost components as a function of power use per thousand dollars of server cost. If power per server cost continues to increase, the indirect power-related infrastructure costs will soon exceed the annualized direct cost of purchasing the IT equipment in the data center. Ken Brill of the Uptime Institute has called these trends \"the economic breakdown of Moore's Law\", highlighting the growing importance of power-related indirect costs to the overall economics of information technology. The industry has in general assumed that the cost reductions and growth in computing speed related to Moore's law would continue unabated for years to come, and this may be true at the level of individual server systems. Unfortunately, far too little attention has been paid to the true total costs for data center facilities, in which the power-related indirect costs threaten to slow the cost reductions from Moore's law. These trends have important implications for the design, construction and operation of data centers. The companies delivering so-called \"cloud computing\" services have been aware of these economic trends for years, though the sophistication of their responses to them has varied. Most other companies that own data centers, for which computing is not their core business, have significantly lagged behind the vertically organized large-scale computing providers in addressing these issues. There are technical solutions for improving data center efficiency but the most important and most neglected solutions relate to institutional changes that can help companies focus on reducing the total costs of computing services. The first steps, of course, are to measure costs in a comprehensive way, eliminate institutional impediments, and reward those who successfully reduce these costs.
This article assesses …", "title": "" }, { "docid": "dbdda952c63b7b7a4f8ce68f806e5238", "text": "This paper examines how real-time information gathered as part of intelligent transportation systems can be used to predict link travel times for one through five time periods ahead (of 5-min duration). The study employed a spectral basis artificial neural network (SNN) that utilizes a sinusoidal transformation technique to increase the linear separability of the input features. Link travel times from Houston that had been collected as part of the automatic vehicle identification system of the TranStar system were used as a test bed. It was found that the SNN outperformed a conventional artificial neural network and gave similar results to that of modular neural networks. However, the SNN requires significantly less effort on the part of the modeler than modular neural networks. The results of the best SNN were compared with conventional link travel time prediction techniques including a Kalman filtering model, exponential smoothing model, historical profile, and realtime profile. It was found that the SNN gave the best overall results.", "title": "" }, { "docid": "266625d5f1c658849d34514d5dc9586f", "text": "Hand written digit recognition is highly nonlinear problem. Recognition of handwritten numerals plays an active role in day to day life now days. Office automation, e-governors and many other areas, reading printed or handwritten documents and convert them to digital media is very crucial and time consuming task. So the system should be designed in such a way that it should be capable of reading handwritten numerals and provide appropriate response as humans do. However, handwritten digits are varying from person to person because each one has their own style of writing, means the same digit or character/word written by different writer will be different even in different languages. This paper presents survey on handwritten digit recognition systems with recent techniques, with three well known classifiers namely MLP, SVM and k-NN used for classification. This paper presents comparative analysis that describes recent methods and helps to find future scope.", "title": "" }, { "docid": "499d11cefeb1b086f4749310de71385f", "text": "Non-volatile RAM (NVRAM) will fundamentally change in-memory databases as data structures do not have to be explicitly backed up to hard drives or SSDs, but can be inherently persistent in main memory. To guarantee consistency even in the case of power failures, programmers need to ensure that data is flushed from volatile CPU caches where it would be susceptible to power outages to NVRAM.\n In this paper, we present the NVC-Hashmap, a lock-free hashmap that is used for unordered dictionaries and delta indices in in-memory databases. The NVC-Hashmap is then evaluated in both stand-alone and integrated database benchmarks and compared to a B+-Tree based persistent data structure.", "title": "" }, { "docid": "1651591161e940a55a41295aa05fc9a7", "text": "We provide finite-sample analysis of a general framework for using k-nearest neighbor statistics to estimate functionals of a nonparametric continuous probability density, including entropies and divergences. Rather than plugging a consistent density estimate (which requires k → ∞ as the sample size n → ∞) into the functional of interest, the estimators we consider fix k and perform a bias correction. 
This is more efficient computationally, and, as we show in certain cases, statistically, leading to faster convergence rates. Our framework unifies several previous estimators, for most of which ours are the first finite sample guarantees.", "title": "" }, { "docid": "8baa6af3ee08029f0a555e4f4db4e218", "text": "We introduce several probabilistic models for learning the lexicon of a semantic parser. Lexicon learning is the first step of training a semantic parser for a new application domain and the quality of the learned lexicon significantly affects both the accuracy and efficiency of the final semantic parser. Existing work on lexicon learning has focused on heuristic methods that lack convergence guarantees and require significant human input in the form of lexicon templates or annotated logical forms. In contrast, our probabilistic models are trained directly from question/answer pairs using EM and our simplest model has a concave objective that guarantees convergence to a global optimum. An experimental evaluation on a set of 4th grade science questions demonstrates that our models improve semantic parser accuracy (35-70% error reduction) and efficiency (4-25x more sentences per second) relative to prior work despite using less human input. Our models also obtain competitive results on GEO880 without any datasetspecific engineering.", "title": "" }, { "docid": "a30de4a213fe05c606fb16d204b9b170", "text": "– The recent work on cross-country regressions can be compared to looking at “a black cat in a dark room”. Whether or not all this work has accomplished anything on the substantive economic issues is a moot question. But the search for “a black cat ” has led to some progress on the econometric front. The purpose of this paper is to comment on this progress. We discuss the problems with the use of cross-country panel data in the context of two problems: The analysis of economic growth and that of the purchasing power parity (PPP) theory. A propos de l’emploi des méthodes de panel sur des données inter-pays RÉSUMÉ. – Les travaux récents utilisant des régressions inter-pays peuvent être comparés à la recherche d'« un chat noir dans une pièce sans lumière ». La question de savoir si ces travaux ont apporté quelque chose de significatif à la connaissance économique est assez controversée. Mais la recherche du « chat noir » a conduit à quelques progrès en économétrie. L'objet de cet article est de discuter de ces progrès. Les problèmes posés par l'utilisation de panels de pays sont discutés dans deux contextes : celui de la croissance économique et de la convergence d'une part ; celui de la théorie de la parité des pouvoirs d'achat d'autre part. * G.S. MADDALA: Department of Economics, The Ohio State University. I would like to thank M. NERLOVE, P. SEVESTRE and an anonymous referee for helpful comments. Responsability for the omissions and any errors is my own. ANNALES D’ÉCONOMIE ET DE STATISTIQUE. – N° 55-56 – 1999 « The Gods love the obscure and hate the obvious » BRIHADARANYAKA UPANISHAD", "title": "" } ]
scidocsrr
93137284f6d6e4fc26d18557337ce0a3
Topic Models Conditioned on Arbitrary Features with Dirichlet-multinomial Regression
[ { "docid": "6b855b55f22de3e3f65ce56a69c35876", "text": "This paper presents an LDA-style topic model that captures not only the low-dimensional structure of data, but also how the structure changes over time. Unlike other recent work that relies on Markov assumptions or discretization of time, here each topic is associated with a continuous distribution over timestamps, and for each generated document, the mixture distribution over topics is influenced by both word co-occurrences and the document's timestamp. Thus, the meaning of a particular topic can be relied upon as constant, but the topics' occurrence and correlations change significantly over time. We present results on nine months of personal email, 17 years of NIPS research papers and over 200 years of presidential state-of-the-union addresses, showing improved topics, better timestamp prediction, and interpretable trends.", "title": "" } ]
[ { "docid": "ff36b5154e0b85faff09a5acbb39bb0a", "text": "During a frequent survey in the northwest Indian Himalayan region, a new species-Cordyceps macleodganensis-was encountered. This species is described on the basis of its macromorphological features, microscopic details, and internal transcribed spacer sequencing. This species showed only 90% resemblance to Cordyceps gracilis. The chemical composition of the mycelium showed protein (14.95 ± 0.2%) and carbohydrates (59.21 ± 3.8%) as the major nutrients. This species showed appreciable amounts of P-carotene, lycopene, phenolic compounds, polysaccharides, and flavonoids. Mycelial culture of this species showed higher effectiveness for ferric-reducing antioxidant power, DPPH radical scavenging activity, ferrous ion-chelating activity, and scavenging ability on superoxide anion-derived radicals, calculated by half-maximal effective concentrations.", "title": "" }, { "docid": "5e90e620455442838c1f5ba287b81de6", "text": "In this paper, we address the problem of combining automatic lane-keeping and driver's steering for either obstacle avoidance or lane-change maneuvers for passing purposes or any other desired maneuvers, through a closed-loop control strategy. The automatic lane-keeping control loop is never opened, and no on/off switching strategy is used. During the driver's maneuver, the vehicle lateral dynamics are controlled by the driver himself through the vehicle steering system. When there is no driver's steering action, the vehicle center of gravity tracks the center of the traveling lane thanks to the automatic lane-keeping system. At the beginning (end) of the maneuver, the lane-keeping task is released (resumed) safely and smoothly. The performance of the proposed closed-loop structure is shown both by means of simulations and through experimental results obtained along Italian highways.", "title": "" }, { "docid": "e17284a2cfff3f9d1ad6c471acadc553", "text": "Baby-Led Weaning (BLW) is an alternative method for introducing complementary foods to infants in which the infant feeds themselves hand-held foods instead of being spoon-fed by an adult. The BLW infant also shares family food and mealtimes and is offered milk (ideally breast milk) on demand until they self-wean. Anecdotal evidence suggests that many parents are choosing this method instead of conventional spoon-feeding of purées. Observational studies suggest that BLW may encourage improved eating patterns and lead to a healthier body weight, although it is not yet clear whether these associations are causal. This review evaluates the literature with respect to the prerequisites for BLW, which we have defined as beginning complementary foods at six months (for safety reasons), and exclusive breastfeeding to six months (to align with WHO infant feeding guidelines); the gross and oral motor skills required for successful and safe self-feeding of whole foods from six months; and the practicalities of family meals and continued breastfeeding on demand. Baby-Led Weaning will not suit all infants and families, but it is probably achievable for most. However, ultimately, the feasibility of BLW as an approach to infant feeding can only be determined in a randomized controlled trial. Given the popularity of BLW amongst parents, such a study is urgently needed.", "title": "" }, { "docid": "e7fd9e94a30d1f79d02825683bcfe10f", "text": "Gigantomastia is relatively rare and mostly unknown manifestation in its diagnostic and therapeutic approach. 
It is composed by many categories (idiopathic, Juvenile, pregnancy, Medication) that can affect women with strict profile. We report the case of a very important idiopathic gigantomastia which was operated using a technique with superior pedicle with resection of 5kg per breast. The evolution was marked by the occurrence of recurrence at 18 months. Through, the analysis of this observation and review of the literature, the authors review the different aspects of this pathology.", "title": "" }, { "docid": "5edc557fbcf1d9a91560739058274900", "text": "A number of technological advances have led to a renewed interest on dynamic vehicle routing problems. This survey classifies routing problems from the perspective of information quality and evolution. After presenting a general description of dynamic routing, we introduce the notion of degree of dynamism, and present a comprehensive review of applications and solution methods for dynamic vehicle routing problems. ∗Corresponding author: gueret@mines-nantes.fr", "title": "" }, { "docid": "15f51cbbb75d236a5669f613855312e0", "text": "The recent work of Gatys et al., who characterized the style of an image by the statistics of convolutional neural network filters, ignited a renewed interest in the texture generation and image stylization problems. While their image generation technique uses a slow optimization process, recently several authors have proposed to learn generator neural networks that can produce similar outputs in one quick forward pass. While generator networks are promising, they are still inferior in visual quality and diversity compared to generation-by-optimization. In this work, we advance them in two significant ways. First, we introduce an instance normalization module to replace batch normalization with significant improvements to the quality of image stylization. Second, we improve diversity by introducing a new learning formulation that encourages generators to sample unbiasedly from the Julesz texture ensemble, which is the equivalence class of all images characterized by certain filter responses. Together, these two improvements take feed forward texture synthesis and image stylization much closer to the quality of generation-via-optimization, while retaining the speed advantage.", "title": "" }, { "docid": "74f8127bc620fa1c9797d43dedea4d45", "text": "A novel system for long-term tracking of a human face in unconstrained videos is built on Tracking-Learning-Detection (TLD) approach. The system extends TLD with the concept of a generic detector and a validator which is designed for real-time face tracking resistent to occlusions and appearance changes. The off-line trained detector localizes frontal faces and the online trained validator decides which faces correspond to the tracked subject. Several strategies for building the validator during tracking are quantitatively evaluated. The system is validated on a sitcom episode (23 min.) and a surveillance (8 min.) video. In both cases the system detects-tracks the face and automatically learns a multi-view model from a single frontal example and an unlabeled video.", "title": "" }, { "docid": "8bfb4ce78cdff8572c28dbb61d120c84", "text": "B.P.C. Kreukels et al. 
(eds.), Gender Dysphoria and Disorders of Sex Development: Progress in Care and Knowledge, Focus on Sexuality Research, DOI 10.1007/978-1-4614-7441-8_2, © Springer Science+Business Media New York 2014 Abstract The development of gender identity, its variance, and gender dysphoria is thought to be a complex process involving biological and psychosocial factors. Heritability studies have demonstrated a genetic factor for the development of gender dysphoria. The brain is regarded as the anatomical substrate of gender identity, and sex differences of the brain are studied to elucidate the process of gender identity development. Many sex differences have been attributed to hormonal action, and the fi rst genetic studies in transsexuals were focused on sex-steroid-related genes. To this day, a convincing candidate gene has not been identifi ed, and it is now known that sex chromosomes have a direct effect on sex differentiation and that they may play a role in gender identity development. For future studies of the genetic base of gender dysphoria, new techniques, such as genome-wide studies, have become available. In addition, epigenetic studies may provide for a different association perspective of the genetics of gender dysphoria. Chapter 2 Genetic Aspects of Gender Identity Development and Gender Dysphoria", "title": "" }, { "docid": "a021d3c709b1684bb5e95d221e0806bf", "text": "Trace element determination in seawater is analytically challenging due to the typically very low concentrations of the trace elements and the potential interference of the salt matrix. A common way to address the challenge is to pre-concentrate the trace elements on a chelating resin, then rinse the matrix elements from the resin and subsequently elute and detect the trace elements using inductively coupled plasma mass spectrometry (ICP-MS). This technique typically involves time-consuming pre-treatment of the samples for 'off-line' analyses or complicated sample introduction systems involving several pumps and valves for 'on-line' analyses. As an alternative, the following method offers a simple method for 'on-line' analyses of seawater by ICP-MS. As opposed to previous methods, excess seawater was pumped through the nebulizer of the ICP-MS during the pre-concentration step but the gas flow was adjusted so that the seawater was pumped out as waste without being sprayed into the instrument. Advantages of the method include: •Simple and convenient analyses of seawater requiring no changes to the 'standard' sample introduction system except from a resin-filled micro-column connected to the sample tube. The 'standard' sample introduction system refers to that used for routine digest-solution analyses of biota and sediment by ICP-MS using only one peristaltic pump; and•Accurate determination of the elements V, Mn, Co, Ni, Cu, Zn, Cd and Pb in a range of different seawater matrices verified by participation in 6 successive rounds of the international laboratory intercalibration program QUASIMEME.", "title": "" }, { "docid": "1364758783c75a39112d01db7e7cfc63", "text": "Steganography plays an important role in secret communication in digital worlds and open environments like Internet. Undetectability and imperceptibility of confidential data are major challenges of steganography methods. This article presents a secure steganography method in frequency domain based on partitioning approach. The cover image is partitioned into 8×8 blocks and then integer wavelet transform through lifting scheme is performed for each block. 
The symmetric RC4 encryption method is applied to secret message to obtain high security and authentication. Tree Scan Order is performed in frequency domain to find proper location for embedding secret message. Secret message is embedded in cover image with minimal degrading of the quality. Experimental results demonstrate that the proposed method has achieved superior performance in terms of high imperceptibility of stego-image and it is secure against statistical attack in comparison with existing methods.", "title": "" }, { "docid": "e8638ac34f416ac74e8e77cdc206ef04", "text": "The modular multilevel converter (M2C) has become an increasingly important topology in medium- and high-voltage applications. A limitation is that it relies on positive and negative half-cycles of the ac output voltage waveform to achieve charge balance on the submodule capacitors. To overcome this constraint a secondary power loop is introduced that exchanges power with the primary power loops at the input and output. Power is exchanged between the primary and secondary loops by using the principle of orthogonality of power flow at different frequencies. Two modular multilevel topologies are proposed to step up or step down dc in medium- and high-voltage dc applications: the tuned filter modular multilevel dc converter and the push-pull modular multilevel dc converter. An analytical simulation of the latter converter is presented to explain the operation.", "title": "" }, { "docid": "712d73e195fabe7d51b0fabe077f4f49", "text": "Ubiquitous smart environments, equipped with low-cost and easy-deployable wireless sensor networks (WSNs) and widespread mobile ad hoc networks (MANETs), are opening brand new opportunities in wide-scale urban monitoring. Indeed, MANET and WSN convergence paves the way for the development of brand new Internet of Things (IoT) communication platforms with a high potential for a wide range of applications in different domains. Urban data collection, i.e., the harvesting of monitoring data sensed by a large number of collaborating sensors, is a challenging task because of many open technical issues, from typical WSN limitations (bandwidth, energy, delivery time, etc.) to the lack of widespread WSN data collection standards, needed for practical deployment in existing and upcoming IoT scenarios. In particular, effective collection is crucial for classes of smart city services that require a timely delivery of urgent data such as environmental monitoring, homeland security, and city surveillance. After surveying the existing WSN interoperability efforts for urban sensing, this paper proposes an original solution to integrate and opportunistically exploit MANET overlays, impromptu, and collaboratively formed over WSNs, to boost urban data harvesting in IoT. Overlays are used to dynamically differentiate and fasten the delivery of urgent sensed data over low-latency MANET paths by integrating with latest emergent standards/specifications for WSN data collection. The reported experimental results show the feasibility and effectiveness (e.g., limited coordination overhead) of the proposed solution.", "title": "" }, { "docid": "88a21d973ec80ee676695c95f6b20545", "text": "Three-dimensional models provide a volumetric representation of space which is important for a variety of robotic applications including flying robots and robots that are equipped with manipulators. In this paper, we present an open-source framework to generate volumetric 3D environment models. 
Our mapping approach is based on octrees and uses probabilistic occupancy estimation. It explicitly represents not only occupied space, but also free and unknown areas. Furthermore, we propose an octree map compression method that keeps the 3D models compact. Our framework is available as an open-source C++ library and has already been successfully applied in several robotics projects. We present a series of experimental results carried out with real robots and on publicly available real-world datasets. The results demonstrate that our approach is able to update the representation efficiently and models the data consistently while keeping the memory requirement at a minimum.", "title": "" }, { "docid": "24e0fb7247644ba6324de9c86fdfeb12", "text": "There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insights into their behavior and thought processes. XAI allows users and parts of the internal system to be more transparent, providing explanations of their decisions in some level of detail. These explanations are important to ensure algorithmic fairness, identify potential bias/problems in the training data, and to ensure that the algorithms perform as expected. However, explanations produced by these systems is neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we provide our definition of explainability and show how it can be used to classify existing literature. We discuss why current approaches to explanatory methods especially for deep neural networks are insufficient. Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.", "title": "" }, { "docid": "ea64ba0b1c3d4ed506fb3605893fef92", "text": "We explore frame-level audio feature learning for chord recognition using artificial neural networks. We present the argument that chroma vectors potentially hold enough information to model harmonic content of audio for chord recognition, but that standard chroma extractors compute too noisy features. This leads us to propose a learned chroma feature extractor based on artificial neural networks. It is trained to compute chroma features that encode harmonic information important for chord recognition, while being robust to irrelevant interferences. We achieve this by feeding the network an audio spectrum with context instead of a single frame as input. This way, the network can learn to selectively compensate noise and resolve harmonic ambiguities. We compare the resulting features to hand-crafted ones by using a simple linear frame-wise classifier for chord recognition on various data sets. The results show that the learned feature extractor produces superior chroma vectors for chord recognition.", "title": "" }, { "docid": "bf860507851e250a154ed798cd9a06ae", "text": "Blockchain platforms, such as Ethereum, promise to facilitate transactions on a decentralized computing platform among parties that have not established trust. Recognition of the unique challenges of blockchain programming has inspired developers to create domain-specific languages, such as Solidity, for programming blockchain systems. Unfortunately, bugs in Solidity programs have recently been exploited to steal money. 
We propose a new programming language, Obsidian, to make it easier for programmers to write correct programs.", "title": "" }, { "docid": "7be63b45354e6f5e29855f7fd5ffbe52", "text": "Current social media research mainly focuses on temporal trends of the information flow and on the topology of the social graph that facilitates the propagation of information. In this paper we study the effect of the content of the idea on the information propagation. We present an efficient hybrid approach based on a linear regression for predicting the spread of an idea in a given time frame. We show that a combination of content features with temporal and topological features minimizes prediction error.\n Our algorithm is evaluated on Twitter hashtags extracted from a dataset of more than 400 million tweets. We analyze the contribution and the limitations of the various feature types to the spread of information, demonstrating that content aspects can be used as strong predictors thus should not be disregarded. We also study the dependencies between global features such as graph topology and content features.", "title": "" }, { "docid": "65c823a03c6626f76f753c52e120543c", "text": "Within interaction design, several forces have coincided in the last few years to fuel the emergence of a new field of inquiry, which we summarize under the label of embodied interaction. The term was introduced to the HCI community by Dourish (2001) as a way to combine the then-distinct perspectives of tangible interaction (Ullmer & Ishii, 2001) and social computing. Briefly, his point was that computing must be approached as twice embodied: in the physical/material sense and in the sense of social fabrics and practices. Dourish’s work has been highly influential in the academic interaction design field and has to be considered a seminal contribution at the conceptual level. Still, we find that more needs to be done to create a body of contemporary designoriented knowledge on embodied interaction. Several recent developments within academia combine to inform and advance the emerging field of embodied interaction. For example, the field of wearable computing (see Mann, 1997, for an introduction to early and influential work), which can be considered a close cousin of tangible interaction, puts particular emphasis on physical bodiness and full-body interaction. The established discipline of human-computer interaction (HCI) has increasingly turned towards considering the whole body in interaction, often drawing on recent advances in cognitive science (e.g., Johnson, 2007) and philosophy (e.g., Shusterman, 2008). Some characteristic examples are the work of Twenebowa Larssen et al. (2007) on conceptualization of haptic and kinaesthetic sensations in tangible interaction and Schiphorst’s (2009) design work on the somaesthetics of interaction. Höök (2009) provides an interesting view of the “bodily turn” in HCI through the progression of four successive design cases. In more technical terms, the growing acceptance of the Internet of Things vision (which according to Dodson [2003] traces its origins to MIT around 1999) serves as a driver and enabler for realizations of embodied interaction. 
Finally, it should be mentioned that analytical perspectives on interaction in media studies are increasingly moving from interactivity to performativity, a concept of long standing in, for example, performance studies which turns out to have strong implications also for how interaction is seen as socially embodied (see Bardzell, Bolter, & Löwgren, 2010, for an example). The picture that emerges is one of a large and somewhat fuzzy design space, that has been predicted for quite a few years within academia but is only now becoming increasingly amenable ORIGINAL ARTICLE", "title": "" }, { "docid": "a636f977eb29b870cefe040f3089de44", "text": "We consider the network implications of virtual reality (VR) and augmented reality (AR). While there are intrinsic challenges for AR/VR applications to deliver on their promise, their impact on the underlying infrastructure will be undeniable. We look at augmented and virtual reality and consider a few use cases where they could be deployed. These use cases define a set of requirements for the underlying network. We take a brief look at potential network architectures. We then make the case for Information-centric networks as a potential architecture to assist the deployment of AR/VR and draw a list of challenges and future research directions for next generation networks to better support AR/VR.", "title": "" }, { "docid": "48b6f2cb0c9fd50619f08c433ea40068", "text": "The medicinal value of cannabis (marijuana) is well documented in the medical literature. Cannabinoids, the active ingredients in cannabis, have many distinct pharmacological properties. These include analgesic, anti-emetic, anti-oxidative, neuroprotective and anti-inflammatory activity, as well as modulation of glial cells and tumor growth regulation. Concurrent with all these advances in the understanding of the physiological and pharmacological mechanisms of cannabis, there is a strong need for developing rational guidelines for dosing. This paper will review the known chemistry and pharmacology of cannabis and, on that basis, discuss rational guidelines for dosing.", "title": "" } ]
scidocsrr
38dbf7dd1f5690bc2d8cb2b98a2cdabf
Formal Verification of Neural Network Controlled Autonomous Systems
[ { "docid": "711b8ac941db1e6e1eef093ca340717b", "text": "Deep neural networks (DNNs) have a wide range of applications, and software employing them must be thoroughly tested, especially in safety critical domains. However, traditional software testing methodology, including test coverage criteria and test case generation algorithms, cannot be applied directly to DNNs. This paper bridges this gap. First, inspired by the traditional MC/DC coverage criterion, we propose a set of four test criteria that are tailored to the distinct features of DNNs. Our novel criteria are incomparable and complement each other. Second, for each criterion, we give an algorithm for generating test cases based on linear programming (LP). The algorithms produce a new test case (i.e., an input to the DNN) by perturbing a given one. They encode the test requirement and a fragment of the DNN by fixing the activation pattern obtained from the given input example, and then minimize the difference between the new and the current inputs. Finally, we validate our method on a set of networks trained on the MNIST dataset. The utility of our method is shown experimentally with four objectives: (1) bug finding; (2) DNN safety statistics; (3) testing efficiency and (4) DNN internal structure analysis.", "title": "" }, { "docid": "c85ee4139239b17d98b0d77836e00b72", "text": "We present a suite of reinforcement learning environments illustrating various safety properties of intelligent agents. These problems include safe interruptibility, avoiding side effects, absent supervisor, reward gaming, safe exploration, as well as robustness to self-modification, distributional shift, and adversaries. To measure compliance with the intended safe behavior, we equip each environment with a performance function that is hidden from the agent. This allows us to categorize AI safety problems into robustness and specification problems, depending on whether the performance function corresponds to the observed reward function. We evaluate A2C and Rainbow, two recent deep reinforcement learning agents, on our environments and show that they are not able to solve them satisfactorily.", "title": "" }, { "docid": "50d6f6a65099ce0ffb804f15a9adcaa1", "text": "Machine Learning (ML) algorithms are now used in a wide range of application domains in society. Naturally, software implementations of these algorithms have become ubiquitous. Faults in ML software can cause substantial losses in these application domains. Thus, it is very critical to conduct effective testing of ML software to detect and eliminate its faults. However, testing ML software is difficult, partly because producing test oracles used for checking behavior correctness (such as using expected properties or expected test outputs) is challenging. In this paper, we propose an approach of multiple-implementation testing to test supervised learning software, a major type of ML software. In particular, our approach derives a test input’s proxy oracle from the majority-voted output running the test input of multiple implementations of the same algorithm (based on a pre-defined percentage threshold). Our approach reports likely those test inputs whose outputs (produced by an implementation under test) are different from the majority-voted outputs as failing tests. We evaluate our approach on two highly popular supervised learning algorithms: k-Nearest Neighbor (kNN) and Naive Bayes (NB). 
Our results show that our approach is highly effective in detecting faults in real-world supervised learning software. In particular, our approach detects 13 real faults and 1 potential fault from 19 kNN implementations and 16 real faults from 7 NB implementations. Our approach can even detect 7 real faults and 1 potential fault among the three popularly used open-source ML projects (Weka, RapidMiner,", "title": "" } ]
[ { "docid": "3ba10a680a5204b8242203e053fc3379", "text": "Recommender system has been more and more popular and widely used in many applications recently. The increasing information available, not only in quantities but also in types, leads to a big challenge for recommender system that how to leverage these rich information to get a better performance. Most traditional approaches try to design a specific model for each scenario, which demands great efforts in developing and modifying models. In this technical report, we describe our implementation of feature-based matrix factorization. This model is an abstract of many variants of matrix factorization models, and new types of information can be utilized by simply defining new features, without modifying any lines of code. Using the toolkit, we built the best single model reported on track 1 of KDDCup’11.", "title": "" }, { "docid": "1a8e9b74d4c1a32299ca08e69078c70c", "text": "Semantic Textual Similarity (STS) measures the degree of semantic equivalence between two segments of text, even though the similar context is expressed using different words. The textual segments are word phrases, sentences, paragraphs or documents. The similarity can be measured using lexical, syntactic and semantic information embedded in the sentences. The STS task in SemEval workshop is viewed as a regression problem, where real-valued output is clipped to the range 0-5 on a sentence pair. In this paper, empirical evaluations are carried using lexical, syntactic and semantic features on STS 2016 dataset. A new syntactic feature, Phrase Entity Alignment (PEA) is proposed. A phrase entity is a conceptual unit in a sentence with a subject or an object and its describing words. PEA aligns phrase entities present in the sentences based on their similarity scores. STS score is measured by combing the similarity scores of all aligned phrase entities. The impact of PEA on semantic textual equivalence is depicted using Pearson correlation between system generated scores and the human annotations. The proposed system attains a mean score of 0.7454 using random forest regression model. The results indicate that the system using the lexical, syntactic and semantic features together with PEA feature perform comparably better than existing systems.", "title": "" }, { "docid": "5878d3cdbf74928fa002ab21cc62612f", "text": "We focus on the multi-label categorization task for short texts and explore the use of a hierarchical structure (HS) of categories. In contrast to the existing work using non-hierarchical flat model, the method leverages the hierarchical relations between the categories to tackle the data sparsity problem. The lower the HS level, the worse the categorization performance. Because lower categories are fine-grained and the amount of training data per category is much smaller than that in an upper level. We propose an approach which can effectively utilize the data in the upper levels to contribute categorization in the lower levels by applying a Convolutional Neural Network (CNN) with a finetuning technique. The results using two benchmark datasets show that the proposed method, Hierarchical Fine-Tuning based CNN (HFTCNN) is competitive with the state-of-the-art CNN based methods.", "title": "" }, { "docid": "2136c0e78cac259106d5424a2985e5d7", "text": "Stylistic composition is a creative musical activity, in which students as well as renowned composers write according to the style of another composer or period. 
We describe and evaluate two computational models of stylistic composition, called Racchman-Oct2010 and Racchmaninof-Oct2010. The former is a constrained Markov model and the latter embeds this model in an analogy-based design system. Racchmaninof-Oct2010 applies a pattern discovery algorithm called SIACT and a perceptually validated formula for rating pattern importance, to guide the generation of a new target design from an existing source design. A listening study is reported concerning human judgments of music excerpts that are, to varying degrees, in the style of mazurkas by Frédédric Chopin (1810-1849). The listening study acts as an evaluation of the two computational models and a third, benchmark system called Experiments in Musical Intelligence (EMI). Judges’ responses indicate that some aspects of musical style, such as phrasing and rhythm, are being modeled effectively by our algorithms. Judgments are also used to identify areas for future improvements. We discuss the broader implications of this work for the fields of engineering and design, where there is potential to make use of our models of hierarchical repetitive structure. Data and code to accompany this paper are available from www.tomcollinsresearch.net", "title": "" }, { "docid": "bb80720ee3797314c71cf33f984ac094", "text": "This article reviews eight proposed strategies for solving the Symbol Grounding Problem (SGP), which was given its classic formulation in Harnad (1990). After a concise introduction, we provide an analysis of the requirement that must be satisfied by any hypothesis seeking to solve the SGP, the zero semantical commitment condition. We then use it to assess the eight strategies, which are organised into three main approaches: representationalism, semi-representationalism and nonrepresentationalism. The conclusion is that all the strategies are semantically committed and hence that none of them provides a valid solution to the SGP, which remains an open problem.", "title": "" }, { "docid": "e28b0ab1bedd60ba83b8a575431ad549", "text": "The Decision Model and Notation (DMN) is a standard notation to specify decision logic in business applications. A central construct in DMN is a decision table. The rising use of DMN decision tables to capture and to automate everyday business decisions fuels the need to support analysis tasks on decision tables. This paper presents an opensource DMN editor to tackle three analysis tasks: detection of overlapping rules, detection of missing rules and simplification of decision tables via rule merging. The tool has been tested on large decision tables derived from a credit lending data-set.", "title": "" }, { "docid": "cf24e793c307a7a6af53f160012ee926", "text": "This work presents a single- and dual-port fully integrated millimeter-wave ultra-broadband vector network analyzer. Both circuits, realized in a commercial 0.35-μm SiGe:C technology with an ft/fmax of 170/250 GHz, cover an octave frequency bandwidth between 50-100 GHz. The presented chips can be configured to measure complex scattering parameters of external devices or determine the permittivity of different materials using an integrated millimeter-wave dielectric sensor. Both devices are based on a heterodyne architecture that achieves a receiver dynamic range of 57-72.5 dB over the complete design frequency range. Two integrated frequency synthesizer modules are included in each chip that enable the generation of the required test and local-oscillator millimeter-wave signals. 
A measurement 3σ statistical phase error lower than 0.3 ° is achieved. Automated measurement of changes in the dielectric properties of different materials is demonstrated using the proposed systems. The single- and dual-port network analyzer chips have a current consumption of 600 and 700 mA, respectively, drawn from a single 3.3-V supply.", "title": "" }, { "docid": "472f2d8adb1c35fa7d4195323e53a8c2", "text": "Serverless computing promises to provide applications with cost savings and extreme elasticity. Unfortunately, slow application and container initialization can hurt common-case latency on serverless platforms. In this work, we analyze Linux container primitives, identifying scalability bottlenecks related to storage and network isolation. We also analyze Python applications from GitHub and show that importing many popular libraries adds about 100 ms to startup. Based on these findings, we implement SOCK, a container system optimized for serverless workloads. Careful avoidance of kernel scalability bottlenecks gives SOCK an 18× speedup over Docker. A generalized-Zygote provisioning strategy yields an additional 3× speedup. A more sophisticated three-tier caching strategy based on Zygotes provides a 45× speedup over SOCK without Zygotes. Relative to AWS Lambda and OpenWhisk, OpenLambda with SOCK reduces platform overheads by 2.8× and 5.3× respectively in an image processing case study.", "title": "" }, { "docid": "3cbc035529138be1d6f8f66a637584dd", "text": "Regression models such as the Cox proportional hazards model have had increasing use in modelling and estimating the prognosis of patients with a variety of diseases. Many applications involve a large number of variables to be modelled using a relatively small patient sample. Problems of overfitting and of identifying important covariates are exacerbated in analysing prognosis because the accuracy of a model is more a function of the number of events than of the sample size. We used a general index of predictive discrimination to measure the ability of a model developed on training samples of varying sizes to predict survival in an independent test sample of patients suspected of having coronary artery disease. We compared three methods of model fitting: (1) standard 'step-up' variable selection, (2) incomplete principal components regression, and (3) Cox model regression after developing clinical indices from variable clusters. We found regression using principal components to offer superior predictions in the test sample, whereas regression using indices offers easily interpretable models nearly as good as the principal components models. Standard variable selection has a number of deficiencies.", "title": "" }, { "docid": "6789e2e452a19da3a00b95a27994ee62", "text": "Reflection in healthcare education is an emerging topic with many recently published studies and reviews. This current systematic review of reviews (umbrella review) of this field explores the following aspects: which definitions and models are currently in use; how reflection impacts design, evaluation, and assessment; and what future challenges must be addressed. Nineteen reviews satisfying the inclusion criteria were identified. Emerging themes include the following: reflection is currently regarded as self-reflection and critical reflection, and the epistemology-of-practice notion is less in tandem with the evidence-based medicine paradigm of modern science than expected. 
Reflective techniques that are recognised in multiple settings (e.g., summative, formative, group vs. individual) have been associated with learning, but assessment as a research topic, is associated with issues of validity, reliability, and reproducibility. Future challenges include the epistemology of reflection in healthcare education and the development of approaches for practising and assessing reflection without loss of theoretical background.", "title": "" }, { "docid": "b83e537a2c8dcd24b096005ef0cb3897", "text": "We present Deep Speaker, a neural speaker embedding system that maps utterances to a hypersphere where speaker similarity is measured by cosine similarity. The embeddings generated by Deep Speaker can be used for many tasks, including speaker identification, verification, and clustering. We experiment with ResCNN and GRU architectures to extract the acoustic features, then mean pool to produce utterance-level speaker embeddings, and train using triplet loss based on cosine similarity. Experiments on three distinct datasets suggest that Deep Speaker outperforms a DNN-based i-vector baseline. For example, Deep Speaker reduces the verification equal error rate by 50% (relatively) and improves the identification accuracy by 60% (relatively) on a text-independent dataset. We also present results that suggest adapting from a model trained with Mandarin can improve accuracy for English speaker recognition.", "title": "" }, { "docid": "6b2c009eca44ea374bb5f1164311e593", "text": "The ECG signal has been shown to contain relevant information for human identification. Even though results validate the potential of these signals, data acquisition methods and apparatus explored so far compromise user acceptability, requiring the acquisition of ECG at the chest. In this paper, we propose a finger-based ECG biometric system, that uses signals collected at the fingers, through a minimally intrusive 1-lead ECG setup recurring to Ag/AgCl electrodes without gel as interface with the skin. The collected signal is significantly more noisy than the ECG acquired at the chest, motivating the application of feature extraction and signal processing techniques to the problem. Time domain ECG signal processing is performed, which comprises the usual steps of filtering, peak detection, heartbeat waveform segmentation, and amplitude normalization, plus an additional step of time normalization. Through a simple minimum distance criterion between the test patterns and the enrollment database, results have revealed this to be a promising technique for biometric applications.", "title": "" }, { "docid": "7974d3e3e9c431256ee35c3032288bd1", "text": "Nowadays, the usage of mobile device among the community worldwide has been tremendously increased. With this proliferation of mobile devices, more users are able to access the internet for variety of online application and services. As the use of mobile devices and applications grows, the rate of vulnerabilities exploitation and sophistication of attack towards the mobile user are increasing as well. To date, Google's Android Operating System (OS) are among the widely used OS for the mobile devices, the openness design and ease of use have made them popular among developer and user. Despite the advantages the android-based mobile devices have, it also invited the malware author to exploit the mobile application on the market. Prior to this matter, this research focused on investigating the behaviour of mobile malware through hybrid approach. 
The hybrid approach correlates and reconstructs the result from the static and dynamic malware analysis in producing a trace of malicious event. Based on the finding, this research proposed a general mobile malware behaviour model that can contribute in identifying the key features in detecting mobile malware on an Android Platform device.", "title": "" }, { "docid": "8addf385803074288c1a07df92ed1b9f", "text": "In a permanent magnet synchronous motor where inductances vary as a function of rotor angle, the 2 phase (d-q) equivalent circuit model is commonly used for simplicity and intuition. In this article, a two phase model for a PM synchronous motor is derived and the properties of the circuits and variables are discussed in relation to the physical 3 phase entities. Moreover, the paper suggests methods of obtaining complete model parameters from simple laboratory tests. Due to the lack of developed procedures in the past, obtaining model parameters were very difficult and uncertain, because some model parameters are not directly measurable and vary depending on the operating conditions. Formulation is mainly for interior permanent magnet synchronous motors but can also be applied to surface permanent magnet motors.", "title": "" }, { "docid": "1d084096acea83a62ecc6b010b302622", "text": "The investigation of human activity patterns from location-based social networks like Twitter is an established approach of how to infer relationships and latent information that characterize urban structures. Researchers from various disciplines have performed geospatial analysis on social media data despite the data’s high dimensionality, complexity and heterogeneity. However, user-generated datasets are of multi-scale nature, which results in limited applicability of commonly known geospatial analysis methods. Therefore in this paper, we propose a geographic, hierarchical self-organizing map (Geo-H-SOM) to analyze geospatial, temporal and semantic characteristics of georeferenced tweets. The results of our method, which we validate in a case study, demonstrate the ability to explore, abstract and cluster high-dimensional geospatial and semantic information from crowdsourced data. ARTICLE HISTORY Received 8 April 2015 Accepted 19 September 2015", "title": "" }, { "docid": "cec2212f74766872cb46947f59f355a9", "text": "A Boltzmann game is an n-player repeated game, in which Boltzmann machines are employed by players to choose their optimal strategy for each round of the game. Players only have knowledge about the machine they have selected and their own strategy set. Information about other the players and the game’s pay-off function are concealed from all players. Players therefore select their strategies independent of the choices made by their opponents. A player’s pay-off, on the other hand, will be affected by the choices made by other players playing the game. As an example of this game, we play a repeated zero-sum matrix game between two Boltzmann machines. We show that a saddle point will exist for this type of Boltzmann game.", "title": "" }, { "docid": "b397d82e24f527148cb46fbabda2b323", "text": "This paper describes Illinois corn yield estimation using deep learning and another machine learning, SVR. Deep learning is a technique that has been attracting attention in recent years of machine learning, it is possible to implement using the Caffe. High accuracy estimation of crop yield is very important from the viewpoint of food security. 
However, since every country prepare data inhomogeneously, the implementation of the crop model in all regions is difficult. Deep learning is possible to extract important features for estimating the object from the input data, so it can be expected to reduce dependency of input data. The network model of two InnerProductLayer was the best algorithm in this study, achieving RMSE of 6.298 (standard value). This study highlights the advantages of deep learning for agricultural yield estimating.", "title": "" }, { "docid": "12f8d5a55ba9b1e773fbab5429880db6", "text": "Addiction is associated with neuroplasticity in the corticostriatal brain circuitry that is important for guiding adaptive behaviour. The hierarchy of corticostriatal information processing that normally permits the prefrontal cortex to regulate reinforcement-seeking behaviours is impaired by chronic drug use. A failure of the prefrontal cortex to control drug-seeking behaviours can be linked to an enduring imbalance between synaptic and non-synaptic glutamate, termed glutamate homeostasis. The imbalance in glutamate homeostasis engenders changes in neuroplasticity that impair communication between the prefrontal cortex and the nucleus accumbens. Some of these pathological changes are amenable to new glutamate- and neuroplasticity-based pharmacotherapies for treating addiction.", "title": "" }, { "docid": "1f4c0407c8da7b5fe685ad9763be937b", "text": "As the dominant mobile computing platform, Android has become a prime target for cyber-security attacks. Many of these attacks are manifested at the application level, and through the exploitation of vulnerabilities in apps downloaded from the popular app stores. Increasingly, sophisticated attacks exploit the vulnerabilities in multiple installed apps, making it extremely difficult to foresee such attacks, as neither the app developers nor the store operators know a priori which apps will be installed together. This paper presents an approach that allows the end-users to safeguard a given bundle of apps installed on their device from such attacks. The approach, realized in a tool, called DROIDGUARD, combines static code analysis with lightweight formal methods to automatically infer security-relevant properties from a bundle of apps. It then uses a constraint solver to synthesize possible security exploits, from which fine-grained security policies are derived and automatically enforced to protect a given device. In our experiments with over 4,000 Android apps, DROIDGUARD has proven to be highly effective at detecting previously unknown vulnerabilities as well as preventing their exploitation.", "title": "" }, { "docid": "2c4a2d41653f05060ff69f1c9ad7e1a6", "text": "Until recently the information technology (IT)-centricity was the prevailing paradigm in cyber security that was organized around confidentiality, integrity and availability of IT assets. Despite of its widespread usage, the weakness of IT-centric cyber security became increasingly obvious with the deployment of very large IT infrastructures and introduction of highly mobile tactical missions where the IT-centric cyber security was not able to take into account the dynamics of time and space bound behavior of missions and changes in their operational context. 
In this paper we will show that the move from IT-centricity towards to the notion of cyber attack resilient missions opens new opportunities in achieving the completion of mission goals even if the IT assets and services that are supporting the missions are under cyber attacks. The paper discusses several fundamental architectural principles of achieving cyber attack resilience of missions, including mission-centricity, survivability through adaptation, synergistic mission C2 and mission cyber security management, and the real-time temporal execution of the mission tasks. In order to achieve the overall system resilience and survivability under a cyber attack, both, the missions and the IT infrastructure are considered as two interacting adaptable multi-agent systems. While the paper is mostly concerned with the architectural principles of achieving cyber attack resilient missions, several models and algorithms that support resilience of missions are discussed in fairly detailed manner.", "title": "" } ]
scidocsrr
58c824c9fc2cb826e7bd3708ef15e8f7
Personal knowledge questions for fallback authentication: security questions in the era of Facebook
[ { "docid": "71c7c98b55b2b2a9c475d4522310cfaa", "text": "This paper studies an active underground economy which specializes in the commoditization of activities such as credit card fraud, identity theft, spamming, phishing, online credential theft, and the sale of compromised hosts. Using a seven month trace of logs collected from an active underground market operating on public Internet chat networks, we measure how the shift from “hacking for fun” to “hacking for profit” has given birth to a societal substrate mature enough to steal wealth into the millions of dollars in less than one year.", "title": "" } ]
[ { "docid": "070fb90db924de273c4f4351dd76f4ff", "text": "Path planning algorithms have been used in different applications with the aim of finding a suitable collision-free path which satisfies some certain criteria such as the shortest path length and smoothness; thus, defining a suitable curve to describe path is essential. The main goal of these algorithms is to find the shortest and smooth path between the starting and target points. This paper makes use of a Bézier curve-based model for path planning. The control points of the Bézier curve significantly influence the length and smoothness of the path. In this paper, a novel Chaotic Particle Swarm Optimization (CPSO) algorithm has been proposed to optimize the control points of Bézier curve, and the proposed algorithm comes in two variants: CPSO-I and CPSO-II. Using the chosen control points, the optimum smooth path that minimizes the total distance between the starting and ending points is selected. To evaluate the CPSO algorithm, the results of the CPSO-I and CPSO-II algorithms are compared with the standard PSO algorithm. The experimental results proved that the proposed algorithm is capable of finding the optimal path. Moreover, the CPSO algorithm was tested against different numbers of control points and obstacles, and the CPSO algorithm achieved competitive results.", "title": "" }, { "docid": "12b855b39278c49d448fbda9aa56cacf", "text": "Human visual system (HVS) can perceive constant color under varying illumination conditions while digital images record information of both reflectance (physical color) of objects and illumination. Retinex theory, formulated by Edwin H. Land, aimed to simulate and explain this feature of HVS. However, to recover the reflectance from a given image is in general an ill-posed problem. In this paper, we establish an L1-based variational model for Retinex theory that can be solved by a fast computational approach based on Bregman iteration. Compared with previous works, our L1-Retinex method is more accurate for recovering the reflectance, which is illustrated by examples and statistics. In medical images such as magnetic resonance imaging (MRI), intensity inhomogeneity is often encountered due to bias fields. This is a similar formulation to Retinex theory while the MRI has some specific properties. We then modify the L1-Retinex method and develop a new algorithm for MRI data. We demonstrate the performance of our method by comparison with previous work on simulated and real data.", "title": "" }, { "docid": "ed189b8fa606cc2d86706d199dd71a89", "text": "This paper presents PATTY: a large resource for textual patterns that denote binary relations between entities. The patterns are semantically typed and organized into a subsumption taxonomy. The PATTY system is based on efficient algorithms for frequent itemset mining and can process Web-scale corpora. It harnesses the rich type system and entity population of large knowledge bases. The PATTY taxonomy comprises 350,569 pattern synsets. Random-sampling-based evaluation shows a pattern accuracy of 84.7%. PATTY has 8,162 subsumptions, with a random-sampling-based precision of 75%. The PATTY resource is freely available for interactive access and download.", "title": "" }, { "docid": "14f235fa9a30d8686ea5f4bfe7823fcc", "text": "Due to limited bandwidth, storage, and computational resources, and to the dynamic nature of the Web, search engines cannot index every Web page, and even the covered portion of the Web cannot be monitored continuously for changes. 
Therefore it is essential to develop effective crawling strategies to prioritize the pages to be indexed. The issue is even more important for topic-specific search engines, where crawlers must make additional decisions based on the relevance of visited pages. However, it is difficult to evaluate alternative crawling strategies because relevant sets are unknown and the search space is changing. We propose three different methods to evaluate crawling strategies. We apply the proposed metrics to compare three topic-driven crawling algorithms based on similarity ranking, link analysis, and adaptive agents.", "title": "" }, { "docid": "5d1201b73e36ea7ff5b8da6a9720109d", "text": "The ubiquitous cases of abnormal transactions with intent to defraud are a global phenomenon. An architecture that enhances fraud detection was designed using a supervised data mining technique: a radial basis function (RBF) network with the interpolation approximation method. Several base models were thus created, and in turn used in aggregation to select the optimum model using the misclassification error, accuracy, sensitivity, specificity and receiver operating characteristics (ROC) metrics. The results show the model has zero tolerance for fraud, performing better especially in cases where there was no fraud; doubtful cases were flagged rather than allowing a fraud incident to pass undetected. Expectedly, the model’s computations converge faster, at 200 iterations. The model is generic, with characteristics similar to other classification methods but distinct parameters, thereby minimizing the time and cost of fraud detection by adopting a computationally efficient algorithm.", "title": "" }, { "docid": "227ad7173deb06c2d492bb27ce70f5df", "text": "A public service motivation (PSM) inclines employees to provide effort out of concern for the impact of that effort on a valued social service. Though deemed to be important in the literature on public administration, this motivation has not been formally considered by economists. When a PSM exists, this paper establishes conditions under which government bureaucracy can better obtain PSM motivated effort from employees than a standard profit maximizing firm. The model also provides an efficiency rationale for low-powered incentives in both bureaucracies and other organizations producing social services. © 2000 Elsevier Science S.A. All rights reserved.", "title": "" }, { "docid": "5236f684bc0fdf11855a439c9d3256f6", "text": "The smart home is an environment where heterogeneous electronic devices and appliances are networked together to provide smart services in a ubiquitous manner to the individuals. As the homes become smarter, more complex, and technology dependent, the need for an adequate security mechanism with minimum individual intervention is growing. The recent serious security attacks have shown how the Internet-enabled smart homes can be turned into very dangerous spots for various ill intentions, and thus lead to privacy concerns for the individuals. For instance, an eavesdropper is able to derive the identity of a particular device/appliance via public channels that can be used to infer the life pattern of an individual within the home area network. This paper proposes an anonymous secure framework (ASF) in connected smart home environments, using solely lightweight operations. The proposed framework in this paper provides efficient authentication and key agreement, and enables devices (identity and data) anonymity and unlinkability. 
One-time session key progression regularly renews the session key for the smart devices and dilutes the risk of using a compromised session key in the ASF. It is demonstrated that computation complexity of the proposed framework is low as compared with the existing schemes, while security has been significantly improved.", "title": "" }, { "docid": "b3254d97e8c8f87e74b7e9ac10a5e7e7", "text": "In this paper, a full-vehicle active suspension system is designed to simultaneously improve vehicle ride comfort and steady-state handling performance. First, a linear suspension model of a vehicle and a nonlinear handling model are described. Next, the link between the suspension model and vehicle steady-state handling characteristics is analysed. Then, an H-infinity controller for the suspension is designed to achieve integrated ride-comfort and handling control. Finally, the controller is verified by computer simulations.", "title": "" }, { "docid": "810158f2907eec894e54a57dabb2b9c4", "text": "Dependability properties of bi-directional and braided rings are well recognized in improving communication availability. However, current ring-based topologies have no mechanisms for extreme integrity and have not been considered for emerging high-dependability markets where cost is a significant driver, such as the automotive \"by-wire\" applications. This paper introduces a braided-ring architecture with superior guardian functionality and complete Byzantine fault tolerance while simultaneously reducing cost. This paper reviews anticipated requirements for high-dependability low-cost applications and emphasizes the need for regular safe testing of core coverage functions. The paper describes the ring's main mechanisms for achieving integrity and availability levels similar to SAFEbus/spl reg/ but at low automotive costs. The paper also presents a mechanism to achieve self-stabilizing TDMA-based communication and design methods for fault-tolerant protocols on a network of simplex nodes. The paper also introduces a new self-checking pair concept that leverages braided-ring properties. This novel message-based self-checking-pair concept allows high-integrity source data at extremely low cost.", "title": "" }, { "docid": "30e22be2c7383e90a6fd16becc34a586", "text": "SUMMARY\nThe etiology of age-related facial changes has many layers. Multiple theories have been presented over the past 50-100 years with an evolution of understanding regarding facial changes related to skin, soft tissue, muscle, and bone. This special topic will provide an overview of the current literature and evidence and theories of facial changes of the skeleton, soft tissues, and skin over time.", "title": "" }, { "docid": "0116f3e12fbaf2705f36d658fdbe66bb", "text": "This paper presents a metric to quantify visual scene movement perceived inside a virtual environment (VE) and illustrates how this method could be used in future studies to determine a cybersickness dose value to predict levels of cybersickness in VEs. Sensory conflict theories predict that cybersickness produced by a VE is a kind of visually induced motion sickness. A comprehensive review indicates that there is only one subjective measure to quantify visual stimuli presented inside a VE. A metric, referred to as spatial velocity (SV), is proposed. It combines objective measures of scene complexity and scene movement velocity. The theoretical basis for the proposed SV metric and the algorithms for its implementation are presented. 
Data from two previous experiments on cybersickness were reanalyzed using the metric. Results showed that increasing SV by either increasing the scene complexity or scene velocity significantly increased the rated level of cybersickness. A strong correlation between SV and the level of cybersickness was found. The use of the spatial velocity metric to predict levels of cybersickness is also discussed.", "title": "" }, { "docid": "9086d8f1d9a0978df0bd93cff4bce20a", "text": "Australian government enterprises have shown a significant interest in the cloud technology-enabled enterprise transformation. Australian government suggests the whole-of-a-government strategy to cloud adoption. The challenge is how best to realise this cloud adoption strategy for the cloud technology-enabled enterprise transformation? The cloud adoption strategy realisation requires concrete guidelines and a comprehensive practical framework. This paper proposes the use of an agile enterprise architecture framework to developing and implementing the adaptive cloud technology-enabled enterprise architecture in the Australian government context. The results of this paper indicate that a holistic strategic agile enterprise architecture approach seems appropriate to support the strategic whole-of-a-government approach to cloud technology-enabled government enterprise transformation.", "title": "" }, { "docid": "e0fff766f9ae7834d94ef8e6d444363c", "text": "Air-gap data is important for the security of computer systems. The injection of the computer virus is limited but possible, however data communication channel is necessary for the transmission of stolen data. This paper considers BFSK digital modulation applied to brightness changes of screen for unidirectional transmission of valuable data. Experimental validation and limitations of the proposed technique are provided.", "title": "" }, { "docid": "37a0f090bf6d9b9d6fa734644b3db131", "text": "We consider the problem of multi-task reinforcement learni ng where the learner is provided with a set of tasks, for which only a small number o f samples can be generated for any given policy. As the number of samples may n ot be enough to learn an accurate evaluation of the policy, it would be neces sary to identify classes of tasks with similar structure and to learn them jointly. We consider the case where the tasks share structure in their value functions, an d model this by assuming that the value functions are all sampled from a common pri or. We adopt the Gaussian process temporal-difference value function mode l and use a hierarchical Bayesian approach to model the distribution over the value f unctions. We study two cases, where all the value functions belong to the same cl ass and where they belong to an undefined number of classes. For each case, we pre sent a hierarchical Bayesian model, and derive inference algorithms for (i) joint learning of the value functions, and (ii) efficient transfer of the informat ion gained in (i) to assist learning the value function of a newly observed task.", "title": "" }, { "docid": "d9e0fd8abb80d6256bd86306b7112f20", "text": "Visible light LEDs, due to their numerous advantages, are expected to become the dominant indoor lighting technology. These lights can also be switched ON/OFF at high frequency, enabling their additional use for wireless communication and indoor positioning. In this article, visible LED light--based indoor positioning systems are surveyed and classified into two broad categories based on the receiver structure. 
The basic principle and architecture of each design category, along with various position computation algorithms, are discussed and compared. Finally, several new research, implementation, commercialization, and standardization challenges are identified and highlighted for this relatively novel and interesting indoor localization technology.", "title": "" }, { "docid": "e17d399b0a69ba47fdc82d9051191f86", "text": "The goal of inverse reinforcement learning is to find a reward function for a Markov decision process, given example traces from its optimal policy. Current IRL techniques generally rely on user-supplied features that form a concise basis for the reward. We present an algorithm that instead constructs reward features from a large collection of component features, by building logical conjunctions of those component features that are relevant to the example policy. Given example traces, the algorithm returns a reward function as well as the constructed features. The reward function can be used to recover a full, deterministic, stationary policy, and the features can be used to transplant the reward function into any novel environment on which the component features are well defined.", "title": "" }, { "docid": "3106e2105ac14674e1b13f006f352d75", "text": "Cloud computing is an emerging new computing paradigm for delivering computing services. The approach relies on a number of existing technologies e.g., the Internet, virtualization and grid computing. However, the provision of this service in a pay-as-you-go way through the popular medium of the Internet renders this computing service approach unique compared with currently available computing", "title": "" }, { "docid": "9e292d43355dbdbcf6360c88e49ba38b", "text": "This paper proposes stacked dual-patch CP antenna for GPS and SDMB services. The characteristic of CP at dual-frequency bands is achieved with a circular patch truncated corners with ears at diagonal direction. According to the dimensions of the truncated corners as well as spacing between centers of the two via-holes, the axial ratio of the CP antenna can be controlled. The good return loss results were obtained both at GPS and SDMB bands. The measured gains of the antenna system are 2.3 dBi and 2.4 dBi in GPS and SDMB bands, respectively. The measured axial ratio is slightly shifted frequencies due to diameter variation of via-holes and the spacing between lower patch and upper patch. The proposed low profile, low-cost fabrication, dual circularly polarization, and separated excitation ports make the proposed stacked antenna an applicable solution as a multi-functional antenna for GPS and SDMB operation on vehicle.", "title": "" }, { "docid": "0873dd0181470d722f0efcc8f843eaa6", "text": "Compared to traditional service, the characteristics of the customer behavior in electronic service are personalized demand, convenient consumed circumstance and perceptual consumer behavior. Therefore, customer behavior is an important factor to facilitate online electronic service. The purpose of this study is to explore the key success factors affecting customer purchase intention of electronic service through the behavioral perspectives of customers. Based on the theory of technology acceptance model (TAM) and self service technology (SST), the study proposes a theoretical model for the empirical examination of the customer intention for purchasing electronic services. A comprehensive survey of online customers having e-shopping experiences is undertaken. 
Then this model is tested by means of the statistical analysis method of structure equation model (SEM). The empirical results indicated that perceived usefulness and perceived assurance have a significant impact on purchase in e-service. Discussion and implication are presented in the end.", "title": "" }, { "docid": "c408992e89867e583b8232b18f37edf0", "text": "Fusion of information gathered from multiple sources is essential to build a comprehensive situation picture for autonomous ground vehicles. In this paper, an approach which performs scene parsing and data fusion for a 3D-LIDAR scanner (Velodyne HDL-64E) and a video camera is described. First of all, a geometry segmentation algorithm is proposed for detection of obstacles and ground areas from data collected by the Velodyne scanner. Then, corresponding image collected by the video camera is classified patch by patch into more detailed categories. After that, parsing result of each frame is obtained by fusing result of Velodyne data and that of image using the fuzzy logic inference framework. Finally, parsing results of consecutive frames are smoothed by the Markov random field based temporal fusion method. The proposed approach has been evaluated with datasets collected by our autonomous ground vehicle testbed in both rural and urban areas. The fused results are more reliable than that acquired via analysis of only images or Velodyne data. 2013 Elsevier Inc. All rights reserved.", "title": "" } ]
scidocsrr
8cb51438054b85856ec04ee558fca7ff
Building a Multi-tenant Cloud Service from Legacy Code with Docker Containers
[ { "docid": "0cd74192bd0ec4e8e7a37d7d95179e0a", "text": "Recently, Linux container technology has been gaining attention as it promises to transform the way software is developed and deployed. The portability and ease of deployment makes Linux containers an ideal technology to be used in scientific workflow platforms. Skyport utilizes Docker Linux containers to solve software deployment problems and resource utilization inefficiencies inherent to all existing scientific workflow platforms. As an extension to AWE/Shock, our data analysis platform that provides scalable workflow execution environments for scientific data in the cloud, Skyport greatly reduces the complexity associated with providing the environment necessary to execute complex workflows.", "title": "" } ]
[ { "docid": "1186bb5c96eebc26ce781d45fae7768d", "text": "Essential genes are required for the viability of an organism. Accurate and rapid identification of new essential genes is of substantial theoretical interest to synthetic biology and has practical applications in biomedicine. Fractals provide facilitated access to genetic structure analysis on a different scale. In this study, machine learning-based methods using solely fractal features are presented and the problem of predicting essential genes in bacterial genomes is evaluated. Six fractal features were investigated to learn the parameters of five supervised classification methods for the binary classification task. The optimal parameters of these classifiers are determined via grid-based searching technique. All the currently available identified genes from the database of essential genes were utilized to build the classifiers. The fractal features were proven to be more robust and powerful in the prediction performance. In a statistical sense, the ELM method shows superiority in predicting the essential genes. Non-parameter tests of the average AUC and ACC showed that the fractal feature is much better than other five compared features sets. Our approach is promising and convenient to identify new bacterial essential genes.", "title": "" }, { "docid": "de4c44363fd6bb6da7ec0c9efd752213", "text": "Modeling the structure of coherent texts is a task of great importance in NLP. The task of organizing a given set of sentences into a coherent order has been commonly used to build and evaluate models that understand such structure. In this work we propose an end-to-end neural approach based on the recently proposed set to sequence mapping framework to address the sentence ordering problem. Our model achieves state-of-the-art performance in the order discrimination task on two datasets widely used in the literature. We also consider a new interesting task of ordering abstracts from conference papers and research proposals and demonstrate strong performance against recent methods. Visualizing the sentence representations learned by the model shows that the model has captured high level logical structure in these paragraphs. The model also learns rich semantic sentence representations by learning to order texts, performing comparably to recent unsupervised representation learning methods in the sentence similarity and paraphrase detection tasks.", "title": "" }, { "docid": "afd6d41c0985372a88ff3bb6f91ce5b5", "text": "Following your need to always fulfil the inspiration to obtain everybody is now simple. Connecting to the internet is one of the short cuts to do. There are so many sources that offer and connect us to other world condition. As one of the products to see in internet, this website becomes a very available place to look for countless ggplot2 elegant graphics for data analysis sources. Yeah, sources about the books from countries in the world are provided.", "title": "" }, { "docid": "b0e5885587ab3796fe1ed0490ddda1bd", "text": "BACKGROUND\nEpicanthal deformity is one of the most frequently encountered cosmetic problems in Asian people. Herein, we introduce a new method for correction of epicanthal folds, which always is performed in combination with double eyelidplasty.\n\n\nMETHODS\nFirst, through upper and lower palpebral margin incisions, we release and excise the connective and orbicularis oculi muscle dense fibres underlying the epicanthal folds, as well as the superficial head of the medial canthal ligament. 
After repositioning the medial canthus in a double eyelidplastic procedure, we cut off the redundant skin tissue and close the incisions.\n\n\nRESULTS\n82 epicanthoplasties have been performed during the past 2 years. Follow-up time ranged from 1 to 32 months. Postsurgery scars were invisible in all cases. All patients were satisfied with the results. No recurrence of the epicanthal fold was observed.\n\n\nCONCLUSION\nThe new method introduced has advantages in avoiding scar formation and is an especially suitable approach for epicanthoplasty in Asian patients.", "title": "" }, { "docid": "16dd74e72700ce82502f75054b5c3fe6", "text": "Multiple access (MA) technology is of most importance for 5G. Non-orthogonal multiple access (NOMA) utilizing power domain and advanced receiver has been considered as a promising candidate MA technology recently. In this paper, the NOMA concept is presented toward future enhancements of spectrum efficiency in lower frequency bands for downlink of 5G system. Key component technologies of NOMA are presented and discussed including multiuser transmission power allocation, scheduling algorithm, receiver design and combination of NOMA with multi-antenna technology. The performance gains of NOMA are evaluated by system-level simulations with very practical assumptions. Under multiple configurations and setups, the achievable system-level gains of NOMA are shown promising even when practical considerations were taken into account.", "title": "" }, { "docid": "46674077de97f82bc543f4e8c0a8243a", "text": "Recently, multiple formulations of vision problems as probabilistic inversions of generative models based on computer graphics have been proposed. However, applications to 3D perception from natural images have focused on low-dimensional latent scenes, due to challenges in both modeling and inference. Accounting for the enormous variability in 3D object shape and 2D appearance via realistic generative models seems intractable, as does inverting even simple versions of the many-tomany computations that link 3D scenes to 2D images. This paper proposes and evaluates an approach that addresses key aspects of both these challenges. We show that it is possible to solve challenging, real-world 3D vision problems by approximate inference in generative models for images based on rendering the outputs of probabilistic CAD (PCAD) programs. Our PCAD object geometry priors generate deformable 3D meshes corresponding to plausible objects and apply affine transformations to place them in a scene. Image likelihoods are based on similarity in a feature space based on standard mid-level image representations from the vision literature. Our inference algorithm integrates single-site and locally blocked Metropolis-Hastings proposals, Hamiltonian Monte Carlo and discriminative datadriven proposals learned from training data generated from our models. We apply this approach to 3D human pose estimation and object shape reconstruction from single images, achieving quantitative and qualitative performance improvements over state-of-the-art baselines.", "title": "" }, { "docid": "393513f676132d333bb1ebff884da7b7", "text": "This paper reports an investigation of some methods for isolating, or segmenting, characters during the reading of machineprinted text by optical character recognition systems. 
Two new segmentation algorithms using feature extraction techniques are presented; both are intended for use in the recognition of machine-printed lines of 10-, 11- and 12-pitch serif-type multifont characters. One of the methods, called quasi-topological segmentation, bases the decision to “section” a character on a combination of feature-extraction and character-width measurements. The other method, topological segmentation, involves feature extraction alone. The algorithms have been tested with an evaluation method that is independent of any particular recognition system. Test results are based on application of the algorithm to upper-case alphanumeric characters gathered from print sources that represent the existing world of machine printing. The topological approach demonstrated better performance on the test data than did the quasi-topological approach. Introduction When character recognition systems are structured to recognize one character at a time, some means must be provided to divide the incoming data stream into segments that define the beginning and end of each character. Writing about this aspect of pattern recognition in his review article, G. Nagy [1] stated that “object isolation is all too often ignored in laboratory studies. Yet touching characters are responsible for the majority of errors in the automatic reading of both machine-printed and hand-printed text. . . . ” The importance of the touching-character problem in the design of practical character recognition machines motivated the laboratory study reported in this paper. We present two new algorithms for separating upper-case serif characters, develop a general philosophy for evaluating the effectiveness of segmentation algorithms, and evaluate the performance of our algorithms when they are applied to 10-, 11- and 12-pitch alphanumeric characters. The segmentation algorithms were developed specifically for potential use with recognition systems that use a raster-type scanner to produce an analog video signal that is digitized before presentation of the data to the recognition logic. The raster is assumed to move from right to left across a line of printed characters and to make approximately 20 vertical scans per character. This approach to recognition technology is the one most commonly used in IBM’s current optical character recognition machines. A paper on the IBM 1975 Optical Page Reader [2] gives one example of how the approach has been implemented. Other approaches to recognition technology may not require that decisions be made to identify the beginning and end of characters. Nevertheless, the performance of any recognition system is affected by the presence of touching characters and the design of recognition algorithms must take the problem into account (see Clayden, Clowes and Parks [3]). Simple character recognition systems of the type we are concerned with perform segmentation by requiring that bit patterns of characters be separated by scans containing no “black” bits. However, this method is rarely adequate to separate characters printed in the common business-machine and typewriter fonts. These fonts, after all, were not designed with machine recognition in mind; but they are nevertheless the fonts it is most desirable for a machine to be able to recognize. In the 12-pitch, serif-type fonts examined for the present study, up to 35 percent of the segments occurred not at blank scans, but within touching character pairs. 
153 SEGMENTATION ALGORITHMS MARCH 1971", "title": "" }, { "docid": "580bdf8197e94c5bc82bc52bcc7cf6c7", "text": "This article reports a theoretical and experimental attempt to relate and contrast 2 traditionally separate research programs: inattentional blindness and attention capture. Inattentional blindness refers to failures to notice unexpected objects and events when attention is otherwise engaged. Attention capture research has traditionally used implicit indices (e.g., response times) to investigate automatic shifts of attention. Because attention capture usually measures performance whereas inattentional blindness measures awareness, the 2 fields have existed side by side with no shared theoretical framework. Here, the authors propose a theoretical unification, adapting several important effects from the attention capture literature to the context of sustained inattentional blindness. Although some stimulus properties can influence noticing of unexpected objects, the most influential factor affecting noticing is a person's own attentional goals. The authors conclude that many--but not all--aspects of attention capture apply to inattentional blindness but that these 2 classes of phenomena remain importantly distinct.", "title": "" }, { "docid": "47aec03cf18dc3abd4d46ee017f25a16", "text": "Cues of phenotypic condition should be among those used by women in their choice of mates. One marker of better phenotypic condition is thought to be symmetrical bilateral body and facial features. However, it is not clear whether women use symmetry as the primary cue in assessing the phenotypic quality of potential mates or whether symmetry is correlated with other facial markers affecting physical attractiveness. Using photographs of men's faces, for which facial symmetry had been measured, we found a relationship between women's attractiveness ratings of these faces and symmetry, but the subjects could not rate facial symmetry accurately. Moreover, the relationship between facial attractiveness and symmetry was still observed, even when symmetry cues were removed by presenting only the left or right half of faces. These results suggest that attractive features other than symmetry can be used to assess phenotypic condition. We identified one such cue, facial masculinity (cheek-bone prominence and a relatively longer lower face), which was related to both symmetry and full- and half-face attractiveness.", "title": "" }, { "docid": "f6bd54cb95a95e15496479acc8559b06", "text": "We describe the third generation of the CAP sequence assembly program. The CAP3 program includes a number of improvements and new features. The program has a capability to clip 5' and 3' low-quality regions of reads. It uses base quality values in computation of overlaps between reads, construction of multiple sequence alignments of reads, and generation of consensus sequences. The program also uses forward-reverse constraints to correct assembly errors and link contigs. Results of CAP3 on four BAC data sets are presented. The performance of CAP3 was compared with that of PHRAP on a number of BAC data sets. PHRAP often produces longer contigs than CAP3 whereas CAP3 often produces fewer errors in consensus sequences than PHRAP. 
It is easier to construct scaffolds with CAP3 than with PHRAP on low-pass data with forward-reverse constraints.", "title": "" }, { "docid": "9a0530ae13507d14b66ee74ec05c43bd", "text": "The paper investigates the role of the government and self-regulatory reputation mechanisms to internalise externalities of market operation. If it pays off for companies to invest in a good reputation by an active policy of corporate social responsibility (CSR), external effects of the market will be (partly) internalised by the market itself. The strength of the reputation mechanism depends on the functioning of non governmental organisations (NGOs), the transparency of the company, the time horizon of the company, and on the behaviour of employees, consumers and investors. On the basis of an extensive study of the empirical literature on these topics, we conclude that in general the working of the reputation mechanism is rather weak. Especially the transparency of companies is a bottleneck. If the government would force companies to be more transparent, it could initiate a self-enforcing spiral that would improve the working of the reputation mechanism. We also argue that the working of the reputation mechanism will be weaker for smaller companies and for both highly competitive and monopolistic markets. We therefore conclude that government regulation is still necessary, especially for small companies. Tijdschrift voor Economie en Management Vol. XLIX, 2, 2004", "title": "" }, { "docid": "1167ab5a79d1c29adcf90e2b0c28a79e", "text": "Prior research has shown that within a racial category, people with more Afrocentric facial features are presumed more likely to have traits that are stereotypic of Black Americans compared with people with less Afrocentric features. The present study investigated whether this form of feature-based stereotyping might be observed in criminal-sentencing decisions. Analysis of a random sample of inmate records showed that Black and White inmates, given equivalent criminal histories, received roughly equivalent sentences. However, within each race, inmates with more Afrocentric features received harsher sentences than those with less Afrocentric features. These results are consistent with laboratory findings, and they suggest that although racial stereotyping as a function of racial category has been successfully removed from sentencing decisions, racial stereotyping based on the facial features of the offender is a form of bias that is largely overlooked.", "title": "" }, { "docid": "6e17362c0e6a4d3190b3c8b0a11d6844", "text": "A transimpedance amplifier (TIA) has been designed in a 0.35 μm digital CMOS technology for Gigabit Ethernet. It is based on the structure proposed by Mengxiong Li [1]. This paper presents an amplifier which exploits the regulated cascode (RGC) configuration as the input stage with an integrated optical receiver which consists of an integrated photodetector, thus achieving as large effective input transconductance as that of Si Bipolar or GaAs MESFET. The RGC input configuration isolates the input parasitic capacitance including photodiode capacitance from the bandwidth determination better than common-gate TIA. A series inductive peaking is used for enhancing the bandwidth. The proposed TIA has transimpedance gain of 51.56 dBΩ, and 3-dB bandwidth of 6.57 GHz with two inductor between the RGC and source follower for 0.1 pF photodiode capacitance. 
The proposed TIA has an input current noise level of about 21.57 pA/Hz^0.5 and it consumes DC power of 16 mW from 3.3 V supply voltage.", "title": "" }, { "docid": "b4d92c6573f587c60d135b8fa579aade", "text": "Knowing the structure of criminal and terrorist networks could provide the technical insight needed to disrupt their activities.", "title": "" }, { "docid": "37c35b782bb80d2324749fc71089c445", "text": "Predicting the stock market is considered to be a very difficult task due to its non-linear and dynamic nature. Our proposed system is designed in such a way that even a layman can use it. It reduces the burden on the user. The user’s job is to give only the recent closing prices of a stock as input and the proposed Recommender system will instruct him when to buy and when to sell if it is profitable or not to buy share in case if it is not profitable to do trading. Using soft computing based techniques is considered to be more suitable for predicting trends in stock market where the data is chaotic and large in number. The soft computing based systems are capable of extracting relevant information from large sets of data by discovering hidden patterns in the data. Here regression trees are used for dimensionality reduction and clustering is done with the help of Self Organizing Maps (SOM). The proposed system is designed to assist stock market investors identify possible profit-making opportunities and also help in developing a better understanding on how to extract the relevant information from stock price data. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "436369a1187f436290ae9b61f3e9ee1e", "text": "In this paper we propose a sub-band energy based end-of-utterance algorithm that is capable of detecting the time instant when the user has stopped speaking. The proposed algorithm finds the time instant at which many enough sub-band spectral energy trajectories fall and stay for a pre-defined fixed time below adaptive thresholds, i.e. a non-speech period is detected after the end of the utterance. With the proposed algorithm a practical speech recognition system can give timely feedback for the user, thereby making the behaviour of the speech recognition system more predictable and similar across different usage environments and noise conditions. The proposed algorithm is shown to be more accurate and noise robust than the previously proposed approaches. Experiments with both isolated command word recognition and continuous digit recognition in various noise conditions verify the viability of the proposed approach with an average proper end-of-utterance detection rate of around 94% in both cases, representing 43% error rate reduction over the most competitive previously published method.", "title": "" }, { "docid": "af26f31ccb047f15c3c9e1999d305f01", "text": "More and more images have been generated in digital form around the world. There is a growing interest in finding images in large collections or from remote databases. In order to find an image, the image has to be described or represented by certain features. Shape is an important visual feature of an image. Searching for images using shape features has attracted much attention. There are many shape representation and description techniques in the literature. In this paper, we classify and review these important techniques. We examine implementation procedures for each technique and discuss its advantages and disadvantages. Some recent research results are also included and discussed in this paper. 
Finally, we identify some promising techniques for image retrieval according to standard principles. ? 2003 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "590cf6884af6223ce4e827ba2fe18209", "text": "1. The extracellular patch clamp method, which first allowed the detection of single channel currents in biological membranes, has been further refined to enable higher current resolution, direct membrane patch potential control, and physical isolation of membrane patches. 2. A description of a convenient method for the fabrication of patch recording pipettes is given together with procedures followed to achieve giga-seals i.e. pipettemembrane seals with resistances of 109–1011Ω. 3. The basic patch clamp recording circuit, and designs for improved frequency response are described along with the present limitations in recording the currents from single channels. 4. Procedures for preparation and recording from three representative cell types are given. Some properties of single acetylcholine-activated channels in muscle membrane are described to illustrate the improved current and time resolution achieved with giga-seals. 5. A description is given of the various ways that patches of membrane can be physically isolated from cells. This isolation enables the recording of single channel currents with well-defined solutions on both sides of the membrane. Two types of isolated cell-free patch configurations can be formed: an inside-out patch with its cytoplasmic membrane face exposed to the bath solution, and an outside-out patch with its extracellular membrane face exposed to the bath solution. 6. The application of the method for the recording of ionic currents and internal dialysis of small cells is considered. Single channel resolution can be achieved when recording from whole cells, if the cell diameter is small (<20μm). 7. The wide range of cell types amenable to giga-seal formation is discussed. The extracellular patch clamp method, which first allowed the detection of single channel currents in biological membranes, has been further refined to enable higher current resolution, direct membrane patch potential control, and physical isolation of membrane patches. A description of a convenient method for the fabrication of patch recording pipettes is given together with procedures followed to achieve giga-seals i.e. pipettemembrane seals with resistances of 109–1011Ω. The basic patch clamp recording circuit, and designs for improved frequency response are described along with the present limitations in recording the currents from single channels. Procedures for preparation and recording from three representative cell types are given. Some properties of single acetylcholine-activated channels in muscle membrane are described to illustrate the improved current and time resolution achieved with giga-seals. A description is given of the various ways that patches of membrane can be physically isolated from cells. This isolation enables the recording of single channel currents with well-defined solutions on both sides of the membrane. Two types of isolated cell-free patch configurations can be formed: an inside-out patch with its cytoplasmic membrane face exposed to the bath solution, and an outside-out patch with its extracellular membrane face exposed to the bath solution. The application of the method for the recording of ionic currents and internal dialysis of small cells is considered. 
Single channel resolution can be achieved when recording from whole cells, if the cell diameter is small (<20μm). The wide range of cell types amenable to giga-seal formation is discussed.", "title": "" }, { "docid": "d9176322068e6ca207ae913b1164b3da", "text": "Topic Detection and Tracking (TDT) is a variant of classification in which the classes are not known or fixed in advance. Consider for example an incoming stream of news articles or email messages that are to be classified by topic; new classes must be created as new topics arise. The problem is a challenging one for machine learning. Instances of new topics must be recognized as not belonging to any of the existing classes (detection), and instances of old topics must be correctly classified (tracking), often with extremely little training data per class. This paper proposes a new approach to TDT based on probabilistic, generative models. Strong statistical techniques are used to address the many challenges: hierarchical shrinkage for sparse data, statistical “garbage collection” for new event detection, clustering in time to separate the different events of a common topic, and deterministic annealing for creating the hierarchy. Preliminary experimental results show promise.", "title": "" }, { "docid": "43de53a8c215d7b3ecf6252253abe3ed", "text": "Semantic mapping is a very active and growing research area, with important applications in indoor and outdoor robotic applications. However, most of the research on semantic mapping has focused on indoor mapping and there is a need for developing semantic mapping methodologies for large-scale outdoor scenarios. In this work, a novel semantic mapping methodology for large-scale outdoor scenes in autonomous off-road driving applications is proposed. The semantic map representation consists of a large-scale topological map built using semantic image information. Thus, the proposed representation aims to solve the large-scale outdoors semantic mapping problem by using a graph based topological map, where relevant information for autonomous driving is added using semantic information from the image description. As a proof of concept, the proposed methodology is applied to the semantic map building of a real outdoor scenario.", "title": "" } ]
scidocsrr
444d7ac9b90d073f01a2feb8e931d7e4
EigenRank: a ranking-oriented approach to collaborative filtering
[ { "docid": "f7bddfb1142605fd6c3a784f454f81eb", "text": "Although the interest of a Web page is strictly related to its content and to the subjective readers' cultural background, a measure of the page authority can be provided that only depends on the topological structure of the Web. PageRank is a noticeable way to attach a score to Web pages on the basis of the Web connectivity. In this article, we look inside PageRank to disclose its fundamental properties concerning stability, complexity of computational scheme, and critical role of parameters involved in the computation. Moreover, we introduce a circuit analysis that allows us to understand the distribution of the page score, the way different Web communities interact each other, the role of dangling pages (pages with no outlinks), and the secrets for promotion of Web pages.", "title": "" } ]
[ { "docid": "00309acd08acb526f58a70ead2d99249", "text": "As mainstream news media and political campaigns start to pay attention to the political discourse online, a systematic analysis of political speech in social media becomes more critical. What exactly do people say on these sites, and how useful is this data in estimating political popularity? In this study we examine Twitter discussions surrounding seven US Republican politicians who were running for the US Presidential nomination in 2011. We show this largely negative rhetoric to be laced with sarcasm and humor and dominated by a small portion of users. Furthermore, we show that using out-of-the-box classification tools results in a poor performance, and instead develop a highly optimized multi-stage approach designed for general-purpose political sentiment classification. Finally, we compare the change in sentiment detected in our dataset before and after 19 Republican debates, concluding that, at least in this case, the Twitter political chatter is not indicative of national political polls.", "title": "" }, { "docid": "044f50e702877b40d225ba63c49a674a", "text": "Purpose – The purpose of this paper is to examine the impact of some organizational information technology (IT) factors (i.e. IT assets, employees’ IT skills, IT resources, and satisfaction with legacy IT systems) and their interacting effects with two contingency factors (i.e. organization’s size and structure) on enterprise resource planning (ERP) system success. Design/methodology/approach – Surveys were conducted in two European countries. Respondents came from diverse, private, and industrial organizations. Relevant hypotheses were developed and tested using a structural equation modeling technique. Findings – The analysis supported – partially or fully – six of the eight hypotheses formulated. For example, the data indicated strong positive relationships between IT assets and IT resources, on the one hand, and ERP success, on the other. Organization’s size and structure were also found to be moderators in some of the relationships. Also, the analysis revealed that satisfaction with legacy IT systems increased with ERP success, which was an unexpected finding. Originality/value – This study contributes to the literature, being among the few to investigate the effects of organizational IT factors and their interacting effects with relevant contingency factors in the context of ERP system success. Methodologically, the study utilized a “non-deterministic” model to facilitate deeper insights into the effects of variables.", "title": "" }, { "docid": "0dc1bf3422e69283a93d0dd87caeb84f", "text": "Organizations are increasingly recognizing that user satisfaction with information systems is one of the most important determinants of the success of those systems. However, current satisfaction measures involve an intrusion into the users' worlds, and are frequently deemed to be too cumbersome to be justi®ed ®nancially and practically. This paper describes a methodology designed to solve this contemporary problem. Based on theory which suggests that behavioral observations can be used to measure satisfaction, system usage statistics from an information system were captured around the clock for 6 months to determine users' satisfaction with the system. A traditional satisfaction evaluation instrument, a validated survey, was applied in parallel, to verify that the analysis of the behavioral data yielded similar results. 
The ®nal results were analyzed statistically to demonstrate that behavioral analysis is a viable alternative to the survey in satisfaction measurement. # 1999 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "51006272066adbd5e12991bf15358ef3", "text": "Susceptibility tensor imaging (STI) is a recently developed MRI technique that allows quantitative determination of orientation-independent magnetic susceptibility parameters from the dependence of gradient echo signal phase on the orientation of biological tissues with respect to the main magnetic field. By modeling the magnetic susceptibility of each voxel as a symmetric rank-2 tensor, individual magnetic susceptibility tensor elements as well as the mean magnetic susceptibility and magnetic susceptibility anisotropy can be determined for brain tissues that would still show orientation dependence after conventional scalar-based quantitative susceptibility mapping to remove such dependence. Similar to diffusion tensor imaging, STI allows mapping of brain white matter fiber orientations and reconstruction of 3D white matter pathways using the principal eigenvectors of the susceptibility tensor. In contrast to diffusion anisotropy, the main determinant factor of the susceptibility anisotropy in brain white matter is myelin. Another unique feature of the susceptibility anisotropy of white matter is its sensitivity to gadolinium-based contrast agents. Mechanistically, MRI-observed susceptibility anisotropy is mainly attributed to the highly ordered lipid molecules in the myelin sheath. STI provides a consistent interpretation of the dependence of phase and susceptibility on orientation at multiple scales. This article reviews the key experimental findings and physical theories that led to the development of STI, its practical implementations, and its applications for brain research. Copyright © 2016 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "92ec1f93124ddfa1faa1d7a3ab371935", "text": "We introduce a novel evolutionary algorithm (EA) with a semantic network-based representation. For enabling this, we establish new formulations of EA variation operators, crossover and mutation, that we adapt to work on semantic networks. The algorithm employs commonsense reasoning to ensure all operations preserve the meaningfulness of the networks, using ConceptNet and WordNet knowledge bases. The algorithm can be classified as a novel memetic algorithm (MA), given that (1) individuals represent pieces of information that undergo evolution, as in the original sense of memetics as it was introduced by Dawkins; and (2) this is different from existing MA, where the word “memetic” has been used as a synonym for local refinement after global optimization. For evaluating the approach, we introduce an analogical similarity-based fitness measure that is computed through structure mapping. This setup enables the open-ended generation of networks analogous to a given base network.", "title": "" }, { "docid": "7e459967f93c4cf0b432717aa41201e1", "text": "Paper describes the development of prototype that enables monitoring of heart rate and inter beat interval for several subjects. The prototype was realized using ESP8266 hardware modules, WebSocket library, nodejs and JavaScript. System architecture is described where nodejs server acts as the signal processing and GUI code provider for clients. Signal processing algorithm was implemented in JavaScript. Application GUI is presented which can be used on mobile devices. 
Several important parts of the code are described which illustrate the communication between ESP8266 modules, server and clients. Developed prototype shows one of the possible realizations of group monitoring of biomedical data.", "title": "" }, { "docid": "a3d10348d5f6e51fefb3f642098be32e", "text": "We propose a Convolutional Neural Network (CNN) based algorithm – StuffNet – for object detection. In addition to the standard convolutional features trained for region proposal and object detection [33], StuffNet uses convolutional features trained for segmentation of objects and 'stuff' (amorphous categories such as ground and water). Through experiments on Pascal VOC 2010, we show the importance of features learnt from stuff segmentation for improving object detection performance. StuffNet improves performance from 18.8% mAP to 23.9% mAP for small objects. We also devise a method to train StuffNet on datasets that do not have stuff segmentation labels. Through experiments on Pascal VOC 2007 and 2012, we demonstrate the effectiveness of this method and show that StuffNet also significantly improves object detection performance on such datasets.", "title": "" }, { "docid": "6cbdfa5b3cf8d64a9e62f8e0c9bc26aa", "text": "In this paper, a novel approach to video temporal decomposition into semantic units, termed scenes, is presented. In contrast to previous temporal segmentation approaches that employ mostly low-level visual or audiovisual features, we introduce a technique that jointly exploits low-level and high-level features automatically extracted from the visual and the auditory channel. This technique is built upon the well-known method of the scene transition graph (STG), first by introducing a new STG approximation that features reduced computational cost, and then by extending the unimodal STG-based temporal segmentation technique to a method for multimodal scene segmentation. The latter exploits, among others, the results of a large number of TRECVID-type trained visual concept detectors and audio event detectors, and is based on a probabilistic merging process that combines multiple individual STGs while at the same time diminishing the need for selecting and fine-tuning several STG construction parameters. The proposed approach is evaluated on three test datasets, comprising TRECVID documentary films, movies, and news-related videos, respectively. The experimental results demonstrate the improved performance of the proposed approach in comparison to other unimodal and multimodal techniques of the relevant literature and highlight the contribution of high-level audiovisual features toward improved video segmentation to scenes.", "title": "" }, { "docid": "d8a0fec69df5f8eeb2bb8e82484b8ac7", "text": "Traditionally, Information and Communication Technology (ICT) “has been segregated from the normal teaching classroom” [12], e.g. in computer labs. This has been changed with the advent of smaller devices like iPads. There is a shift from separating ICT and education to co-located settings in which digital technology becomes part of the classroom. This paper presents the results from a study about exploring digital didactical designs using iPads applied by teachers in schools. Classroom observations and interviews in iPad-classrooms in Danish schools have been done with the aim to provide empirical evidence on the co-evolutionary design of both, didactical designs and iPads. 
The Danish community Odder has 7 schools where around 200 teachers and 2,000 students aged 6-16 use iPads in a 1:1 iPad-program. Three key aspects could be explored: The teachers’ digital didactical designs embrace a) new learning goals where more than one correct answer exists, b) focus on producing knowledge in informal-in-formal learning spaces, c) making learning visible in different products (text, comics, podcasts etc.). The results show the necessity of rethinking traditional Didaktik towards Digital Didactics.", "title": "" }, { "docid": "05a95b62601fe8b31e0996f065b98b52", "text": "A method for efficiently constructing polar codes is presented and analyzed. Although polar codes are explicitly defined, straightforward construction is intractable since the resulting polar bit-channels have an output alphabet that grows exponentially with the code length. Thus, the core problem that needs to be solved is that of faithfully approximating a bit-channel with an intractably large alphabet by another channel having a manageable alphabet size. We devise two approximation methods which “sandwich” the original bit-channel between a degraded and an upgraded version thereof. Both approximations can be efficiently computed and turn out to be extremely close in practice. We also provide theoretical analysis of our construction algorithms, proving that for any fixed ε > 0 and all sufficiently large code lengths n, polar codes whose rate is within ε of channel capacity can be constructed in time and space that are both linear in n.", "title": "" }, { "docid": "70fd543752f17237386b3f8e99954230", "text": "Using Markov logic to integrate logical and distributional information in natural-language semantics results in complex inference problems involving long, complicated formulae. Current inference methods for Markov logic are ineffective on such problems. To address this problem, we propose a new inference algorithm based on SampleSearch that computes probabilities of complete formulae rather than ground atoms. We also introduce a modified closed-world assumption that significantly reduces the size of the ground network, thereby making inference feasible. Our approach is evaluated on the recognizing textual entailment task, and experiments demonstrate its dramatic impact on the efficiency", "title": "" }, { "docid": "a094547d8ec7653b6f2754f0add1cfa3", "text": "We propose a new algorithm, Mean Actor-Critic (MAC), for discrete-action continuous-state reinforcement learning. MAC is a policy gradient algorithm that uses the agent’s explicit representation of all action values to estimate the gradient of the policy, rather than using only the actions that were actually executed. This significantly reduces variance in the gradient updates and removes the need for a variance reduction baseline. We show empirical results on two control domains where MAC performs as well as or better than other policy gradient approaches, and on five Atari games, where MAC is competitive with state-of-the-art policy search algorithms.", "title": "" }, { "docid": "297be5975e8ef2e687a8b905b21b9951", "text": "This paper considers transmit antenna selection (TAS) and receive generalized selection combining (GSC) for secure communication in the multiple-input-multiple-output wiretap channel, where confidential messages transmitted from an NA-antenna transmitter to an NB-antenna legitimate receiver are overheard by an NE-antenna eavesdropper. 
We assume that the main channel and the eavesdropper's channel undergo Nakagami-m fading with fading parameters mB and mE, respectively. In order to assess the secrecy performance, we present a new unifying framework for the average secrecy rate and the secrecy outage probability. We first derive expressions for the probability density function and the cumulative distribution function of the signal-to-noise ratio with TAS/GSC, from which we derive exact expressions for the average secrecy rate and the secrecy outage probability. We then derive compact expressions for the asymptotic average secrecy rate and the asymptotic secrecy outage probability for two distinct scenarios: 1) the legitimate receiver is located close to the transmitter, and 2) the legitimate receiver and the eavesdropper are located close to the transmitter. For these scenarios, we present new closed-form expressions for several key performance indicators: 1) the capacity slope and the power offset of the asymptotic average secrecy rate, and 2) the secrecy diversity order and the secrecy array gain of the asymptotic secrecy outage probability. For the first scenario, we confirm that the capacity slope is one and the secrecy diversity order is mBNBNA. For the second scenario, we confirm that the capacity slope and the secrecy diversity order collapse to zero.", "title": "" }, { "docid": "316e771f85676bdf85dfce1e4ea3eaa8", "text": "Stream processing is important for continuously transforming and analyzing the deluge of data that has revolutionized our world. Given the diversity of application domains, streaming applications must be both easy to write and performant. Both goals can be accomplished by high-level programming languages. Dedicated language syntax helps express stream programs clearly and concisely, whereas the compiler and runtime system of the language help optimize runtime performance. This paper describes the language runtime for the IBM Streams Processing Language (SPL) used to program the distributed IBM Streams platform. It gives a system overview and explains several language-based optimizations implemented in the SPL runtime: fusion, thread placement, fission, and transport optimizations.", "title": "" }, { "docid": "f64896f0eaf5becb7d9099c327bd6a59", "text": "Device-free gesture tracking is an enabling HCI mechanism for small wearable devices because fingers are too big to control the GUI elements on such small screens, and it is also an important HCI mechanism for medium-to-large size mobile devices because it allows users to provide input without blocking screen view. In this paper, we propose LLAP, a device-free gesture tracking scheme that can be deployed on existing mobile devices as software, without any hardware modification. We use speakers and microphones that already exist on most mobile devices to perform device-free tracking of a hand/finger. The key idea is to use acoustic phase to get fine-grained movement direction and movement distance measurements. LLAP first extracts the sound signal reflected by the moving hand/finger after removing the background sound signals that are relatively consistent over time. LLAP then measures the phase changes of the sound signals caused by hand/finger movements and then converts the phase changes into the distance of the movement. We implemented and evaluated LLAP using commercial-off-the-shelf mobile phones. For 1-D hand movement and 2-D drawing in the air, LLAP has a tracking accuracy of 3.5 mm and 4.6 mm, respectively. 
Using gesture traces tracked by LLAP, we can recognize the characters and short words drawn in the air with an accuracy of 92.3% and 91.2%, respectively.", "title": "" }, { "docid": "a58ede53f0f2452e60528d5a470c0d7e", "text": "Background. Controversies still prevail as to how exactly epigastric hernia occurs. Both the vascular lacunae hypothesis and the tendinous fibre decussation hypothesis have proved to be widely accepted as possible explanations for the etiology. Patient. We present a patient who suffered from early-onset epigastric hernia. Conclusions. We believe the identification of the ligamentum teres and its accompanying vessel at its fascial defect supports the vascular lacunae hypothesis. However, to further our understanding, biopsy of the linea alba in patients with epigastric hernias is indicated.", "title": "" }, { "docid": "6f5a3f7ddb99eee445d342e6235280c3", "text": "Although aesthetic experiences are frequent in modern life, there is as of yet no scientifically comprehensive theory that explains what psychologically constitutes such experiences. These experiences are particularly interesting because of their hedonic properties and the possibility to provide self-rewarding cognitive operations. We shall explain why modern art's large number of individualized styles, innovativeness and conceptuality offer positive aesthetic experiences. Moreover, the challenge of art is mainly driven by a need for understanding. Cognitive challenges of both abstract art and other conceptual, complex and multidimensional stimuli require an extension of previous approaches to empirical aesthetics. We present an information-processing stage model of aesthetic processing. According to the model, aesthetic experiences involve five stages: perception, explicit classification, implicit classification, cognitive mastering and evaluation. The model differentiates between aesthetic emotion and aesthetic judgments as two types of output.", "title": "" }, { "docid": "5441c49359d4446a51cea3f13991a7dc", "text": "Nowadays, smart composite materials embed miniaturized sensors for structural health monitoring (SHM) in order to mitigate the risk of failure due to an overload or to unwanted inhomogeneity resulting from the fabrication process. Optical fiber sensors, and more particularly fiber Bragg grating (FBG) sensors, outperform traditional sensor technologies, as they are lightweight, small in size and offer convenient multiplexing capabilities with remote operation. They have thus been extensively associated to composite materials to study their behavior for further SHM purposes. This paper reviews the main challenges arising from the use of FBGs in composite materials. The focus will be made on issues related to temperature-strain discrimination, demodulation of the amplitude spectrum during and after the curing process as well as connection between the embedded optical fibers and the surroundings. The main strategies developed in each of these three topics will be summarized and compared, demonstrating the large progress that has been made in this field in the past few years.", "title": "" }, { "docid": "af2779ab87ff707d51e735977a4fa0e2", "text": "The increasing availability of large motion databases, in addition to advancements in motion synthesis, has made motion indexing and classification essential for better motion composition. 
However, in order to achieve good connectivity in motion graphs, it is important to understand human behaviour; human movement though is complex and difficult to completely describe. In this paper, we investigate the similarities between various emotional states with regards to the arousal and valence of the Russell’s circumplex model. We use a variety of features that encode, in addition to the raw geometry, stylistic characteristics of motion based on Laban Movement Analysis (LMA). Motion capture data from acted dance performances were used for training and classification purposes. The experimental results show that the proposed features can partially extract the LMA components, providing a representative space for indexing and classification of dance movements with regards to the emotion. This work contributes to the understanding of human behaviour and actions, providing insights on how people express emotional states using their body, while the proposed features can be used as complement to the standard motion similarity, synthesis and classification methods.", "title": "" }, { "docid": "db158f806e56a1aae74aae15252703d2", "text": "Despite achieving impressive performance, state-of-the-art classifiers remain highly vulnerable to small, imperceptible, adversarial perturbations. This vulnerability has proven empirically to be very intricate to address. In this paper, we study the phenomenon of adversarial perturbations under the assumption that the data is generated with a smooth generative model. We derive fundamental upper bounds on the robustness to perturbations of any classification function, and prove the existence of adversarial perturbations that transfer well across different classifiers with small risk. Our analysis of the robustness also provides insights onto key properties of generative models, such as their smoothness and dimensionality of latent space. We conclude with numerical experimental results showing that our bounds provide informative baselines to the maximal achievable robustness on several datasets.", "title": "" } ]
scidocsrr
e7039d49d9949422b44e0a2def7834e2
Automatic Transcription of Guitar Chords and Fingering From Audio
[ { "docid": "e8933b0afcd695e492d5ddd9f87aeb81", "text": "This article proposes a method for the automatic transcription of the melody, bass line, and chords in polyphonic pop music. The method uses a frame-wise pitch-salience estimator as a feature extraction front-end. For the melody and bass-line transcription, this is followed by acoustic modeling of note events and musicological modeling of note transitions. The acoustic models include a model for the target notes (i.e., melody or bass notes) and a background model. The musicological model involves key estimation and note bigrams that determine probabilities for transitions between target notes. A transcription of the melody or the bass line is obtained using Viterbi search via the target and the background note models. The performance of the melody and the bass-line transcription is evaluated using approximately 8.5 hours of realistic polyphonic music. The chord transcription maps the pitch salience estimates to a pitch-class representation and uses trained chord models and chord-transition probabilities to produce a transcription consisting of major and minor triads. For chords, the evaluation material consists of the first eight Beatles albums. The method is computationally efficient and allows causal implementation, so it can process streaming audio. Transcription of music refers to the analysis of an acoustic music signal for producing a parametric representation of the signal. The representation may be a music score with a meticulous arrangement for each instrument or an approximate description of melody and chords in the piece, for example. The latter type of transcription is commonly used in commercial songbooks of pop music and is usually sufficient for musicians or music hobbyists to play the piece. On the other hand, more detailed transcriptions are often employed in classical music to preserve the exact arrangement of the composer.", "title": "" } ]
[ { "docid": "63c62168e217ed4c50cf5dba6a187722", "text": "Statistics is an important part in big data because many statistical methods are used for big data analysis. The aim of statistics is to estimate population using the sample extracted from the population, so statistics is to analyze not the population but the sample. But in big data environment, we can get the big data set closed to the population by the advanced computing systems such as cloud computing and high-speed internet. According to the circumstances, we can analyze entire part of big data like the population of statistics. But we may be impossible to analyze the entire data because of its huge data volume. So, in this paper, we propose a new analytical methodology for big data analysis in regression problem for reducing the computing burden. We call this a divided regression analysis. To verify the performance of our divided regression model, we carry out experiment and simulation.", "title": "" }, { "docid": "1184260e77b2f6eaab97c0b9e2a43afc", "text": "In pervasive and ubiquitous computing systems, human activity recognition has immense potential in a large number of application domains. Current activity recognition techniques (i) do not handle variations in sequence, concurrency and interleaving of complex activities; (ii) do not incorporate context; and (iii) require large amounts of training data. There is a lack of a unifying theoretical framework which exploits both domain knowledge and data-driven observations to infer complex activities. In this article, we propose, develop and validate a novel Context-Driven Activity Theory (CDAT) for recognizing complex activities. We develop a mechanism using probabilistic and Markov chain analysis to discover complex activity signatures and generate complex activity definitions. We also develop a Complex Activity Recognition (CAR) algorithm. It achieves an overall accuracy of 95.73% using extensive experimentation with real-life test data. CDAT utilizes context and links complex activities to situations, which reduces inference time by 32.5% and also reduces training data by 66%.", "title": "" }, { "docid": "1cf07400a152ea6bfac75c75bfb1eb7b", "text": "Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.", "title": "" }, { "docid": "bdfa9a484a2bca304c0a8bbd6dcd7f1a", "text": "We present a multilingual Named Entity Recognition approach based on a robust and general set of features across languages and datasets. Our system combines shallow local information with clustering semi-supervised features induced on large amounts of unlabeled text. Understanding via empirical experimentation how to effectively combine various types of clustering features allows us to seamlessly export our system to other datasets and languages. 
The result is a simple but highly competitive system which obtains state of the art results across five languages and twelve datasets. The results are reported on standard shared task evaluation data such as CoNLL for English, Spanish and Dutch. Furthermore, and despite the lack of linguistically motivated features, we also report best results for languages such as Basque and German. In addition, we demonstrate that our method also obtains very competitive results even when the amount of supervised data is cut by half, alleviating the dependency on manually annotated data. Finally, the results show that our emphasis on clustering features is crucial to develop robust out-of-domain models. The system and models are freely available to facilitate its use and guarantee the reproducibility of results.", "title": "" }, { "docid": "b8c5aa7628cf52fac71b31bb77ccfac0", "text": "Continual learning is the problem of learning new tasks or knowledge while protecting old knowledge and ideally generalizing from old experience to learn new tasks faster. Neural networks trained by stochastic gradient descent often degrade on old tasks when trained successively on new tasks with different data distributions. This phenomenon, referred to as catastrophic forgetting, is considered a major hurdle to learning with non-stationary data or sequences of new tasks, and prevents networks from continually accumulating knowledge and skills. We examine this issue in the context of reinforcement learning, in a setting where an agent is exposed to tasks in a sequence. Unlike most other work, we do not provide an explicit indication to the model of task boundaries, which is the most general circumstance for a learning agent exposed to continuous experience. While various methods to counteract catastrophic forgetting have recently been proposed, we explore a straightforward, general, and seemingly overlooked solution – that of using experience replay buffers for all past events – with a mixture of onand off-policy learning, leveraging behavioral cloning. We show that this strategy can still learn new tasks quickly yet can substantially reduce catastrophic forgetting in both Atari and DMLab domains, even matching the performance of methods that require task identities. When buffer storage is constrained, we confirm that a simple mechanism for randomly discarding data allows a limited size buffer to perform almost as well as an unbounded one.", "title": "" }, { "docid": "a4e1f420dfc3b1b30a58ec3e60288761", "text": "Despite recent advances in uncovering the quantitative features of stationary human activity patterns, many applications, from pandemic prediction to emergency response, require an understanding of how these patterns change when the population encounters unfamiliar conditions. To explore societal response to external perturbations we identified real-time changes in communication and mobility patterns in the vicinity of eight emergencies, such as bomb attacks and earthquakes, comparing these with eight non-emergencies, like concerts and sporting events. We find that communication spikes accompanying emergencies are both spatially and temporally localized, but information about emergencies spreads globally, resulting in communication avalanches that engage in a significant manner the social network of eyewitnesses. 
These results offer a quantitative view of behavioral changes in human activity under extreme conditions, with potential long-term impact on emergency detection and response.", "title": "" }, { "docid": "073756896638d2846da173eec98bd8db", "text": "The DJI Phantom III drone has already been used for malicious activities (to drop bombs, remote surveillance and plane watching) in 2016 and 2017. At the time of writing, DJI was the drone manufacturer with the largest market share. Our work presents the primary thorough forensic analysis of the DJI Phantom III drone, and the primary account for proprietary file structures stored by the examined drone. It also presents the forensically sound open source tool DRone Open source Parser (DROP) that parses proprietary DAT files extracted from the drone's nonvolatile internal storage. These DAT files are encrypted and encoded. The work also shares preliminary findings on TXT files, which are also proprietary, encrypted, encoded, files found on the mobile device controlling the drone. These files provided a slew of data such as GPS locations, battery, flight time, etc. By extracting data from the controlling mobile device, and the drone, we were able to correlate data and link the user to a specific device based on extracted metadata. Furthermore, results showed that the best mechanism to forensically acquire data from the tested drone is to manually extract the SD card by disassembling the drone. Our findings illustrated that the drone should not be turned on as turning it on changes data on the drone by creating a new DAT file, but may also delete stored data if the drone's internal storage is full. © 2017 The Author(s). Published by Elsevier Ltd. on behalf of DFRWS. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).", "title": "" }, { "docid": "0a3713459412d3278a19a3ff8855a6ba", "text": "a Universidad Autónoma del Estado de Hidalgo, Escuela Superior de Tizayuca, Carretera Federal Pachuca – Tizayuca km 2.5, CP 43800, Tizayuca, Hidalgo, Mexico b Universidad Autónoma del Estado de México, Av. Jardín Zumpango s/n, Fraccionamiento El Tecojote, CP 56259, Texcoco-Estado de México, Mexico c Centro de Investigación y de Estudios Avanzados del IPN, Departamento de Computación, Av. Instituto Politécnico Nacional 2508, San Pedro Zacatenco, CP 07360, México DF, Mexico", "title": "" }, { "docid": "12b94323c586de18e8de02e5a065903d", "text": "Species of lactic acid bacteria (LAB) represent as potential microorganisms and have been widely applied in food fermentation worldwide. Milk fermentation process has been relied on the activity of LAB, where transformation of milk to good quality of fermented milk products made possible. The presence of LAB in milk fermentation can be either as spontaneous or inoculated starter cultures. Both of them are promising cultures to be explored in fermented milk manufacture. LAB have a role in milk fermentation to produce acid which is important as preservative agents and generating flavour of the products. They also produce exopolysaccharides which are essential as texture formation. 
Considering the existing reports on several health-promoting properties as well as their generally recognized as safe (GRAS) status of LAB, they can be widely used in the developing of new fermented milk products.", "title": "" }, { "docid": "a20302dfa51ad50db7d67526f9390743", "text": "Stochastic Gradient Descent (SGD) is a popular optimization method which has been applied to many important machine learning tasks such as Support Vector Machines and Deep Neural Networks. In order to parallelize SGD, minibatch training is often employed. The standard approach is to uniformly sample a minibatch at each step, which often leads to high variance. In this paper we propose a stratified sampling strategy, which divides the whole dataset into clusters with low within-cluster variance; we then take examples from these clusters using a stratified sampling technique. It is shown that the convergence rate can be significantly improved by the algorithm. Encouraging experimental results confirm the effectiveness of the proposed method.", "title": "" }, { "docid": "acf4645478c28811d41755b0ed81fb39", "text": "Make more knowledge even in less time every day. You may not always spend your time and money to go abroad and get the experience and knowledge by yourself. Reading is a good alternative to do in getting this desirable knowledge and experience. You may gain many things from experiencing directly, but of course it will spend much money. So here, by reading social network data analytics social network data analytics, you can take more advantages with limited budget.", "title": "" }, { "docid": "440858614aba25dfa9039b20a1caefc4", "text": "A natural image usually conveys rich semantic content and can be viewed from different angles. Existing image description methods are largely restricted by small sets of biased visual paragraph annotations, and fail to cover rich underlying semantics. In this paper, we investigate a semi-supervised paragraph generative framework that is able to synthesize diverse and semantically coherent paragraph descriptions by reasoning over local semantic regions and exploiting linguistic knowledge. The proposed Recurrent Topic-Transition Generative Adversarial Network (RTT-GAN) builds an adversarial framework between a structured paragraph generator and multi-level paragraph discriminators. The paragraph generator generates sentences recurrently by incorporating region-based visual and language attention mechanisms at each step. The quality of generated paragraph sentences is assessed by multi-level adversarial discriminators from two aspects, namely, plausibility at sentence level and topic-transition coherence at paragraph level. The joint adversarial training of RTT-GAN drives the model to generate realistic paragraphs with smooth logical transition between sentence topics. Extensive quantitative experiments on image and video paragraph datasets demonstrate the effectiveness of our RTT-GAN in both supervised and semi-supervised settings. Qualitative results on telling diverse stories for an image verify the interpretability of RTT-GAN.", "title": "" }, { "docid": "ff429302ec983dd1203ac6dd97506ef8", "text": "Financial crises have occurred for many centuries. They are often preceded by a credit boom and a rise in real estate and other asset prices, as in the current crisis. They are also often associated with severe disruption in the real economy. This paper surveys the theoretical and empirical literature on crises. 
The first explanation of banking crises is that they are a panic. The second is that they are part of the business cycle. Modeling crises as a global game allows the two to be unified. With all the liquidity problems in interbank markets that have occurred during the current crisis, there is a growing literature on this topic. Perhaps the most serious market failure associated with crises is contagion, and there are many papers on this important topic. The relationship between asset price bubbles, particularly in real estate, and crises is discussed at length. Disciplines Economic Theory | Finance | Finance and Financial Management This journal article is available at ScholarlyCommons: http://repository.upenn.edu/fnce_papers/403 Financial Crises: Theory and Evidence Franklin Allen University of Pennsylvania Ana Babus Cambridge University Elena Carletti European University Institute", "title": "" }, { "docid": "1de10e40580ba019045baaa485f8e729", "text": "Automated labeling of anatomical structures in medical images is very important in many neuroscience studies. Recently, patch-based labeling has been widely investigated to alleviate the possible mis-alignment when registering atlases to the target image. However, the weights used for label fusion from the registered atlases are generally computed independently and thus lack the capability of preventing the ambiguous atlas patches from contributing to the label fusion. More critically, these weights are often calculated based only on the simple patch similarity, thus not necessarily providing optimal solution for label fusion. To address these limitations, we propose a generative probability model to describe the procedure of label fusion in a multi-atlas scenario, for the goal of labeling each point in the target image by the best representative atlas patches that also have the largest labeling unanimity in labeling the underlying point correctly. Specifically, sparsity constraint is imposed upon label fusion weights, in order to select a small number of atlas patches that best represent the underlying target patch, thus reducing the risks of including the misleading atlas patches. The labeling unanimity among atlas patches is achieved by exploring their dependencies, where we model these dependencies as the joint probability of each pair of atlas patches in correctly predicting the labels, by analyzing the correlation of their morphological error patterns and also the labeling consensus among atlases. The patch dependencies will be further recursively updated based on the latest labeling results to correct the possible labeling errors, which falls to the Expectation Maximization (EM) framework. To demonstrate the labeling performance, we have comprehensively evaluated our patch-based labeling method on the whole brain parcellation and hippocampus segmentation. Promising labeling results have been achieved with comparison to the conventional patch-based labeling method, indicating the potential application of the proposed method in the future clinical studies.", "title": "" }, { "docid": "b4b4a50e4fa554b8e155a18f80b5744e", "text": "Recent advances in software-defined networking (SDN), particularly OpenFlow [5], have made it possible to implement and deploy sophisticated network policies with relatively simple programs. 
The simplicity arises in large part due to a simple “match/action” interface that OpenFlow provides, by which a programmer can specify actions to take on packets that match particular characteristics (e.g., “forward all port-53 traffic on a faster path”). To date, however, the space of such policies that can be easily implemented in an SDN centers on the control plane—while OpenFlow provides a rich control plane API, it permits very narrow control on the data plane. Expanding the match/action interface could make it possible for network operators to implement more sophisticated policies, e.g., that perform deep packet inspection and operate at the application layer. Yet, expanding OpenFlow’s specification is an arduous process, requiring standardization and hardware support—going down that path would, we believe, ultimately result in vertically integrated hardware, the very fate that OpenFlow was arguably designed to avoid. On the other end of the spectrum we have middleboxes: computing devices that sit on traffic flows’ paths, and that have no inherent restrictions on what they can process or store. Middleboxes have historically been vertically integrated, thus, although middlebox manufacturers can create a wide range of data processing devices, network operators remain faced with several key challenges:", "title": "" }, { "docid": "07b889a2b1a18bc1f91021f3b889474a", "text": "In this study, we show a correlation between electrical properties (relative permittivity-εr and conductivity-σ) of blood plasma and plasma glucose concentration. In order to formulate that correlation, we performed electrical property measurements on blood samples collected from 10 adults between the ages of 18 and 40 at University of Alabama Birmingham (UAB) Children's hospital. The measurements are conducted between 500 MHz and 20 GHz band. Using the data obtained from measurements, we developed a single-pole Cole-Cole model for εr and σ as a function of plasma blood glucose concentration. To provide an application, we designed a microstrip patch antenna that can be used to predict the glucose concentration within a given plasma sample. Simulation results regarding antenna design and its performance are also presented.", "title": "" }, { "docid": "2e11a8170ec8b2547548091443d46cc6", "text": "This chapter presents the theory of the Core Elements of the Gaming Experience (CEGE). The CEGE are the necessary but not sufficient conditions to provide a positive experience while playing video-games. This theory, formulated using qualitative methods, is presented with the aim of studying the gaming experience objectively. The theory is abstracted using a model and implemented in questionnaire. This chapter discusses the formulation of the theory, introduces the model, and shows the use of the questionnaire in an experiment to differentiate between two different experiences. In loving memory of Samson Cairns 4.1 The Experience of Playing Video-games The experience of playing video-games is usually understood as the subjective relation between the user and the video-game beyond the actual implementation of the game. The implementation is bound by the speed of the microprocessors of the gaming console, the ergonomics of the controllers, and the usability of the interface. Experience is more than that, it is also considered as a personal relationship. Understanding this relationship as personal is problematic under a scientific scope. 
Personal and subjective knowledge does not allow a theory to be generalised or falsified (Popper 1994). In this chapter, we propose a theory for understanding the experience of playing video-games, or gaming experience, that can be used to assess and compare different experiences. This section introduces the approach taken towards understanding the gaming experience under the aforementioned perspective. It begins by presenting an E.H. Calvillo-Gámez (B) División de Nuevas Tecnologías de la Información, Universidad Politécnica de San Luis Potosí, San Luis Potosí, México e-mail: e.calvillo@upslp.edu.mx 47 R. Bernhaupt (ed.), Evaluating User Experience in Games, Human-Computer Interaction Series, DOI 10.1007/978-1-84882-963-3_4, C © Springer-Verlag London Limited 2010 48 E.H. Calvillo-Gámez et al. overview of video-games and user experience in order to familiarise the reader with such concepts. Last, the objective and overview of the whole chapter are presented. 4.1.", "title": "" }, { "docid": "f4e6c9e4ed147a7864bd28d533b8ac38", "text": "The Milky Way Galaxy contains an unknown number, N , of civilizations that emit electromagnetic radiation (of unknown wavelengths) over a finite lifetime, L. Here we are assuming that the radiation is not produced indefinitely, but within L as a result of some unknown limiting event. When a civilization stops emitting, the radiation continues traveling outward at the speed of light, c, but is confined within a shell wall having constant thickness, cL. We develop a simple model of the Galaxy that includes both the birthrate and detectable lifetime of civilizations to compute the possibility of a SETI detection at the Earth. Two cases emerge for radiation shells that are (1) thinner than or (2) thicker than the size of the Galaxy, corresponding to detectable lifetimes, L, less than or greater than the light-travel time, ∼ 100, 000 years, across the Milky Way, respectively. For case (1), each shell wall has a thickness smaller than the size of the Galaxy and intersects the galactic plane in a donut shape (annulus) that fills only a fraction of the Galaxy’s volume, inhibiting SETI detection. But the ensemble of such shell walls may still fill our Galaxy, and indeed may overlap locally, given a sufficiently high birthrate of detectable civilizations. In the second case, each radiation shell is thicker than the size of our Galaxy. Yet, the ensemble of walls may or may not yield a SETI detection depending on the civilization birthrate. We compare the number of different electromagnetic transmissions arriving at Earth to Drake’s N , the number of currently emitting civilizations, showing that they are equal to each other for both cases (1) and (2). However, for L < 100, 000 years, the transmissions arriving at Earth may come from distant civilizations long extinct, while civilizations still alive are sending signals yet to arrive.", "title": "" }, { "docid": "c467fe65c242436822fd72113b99c033", "text": "Line Integral Convolution (LIC), introduced by Cabral and Leedom in 1993, is a powerful technique for generating striking images of vector data. Based on local ltering of an input texture along a curved stream line segment in a vector eld, it is possible to depict directional information of the vector eld at pixel resolution. The methods suggested so far can handle structured grids only. Now we present an approach that works both on two-dimensional unstructured grids and directly on triangulated surfaces in three-dimensional space. 
Because unstructured meshes often occur in real applications, this feature makes LIC available for a number of new applications.", "title": "" }, { "docid": "45ff2c8f796eb2853f75bedd711f3be4", "text": "High-quality (<inline-formula> <tex-math notation=\"LaTeX\">$Q$ </tex-math></inline-formula>) oscillators are notorious for being extremely slow during startup. Their long startup time increases the average power consumption in duty-cycled systems. This paper presents a novel precisely timed energy injection technique to speed up the startup behavior of high-<inline-formula> <tex-math notation=\"LaTeX\">$Q$ </tex-math></inline-formula> oscillators. The proposed solution is also insensitive to the frequency variations of the injection signal over a wide enough range that makes it possible to employ an integrated oscillator to provide the injection signal. A theoretical analysis is carried out to calculate the optimal injection duration. As a proof-of-concept, the proposed technique is incorporated in the design of crystal oscillators and is realized in a TSMC 65-nm CMOS technology. To verify the robustness of our technique across resonator parameters and frequency variations, six crystal resonators from different manufacturers with different packagings and <inline-formula> <tex-math notation=\"LaTeX\">$Q$ </tex-math></inline-formula> factors were tested. The implemented IC includes multiple crystal oscillators at 1.84, 10, and 50 MHz frequencies, with measured startup times of 58, 10, and 2 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{s}$ </tex-math></inline-formula>, while consuming 6.7, 45.5, and 195 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{W}$ </tex-math></inline-formula> at steady state, respectively. To the authors’ best knowledge, this is the fastest, reported startup time in the literature, with >15<inline-formula> <tex-math notation=\"LaTeX\">$\\times$ </tex-math></inline-formula> improvement over prior art, while requiring the smallest startup energy (~12 nJ).", "title": "" } ]
scidocsrr
b83cf8d74e66ec2204a6b1f7ebb4321b
Fault-tolerant techniques for the Internet of Military Things
[ { "docid": "87ac799402c785e68db14636b0725523", "text": "One of the challenges of creating applications from confederations of Internet-enabled things is the complexity of having to deal with spontaneously interacting and partially available heterogeneous devices. In this paper we describe the features of the MAGIC Broker 2 (MB2) a platform designed to offer a simple and consistent programming interface for collections of things. We report on the key abstractions offered by the platform and report on its use for developing two IoT applications involving spontaneous device interaction: 1) mobile phones and public displays, and 2) a web-based sensor actuator network portal called Sense Tecnic (STS). We discuss how the MB2 abstractions and implementation have evolved over time to the current design. Finally we present a preliminary performance evaluation and report qualitatively on the developers' experience of using our platform.", "title": "" }, { "docid": "f527219bead3dd4d64132315a9f0ff77", "text": "Recently, the Internet of Things (IOT) has obtained rapid development and has a significant impact on the military field. This paper first proposes a conception of military internet of things (MIOT) and analyzes the architecture of MIOT in detail. Then, three modes of MIOT, i.e., information sensing, information transmission and information serving, are respectively studied to show various military domain applications. Finally, an application assumption of MIOT from the weapon control aspect is given to validate the proposed application modes.", "title": "" } ]
[ { "docid": "dea7d83ed497fc95f4948a5aa4787b18", "text": "The distinguishing feature of the Fog Computing (FC) paradigm is that FC spreads communication and computing resources over the wireless access network, so as to provide resource augmentation to resource and energy-limited wireless (possibly mobile) devices. Since FC would lead to substantial reductions in energy consumption and access latency, it will play a key role in the realization of the Fog of Everything (FoE) paradigm. The core challenge of the resulting FoE paradigm is tomaterialize the seamless convergence of three distinct disciplines, namely, broadband mobile communication, cloud computing, and Internet of Everything (IoE). In this paper, we present a new IoE architecture for FC in order to implement the resulting FoE technological platform. Then, we elaborate the related Quality of Service (QoS) requirements to be satisfied by the underlying FoE technological platform. Furthermore, in order to corroborate the conclusion that advancements in the envisioned architecture description, we present: (i) the proposed energy-aware algorithm adopt Fog data center; and, (ii) the obtained numerical performance, for a real-world case study that shows that our approach saves energy consumption impressively in theFog data Center compared with the existing methods and could be of practical interest in the incoming Fog of Everything (FoE) realm.", "title": "" }, { "docid": "2bfa12485b2a40b7b625bbee50bf0f3e", "text": "To understand and facilitate modal shift to more sustainable modes of transport there is a need to model accessibility and connectivity at an urban scale using data collection and modelling procedures that require less data and specialist input than traditional transport models. The research described in this paper uses spatial analysis modelling procedures based on space syntax to investigate the potential to model aggregate traffic flows at an urban scale. The research has demonstrated that space syntax modelling is an effective means of representing an urban scale motor traffic network, however, modifications to th e original model were required to achieve a correlation between modelled and measured motor traffic flow comparable to other modelling procedures. Weighting methods were tested with ‘boundary weighting’ found to be effective at representing traffic crossing the boundary of an isolated urban sub-area, but not so effective at an urban scale. ‘Road weighting’ was found to be effective in improving model performance by representing traffic flows along routes according to a national classification scheme. The modelling approach has the potential to be extremely useful at an early planning stage to represent changes to flows across the network and to be useful for different modes.", "title": "" }, { "docid": "af6b26efef62f3017a0eccc5d2ae3c33", "text": "Universal, intelligent, and multifunctional devices controlling power distribution and measurement will become the enabling technology of the Smart Grid ICT. In this paper, we report on a novel automation architecture which supports distributed multiagent intelligence, interoperability, and configurability and enables efficient simulation of distributed automation systems. The solution is based on the combination of IEC 61850 object-based modeling and interoperable communication with IEC 61499 function block executable specification. 
Using the developed simulation environment, we demonstrate the possibility of multiagent control to achieve self-healing grid through collaborative fault location and power restoration.", "title": "" }, { "docid": "4b9953e7ff548a0d1b09bca3c3f3c38f", "text": "Battery management system (BMS) is an integral part of any electrical vehicle, which ensures that the batteries are not subjected to conditions outside their specified safe operating conditions. Thus the safety of the battery as well as of the passengers depend on the design of the BMS. In the present work a preliminary work is carried out to simulate a typical BMS for hybrid electrical vehicle. The various functional blocks of the BMS are implemented in SIMULINK toolbox of MATLAB. The BMS proposed is equipped with a battery model in which SOC is used as one of the states to overcome the limitation of stand-alone coulomb counting method for SOC estimation. The parameters of the battery are extracted from experimental results and incorporated in the model. The simulation results are validated by experimental results.", "title": "" }, { "docid": "21ec8a3ea14829c0c21b4caaad08d508", "text": "OBJECTIVE\nWe investigated the effect of low-fat (2.5%) dahi containing probiotic Lactobacillus acidophilus and Lactobacillus casei on progression of high fructose-induced type 2 diabetes in rats.\n\n\nMETHODS\nDiabetes was induced in male albino Wistar rats by feeding 21% fructose in water. The body weight, food and water intakes, fasting blood glucose, glycosylated hemoglobin, oral glucose tolerance test, plasma insulin, liver glycogen content, and blood lipid profile were recorded. The oxidative status in terms of thiobarbituric acid-reactive substances and reduced glutathione contents in liver and pancreatic tissues were also measured.\n\n\nRESULTS\nValues for blood glucose, glycosylated hemoglobin, glucose intolerance, plasma insulin, liver glycogen, plasma total cholesterol, triacylglycerol, low-density lipoprotein cholesterol, very low-density lipoprotein cholesterol, and blood free fatty acids were increased significantly after 8 wk of high fructose feeding; however, the dahi-supplemented diet restricted the elevation of these parameters in comparison with the high fructose-fed control group. In contrast, high-density lipoprotein cholesterol decreased slightly and was retained in the dahi-fed group. The dahi-fed group also exhibited lower values of thiobarbituric acid-reactive substances and higher values of reduced glutathione in liver and pancreatic tissues compared with the high fructose-fed control group.\n\n\nCONCLUSION\nThe probiotic dahi-supplemented diet significantly delayed the onset of glucose intolerance, hyperglycemia, hyperinsulinemia, dyslipidemia, and oxidative stress in high fructose-induced diabetic rats, indicating a lower risk of diabetes and its complications.", "title": "" }, { "docid": "166fb2f5f0667e6c72ee06c7b18b303b", "text": "The goal of metalearning is to generate useful shifts of inductive bias by adapting the current learning strategy in a \"useful\" way. Our learner leads a single life during which actions are continually executed according to the system's internal state and current policy (a modifiable, probabilistic algorithm mapping environmental inputs and internal states to outputs and new internal states). An action is considered a learning algorithm if it can modify the policy. Effects of learning processes on later learning processes are measured using reward/time ratios.
Occasional backtracking enforces success histories of still valid policy modifications corresponding to histories of lifelong reward accelerations. The principle allows for plugging in a wide variety of learning algorithms. In particular, it allows for embedding the learner's policy modification strategy within the policy itself (self-reference). To demonstrate the principle's feasibility in cases where conventional reinforcement learning fails, we test it in complex, non-Markovian, changing environments (\"POMDPs\"). One of the tasks involves more than 10^13 states, two learners that both cooperate and compete, and strongly delayed reinforcement signals (initially separated by more than 300,000 time steps). The biggest difference between time and space is that you can't reuse time.", "title": "" }, { "docid": "300e215e91bb49aef0fcb44c3084789e", "text": "We present an extension to the Tacotron speech synthesis architecture that learns a latent embedding space of prosody, derived from a reference acoustic representation containing the desired prosody. We show that conditioning Tacotron on this learned embedding space results in synthesized audio that matches the prosody of the reference signal with fine time detail even when the reference and synthesis speakers are different. Additionally, we show that a reference prosody embedding can be used to synthesize text that is different from that of the reference utterance. We define several quantitative and subjective metrics for evaluating prosody transfer, and report results with accompanying audio samples from single-speaker and 44-speaker Tacotron models on a prosody transfer task.", "title": "" }, { "docid": "54ed287c473d796c291afda23848338e", "text": "Shared memory and message passing are two opposing communication models for parallel multicomputer architectures. Comparing such architectures has been difficult, because applications must be hand-crafted for each architecture, often resulting in radically different sources for comparison. While it is clear that shared memory machines are currently easier to program, in the future, programs will be written in high-level languages and compiled to the specific parallel target, thus eliminating this difference. In this paper, we evaluate several parallel architecture alternatives --- message passing, NUMA, and cache-coherent shared memory --- for a collection of scientific benchmarks written in C*, a data-parallel language. Using a single suite of C* source programs, we compile each benchmark and simulate the interconnect for the alternative models. Our objective is to examine underlying, technology-independent costs inherent in each alternative. Our results show the relative work required to execute these data parallel programs on the different architectures, and point out where some models have inherent advantages for particular data-parallel program styles.", "title": "" }, { "docid": "f8cc1cf257711c83464a98b3d9167c94", "text": "A Software Repository is a collection of library files and function codes. Programmers and Engineers design, develop and build software libraries in a continuous process. Selecting suitable function code from one among many in the repository is quite challenging and cumbersome as we need to analyze semantic issues in function codes or components. Clustering and Mining Software Components for efficient reuse is the current topic of interest among researchers in Software Reuse Engineering and Information Retrieval.
A relatively less research work is contributed in this field and has a good scope in the future. In this paper, the main idea is to cluster the software components and form a subset of libraries from the available repository. These clusters thus help in choosing the required component with high cohesion and low coupling quickly and efficiently. We define a similarity function and use the same for the process of clustering the software components and for estimating the cost of new project. The approach carried out is a feature vector based approach. © 2014 The Authors. Published by Elsevier B.V. Selection and/or peer-review under responsibility of the organizers of ITQM 2014", "title": "" }, { "docid": "e33d34d0fbc19dbee009134368e40758", "text": "Quantum metrology exploits quantum phenomena to improve the measurement sensitivity. Theoretical analysis shows that quantum measurement can break through the standard quantum limits and reach super sensitivity level. Quantum radar systems based on quantum measurement can fufill not only conventional target detection and recognition tasks but also capable of detecting and identifying the RF stealth platform and weapons systems. The theoretical basis, classification, physical realization of quantum radar is discussed comprehensively in this paper. And the technology state and open questions of quantum radars is reviewed at the end.", "title": "" }, { "docid": "c2f46b2ed4e4306c26585f0aab275c66", "text": "We developed a crawler that can crawl YouTube and filter videos with only one person in front of the camera. This filter is implemented by extracting a number of frames from each video, and then using OpenCV’s (Itseez, 2015) Haar cascades to estimate how many faces are in each video. The crawler is supplied a search term which it then forwards to the YouTube Data API. The search terms provide a rough estimate of topics in the datasets, since they are directly connected to meta-data provided by the uploader. Figure 1 shows the distribution of the video topics used in CMU-MOSEI. The diversity of the video topics brings the following generalizability advantages: 1) the models trained on CMU-MOSEI will be generalizable across different topics and the notion of dataset domain is marginalized, 2) the diversity of topics bring variety of speakers, which allows the trained models to be generalizable across different speakers, and 3) the diversity in topics furthermore brings diversity in recording setups which allows the trained models to be generalizable across microphones and cameras with different intrinsic parameters. This diversity makes CMU-MOSEI a one-of-a-kind dataset for sentiment analysis and emotion recognition. Figure 1: The topics of videos in CMU-MOSEI, displayed as a Venn-style word cloud (Coppersmith and Kelly, 2014). Larger words indicate more videos from that topic.", "title": "" }, { "docid": "e0f797ff66a81b88bbc452e86864d7bc", "text": "A key challenge in radar micro-Doppler classification is the difficulty in obtaining a large amount of training data due to costs in time and human resources. Small training datasets limit the depth of deep neural networks (DNNs), and, hence, attainable classification accuracy. In this work, a novel method for diversifying Kinect-based motion capture (MOCAP) simulations of human micro-Doppler to span a wider range of potential observations, e.g. speed, body size, and style, is proposed. 
By applying three transformations, a small set of MOCAP measurements is expanded to generate a large training dataset for network initialization of a 30-layer deep residual neural network. Results show that the proposed training methodology and residual DNN yield improved bottleneck feature performance and the highest overall classification accuracy among other DNN architectures, including transfer learning from the 1.5 million sample ImageNet database.", "title": "" }, { "docid": "c20c8cda27cd9045e1265458a2ff0b88", "text": "Storing and sharing of medical data in the cloud environment, where computing resources including storage is provided by a third party service provider, raise serious concern of individual privacy for the adoption of cloud computing technologies. Existing privacy protection researches can be classified into three categories, i.e., privacy by policy, privacy by statistics, and privacy by cryptography. However, the privacy concerns and data utilization requirements on different parts of the medical data may be quite different. The solution for medical dataset sharing in the cloud should support multiple data accessing paradigms with different privacy strengths. The statistics or cryptography technology a multiple privacy demands, which blocks their application in the real-world cloud. This paper proposes a practical solution for privacy preserving medical record sharing for cloud computing. Based on the classification of the attributes of medical records, we use vertical partition of medical dataset to achieve the consideration of different parts of medical data with different privacy concerns. It mainly includes four components, i.e., (1) vertical data partition for medical data publishing, (2) data merging for mecial dataset accessing, (3) integrity checking, and (4) hybrid search across plaintext and ciphertext, where the statistical analysis and cryptography are innovatively combined together to provide multiple paradigms of balance between medical data utilization and privacy protection. A prototype system for the large scale medical data access and sharing is implemented. Extensive experiments show the effectiveness of our proposed solution. K eywords: privacy protection, cloud storage, integrity check, medical data sharing.", "title": "" }, { "docid": "368e72277a5937cb8ee94cea3fa11758", "text": "Monoclinic Gd2O3:Eu(3+) nanoparticles (NPs) possess favorable magnetic and optical properties for biomedical application. However, how to obtain small enough NPs still remains a challenge. Here we combined the standard solid-state reaction with the laser ablation in liquids (LAL) technique to fabricate sub-10 nm monoclinic Gd2O3:Eu(3+) NPs and explained their formation mechanism. The obtained Gd2O3:Eu(3+) NPs exhibit bright red fluorescence emission and can be successfully used as fluorescence probe for cells imaging. In vitro and in vivo magnetic resonance imaging (MRI) studies show that the product can also serve as MRI good contrast agent. Then, we systematically investigated the nanotoxicity including cell viability, apoptosis in vitro, as well as the immunotoxicity and pharmacokinetics assays in vivo. 
This investigation provides a platform for the fabrication of ultrafine monoclinic Gd2O3:Eu(3+) NPs and evaluation of their efficiency and safety in preclinical application.", "title": "" }, { "docid": "7963630b3864288ad0b2f4f219dc3ee4", "text": "The purpose of this study is to clarify theory and identify factors that could explain the level of continuance intention of e-shopping. A revised technology acceptance model integrates expectation confirmation theory and investigates effects of age differences. An online survey of internet shoppers in Saudi Arabia. Structural equation modelling and invariance analysis confirm model fit. The findings confirm that perceived usefulness, enjoyment and social pressure are determinants of e-shopping continuance. The structural weights are mostly equivalent between young and old but the regression path from perceived usefulness to social pressure is stronger for younger respondents. This research moves beyond e-shopping intentions to factors affecting eshopping continuance, explaining 55% of intention to continue shopping online. Online strategies cannot ignore direct and indirect effects on continuance intentions. The findings contribute to literature on internet shopping and continuance intentions in the context of Saudi Arabia.", "title": "" }, { "docid": "de4677a8bb9d1e43a4b6fe4f2e6b6106", "text": "Reinforcement learning (RL) has developed into a large research field. The current state-ofthe-art is comprised of several subfields dealing with, for example, hierarchical abstraction and relational representations. This overview is targeted at researchers interested in RL who want to know where to start when studying RL in general, and where to start within the field of RL when faced with specific problem domains. This overview is by no means complete, nor does it describe all relevant texts. In fact, there are many more. The main function of this overview is to provide a reasonable amount of good entry points into the rich field of RL. All texts are widely available and most of them are online. General and Introductory Texts There are many texts that introduce the exciting field of RL and Markov decision processes (see for example the mentioned PhD theses at the end of this overview). Furthermore, many recent AI and machine learning textbooks cover basic RL. Some of the core texts in the field are the following. I M. L. Puterman. Markov Decision Processes—Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., New York, NY, 1994 I D. P. Bertsekas and J. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, MA, 1996 I L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237–285, 1996 I S. S. Keerthi and B. Ravindran. Reinforcement learning. In E. Fiesler and R. Beale, editors, Handbook of Neural Computation, chapter C3. Institute of Physics and Oxford University Press, New York, New York, 1997 I R. S. Sutton and A. G. Barto. Reinforcement Learning: an Introduction. The MIT Press, Cambridge, 1998 I C. Boutilier, T. Dean, and S. Hanks. Decision theoretic planning: Structural assumptions and computational leverage. Journal of Artificial Intelligence Research, 11:1–94, 1999 I M. van Otterlo. The Logic of Adaptive Behavior: Knowledge Representation and Algorithms for Adaptive Sequential Decision Making under Uncertainty in First-Order and Relational Domains. IOS Press, Amsterdam, The Netherlands, 2009 The book by Sutton and Barto is available online, for free. 
You can find it at http://www.cs.ualberta.ca/∼ sutton/book/the-book.html Function Approximation, Generalization and Abstraction Because most problems are too large to represent explicitly, the majority of techniques in current RL research employs some form of generalization, abstraction or function approximation. Ergo, there are innumerable texts that deal with these matters. Some interesting starting points are the following.", "title": "" }, { "docid": "2ed57c4430810b2b72a64f2315bf1160", "text": "This study was an attempt to identify the interlingual strategies employed to translate English subtitles into Persian and to determine their frequency, as well. Contrary to many countries, subtitling is a new field in Iran. The study, a corpus-based, comparative, descriptive, non-judgmental analysis of an English-Persian parallel corpus, comprised English audio scripts of five movies of different genres, with Persian subtitles. The study’s theoretical framework was based on Gottlieb’s (1992) classification of subtitling translation strategies. The results indicated that all Gottlieb’s proposed strategies were applicable to the corpus with some degree of variation of distribution among different film genres. The most frequently used strategy was “transfer” at 54.06%; the least frequently used strategies were “transcription” and “decimation” both at 0.81%. It was concluded that the film genre plays a crucial role in using different strategies.", "title": "" }, { "docid": "ac65c09468cd88765009abe49d9114cf", "text": "It is known that head gesture and brain activity can reflect some human behaviors related to a risk of accident when using machine-tools. The research presented in this paper aims at reducing the risk of injury and thus increase worker safety. Instead of using camera, this paper presents a Smart Safety Helmet (SSH) in order to track the head gestures and the brain activity of the worker to recognize anomalous behavior. Information extracted from SSH is used for computing risk of an accident (a safety level) for preventing and reducing injuries or accidents. The SSH system is an inexpensive, non-intrusive, non-invasive, and non-vision-based system, which consists of an Inertial Measurement Unit (IMU) and dry EEG electrodes. A haptic device, such as vibrotactile motor, is integrated to the helmet in order to alert the operator when computed risk level (fatigue, high stress or error) reaches a threshold. Once the risk level of accident breaks the threshold, a signal will be sent wirelessly to stop the relevant machine tool or process.", "title": "" }, { "docid": "31a1a5ce4c9a8bc09cbecb396164ceb4", "text": "In trying out this hypothesis we shall understand by attitude the positive or negative relationship of a person p to another person o or to an impersonal entity x which may be a situation, an event, an idea, or a thing, etc. Examples are: to like, to love, to esteem, to value, and their opposites. A positive relation of this kind will be written L, a negative one ~L. Thus, pLo means p likes, loves, or values o, or, expressed differently, o is positive for p.", "title": "" }, { "docid": "79218f4dfecdef0bd7df21aa4854af75", "text": "Multi-gigabit 60 GHz radios are expected to match QoS requirements of modern multimedia applications. Several published standards were defined based on performance evaluations over standard channel models. 
Unfortunately, those models, and most models available in the literature, do not take into account the behavior of 60 GHz channels at different carrier frequencies, thus no guidelines are provided for the selection of the best suitable frequency band for a given service. This paper analyzes the impact of changes in multipath profiles, due to both frequency and distance, on the BER performance achieved by IEEE 802.11ad 60 GHz radios. Our analysis is based on real experimental channel impulse responses recorded through an indoor measurement campaign in seven sub-bands from 54 to 65 GHz with a break at 60 GHz at distances from 1 to 5 m. The small-scale fading is modeled by Rician distributions with K-factors extracted from experimental data, which are shown to give good agreement with the empirical distributions. A strong dependence of performance on both frequency and distance due to the sole multipath is observed, which calls for an appropriate selection of the best suitable frequency band according to the service required by the current session over the 802.11ad link.", "title": "" } ]
scidocsrr
ec4e6441c6a922c97367ac6565eb6ff2
A Complex Event Processing Toolkit for Detecting Technical Chart Patterns
[ { "docid": "6033f644fb18ce848922a51d3b0000ab", "text": "This paper tests two of the simplest and most popular trading rules moving average and trading range break, by utilitizing a very long data series, the Dow Jones index from 1897 to 1986. Standard statistical analysis is extended through the use .of bootstrap techniques. Overall our results provide strong support for the technical strategies that are explored. The returns obtained from buy (sell) signals are not consistent with the three popular null models: the random walk, the AR(I) and the GARCH-M. Consistently, buy signals generate higher returns than sell signals. Moreover, returns following sell signals are negative which is not easily explained by any of the currently existing equilibrium models. Furthermore the returns following buy signals are less volatile than returns following sell signals. The term, \"technical analysis,\" is a general heading for a myriad of trading techniques. Technical analysts attempt to forecast prices by the study of past prices and a few other related summary statistics about security trading. They believe that shifts in supply and demand can be detected in charts of market action. Technical analysis is considered by many to be the original form of investment analysis, dating back to the 1800's. It came into widespread use before the period of extensive and fully disclosed financial information, which in turn enabled the practice of fnndamental analysis to develop. In the U.S., the use of trading rules to detect patterns in stock prices is probably as old as the stock market itself. The oldest technique is attributed to Charles Dow and is traced to the late 1800's. Many of the techniques used today have been utilized for over 60 years. These techniques for discovering hidden relations in stock returns can range from extremely simple to quite elaborate. The attitude of academics towards technical analysis, until recently, is well described by Malkiel(1981): \"Obviously, I am biased against the chartist. This is not only a personal predilection, but a professional one as well. Technical analysis is anathema to, the academic world. We love to pick onit. Our bullying tactics' are prompted by two considerations: (1) the method is patently false; and (2) it's easy to pick on. And while it may seem a bit unfair to pick on such a sorry target, just remember': His your money we are trying to save.\" , Nonetheless, technical analysis has been enjoying a renaissance on Wall Street. All major brokerage firms publish technical commentary on the market and individual securities\" and many of the newsletters published by various \"experts\" are based on technical analysis. In recent years the efficient market hypothesis has come under serious siege. Various papers suggested that stock returns are not fully explained by common risk measures. A significant relationship between expected return and fundamental variables such as price-earnings ratio, market-to, book ratio and size was documented. Another group ofpapers has uncovered systematic patterns in stock returns related to various calendar periods such as the weekend effect, the tnrn-of-the-month effect, the holiday effect and the, January effect. A line of research directly related to this work provides evidence of predictability of equity returns from past returns. 
De Bandt and Thaler(1985), Fama and French(1986), and Poterba and Summers(1988) find negative serial correlation in returns of individual stocks aid various portfolios over three to ten year intervals. Rosenberg, Reid, and Lanstein(1985) provide evidence for the presence of predictable return reversals on a monthly basis", "title": "" }, { "docid": "06db3ede44c48a09f8d280cf13bd8fd2", "text": "An increasing number of distributed applications requires processing continuously flowing data from geographically distributed sources at unpredictable rate to obtain timely responses to complex queries. Examples of such applications come from the most disparate fields: from wireless sensor networks to financial tickers, from traffic management to click stream inspection.\n These requirements led to the development of a number of systems specifically designed to process information as a flow according to a set of pre-deployed processing rules. We collectively call them Information Flow Processing (IFP) Systems. Despite having a common goal, IFP systems differ in a wide range of aspects, including architectures, data models, rule languages, and processing mechanisms.\n In this tutorial we draw a general framework to analyze and compare the results achieved so far in the area of IFP systems. This allows us to offer a systematic overview of the topic, favoring the communication between different communities, and highlighting a number of open issue that still need to be addressed in research.", "title": "" } ]
[ { "docid": "f9a9ed5f618e11ed2d10083954ac5e9f", "text": "This study utilized a mixed methods approach to examine the feasibility and acceptability of group compassion focused therapy for adults with intellectual disabilities (CFT-ID). Six participants with mild ID participated in six sessions of group CFT, specifically adapted for adults with ID. Session-by-session feasibility and acceptability measures suggested that participants understood the group content and process and experienced group sessions and experiential practices as helpful and enjoyable. Thematic analysis of focus groups identified three themes relating to (1) direct experiences of the group, (2) initial difficulties in being self-compassionate and (3) positive emotional changes. Pre- and post-group outcome measures indicated significant reductions in both self-criticism and unfavourable social comparisons. Results suggest that CFT can be adapted for individuals with ID and provide preliminary evidence that people with ID and psychological difficulties may experience a number of benefits from this group intervention.", "title": "" }, { "docid": "7e848e98909c69378f624ce7db31dbfa", "text": "Phenotypically identical cells can dramatically vary with respect to behavior during their lifespan and this variation is reflected in their molecular composition such as the transcriptomic landscape. Single-cell transcriptomics using next-generation transcript sequencing (RNA-seq) is now emerging as a powerful tool to profile cell-to-cell variability on a genomic scale. Its application has already greatly impacted our conceptual understanding of diverse biological processes with broad implications for both basic and clinical research. Different single-cell RNA-seq protocols have been introduced and are reviewed here-each one with its own strengths and current limitations. We further provide an overview of the biological questions single-cell RNA-seq has been used to address, the major findings obtained from such studies, and current challenges and expected future developments in this booming field.", "title": "" }, { "docid": "a116489210b010a07c6073f11aaee407", "text": "CONTEXT\nDespite the substantial amount of health-related information available on the Internet, little is known about the accessibility, quality, and reading grade level of that health information.\n\n\nOBJECTIVE\nTo evaluate health information on breast cancer, depression, obesity, and childhood asthma available through English- and Spanish-language search engines and Web sites.\n\n\nDESIGN AND SETTING\nThree unique studies were performed from July 2000 through December 2000. Accessibility of 14 search engines was assessed using a structured search experiment. Quality of 25 health Web sites and content provided by 1 search engine was evaluated by 34 physicians using structured implicit review (interrater reliability >0.90). The reading grade level of text selected for structured implicit review was established using the Fry Readability Graph method.\n\n\nMAIN OUTCOME MEASURES\nFor the accessibility study, proportion of links leading to relevant content; for quality, coverage and accuracy of key clinical elements; and grade level reading formulas.\n\n\nRESULTS\nLess than one quarter of the search engine's first pages of links led to relevant content (20% of English and 12% of Spanish). 
On average, 45% of the clinical elements on English- and 22% on Spanish-language Web sites were more than minimally covered and completely accurate and 24% of the clinical elements on English- and 53% on Spanish-language Web sites were not covered at all. All English and 86% of Spanish Web sites required high school level or greater reading ability.\n\n\nCONCLUSION\nAccessing health information using search engines and simple search terms is not efficient. Coverage of key information on English- and Spanish-language Web sites is poor and inconsistent, although the accuracy of the information provided is generally good. High reading levels are required to comprehend Web-based health information.", "title": "" }, { "docid": "f45a291e721f77868c45d42b1b8827c7", "text": "In this paper we present SAMSA, a new tool for the simulation of VHDL-AMS systems in Matlab. The goal is the definition of a VHDL framework in which analog/digital systems can be designed and simulated and new simulation techniques can be studied, exploiting both the powerful Matlab functions and Toolboxes.", "title": "" }, { "docid": "238ae411572961116e47b7f6ebce974c", "text": "Coercing new programmers to adopt disciplined development practices such as thorough unit testing is a challenging endeavor. Test-driven development (TDD) has been proposed as a solution to improve both software design and testing. Test-driven learning (TDL) has been proposed as a pedagogical approach for teaching TDD without imposing significant additional instruction time.\n This research evaluates the effects of students using a test-first (TDD) versus test-last approach in early programming courses, and considers the use of TDL on a limited basis in CS1 and CS2. Software testing, programmer productivity, programmer performance, and programmer opinions are compared between test-first and test-last programming groups. Results from this research indicate that a test-first approach can increase student testing and programmer performance, but that early programmers are very reluctant to adopt a test-first approach, even after having positive experiences using TDD. Further, this research demonstrates that TDL can be applied in CS1/2, but suggests that a more pervasive implementation of TDL may be necessary to motivate and establish disciplined testing practice among early programmers.", "title": "" }, { "docid": "d68147bf8637543adf3053689de740c3", "text": "In this paper, we do a research on the keyword extraction method of news articles. We build a candidate keywords graph model based on the basic idea of TextRank, use Word2Vec to calculate the similarity between words as transition probability of nodes' weight, calculate the word score by iterative method and pick the top N of the candidate keywords as the final results. Experimental results show that the weighted TextRank algorithm with correlation of words can improve performance of keyword extraction generally.", "title": "" }, { "docid": "c7c103a48a80ffee561a120913855758", "text": "We study parameter estimation in Nonlinear Factor Analysis (NFA) where the generative model is parameterized by a deep neural network. Recent work has focused on learning such models using inference (or recognition) networks; we identify a crucial problem when modeling large, sparse, highdimensional datasets – underfitting. We study the extent of underfitting, highlighting that its severity increases with the sparsity of the data. 
We propose methods to tackle it via iterative optimization inspired by stochastic variational inference (Hoffman et al. , 2013) and improvements in the sparse data representation used for inference. The proposed techniques drastically improve the ability of these powerful models to fit sparse data, achieving state-of-the-art results on a benchmark textcount dataset and excellent results on the task of top-N recommendation.", "title": "" }, { "docid": "c5e23d47c7bd82025744f57c7e88eee1", "text": "Improper design or use of blood collection devices can adversely affect the accuracy of laboratory test results. Vascular access devices, such as catheters and needles, exert shear forces during blood flow, which creates a predisposition to cell lysis. Components from blood collection tubes, such as stoppers, lubricants, surfactants, and separator gels, can leach into specimens and/or adsorb analytes from a specimen; special tube additives may also alter analyte stability. Because of these interactions with blood specimens, blood collection devices are a potential source of pre-analytical error in laboratory testing. Accurate laboratory testing requires an understanding of the complex interactions between collection devices and blood specimens. Manufacturers, vendors, and clinical laboratorians must consider the pre-analytical challenges in laboratory testing. Although other authors have described the effects of endogenous substances on clinical assay results, the effects/impact of blood collection tube additives and components have not been well systematically described or explained. This review aims to identify and describe blood collection tube additives and their components and the strategies used to minimize their effects on clinical chemistry assays.", "title": "" }, { "docid": "1eee94436ff7c65b18908dab7fbfb1c6", "text": "Many efforts have been made in recent years to tackle the unconstrained face recognition challenge. For the benchmark of this challenge, the Labeled Faces in theWild (LFW) database has been widely used. However, the standard LFW protocol is very limited, with only 3,000 genuine and 3,000 impostor matches for classification. Today a 97% accuracy can be achieved with this benchmark, remaining a very limited room for algorithm development. However, we argue that this accuracy may be too optimistic because the underlying false accept rate may still be high (e.g. 3%). Furthermore, performance evaluation at low FARs is not statistically sound by the standard protocol due to the limited number of impostor matches. Thereby we develop a new benchmark protocol to fully exploit all the 13,233 LFW face images for large-scale unconstrained face recognition evaluation under both verification and open-set identification scenarios, with a focus at low FARs. Based on the new benchmark, we evaluate 21 face recognition approaches by combining 3 kinds of features and 7 learning algorithms. The benchmark results show that the best algorithm achieves 41.66% verification rates at FAR=0.1%, and 18.07% open-set identification rates at rank 1 and FAR=1%. Accordingly we conclude that the large-scale unconstrained face recognition problem is still largely unresolved, thus further attention and effort is needed in developing effective feature representations and learning algorithms. 
We thereby release a benchmark tool to advance research in this field.", "title": "" }, { "docid": "d75029f4e132a82c5ef69775a9fe9f18", "text": "We conducted three experiments to investigate the effects of contours on the detection of data similarity with star glyph variations. A star glyph is a small, compact, data graphic that represents a multi-dimensional data point. Star glyphs are often used in small-multiple settings, to represent data points in tables, on maps, or as overlays on other types of data graphics. In these settings, an important task is the visual comparison of the data points encoded in the star glyph, for example to find other similar data points or outliers. We hypothesized that for data comparisons, the overall shape of a star glyph-enhanced through contour lines-would aid the viewer in making accurate similarity judgments. To test this hypothesis, we conducted three experiments. In our first experiment, we explored how the use of contours influenced how visualization experts and trained novices chose glyphs with similar data values. Our results showed that glyphs without contours make the detection of data similarity easier. Given these results, we conducted a second study to understand intuitive notions of similarity. Star glyphs without contours most intuitively supported the detection of data similarity. In a third experiment, we tested the effect of star glyph reference structures (i.e., tickmarks and gridlines) on the detection of similarity. Surprisingly, our results show that adding reference structures does improve the correctness of similarity judgments for star glyphs with contours, but not for the standard star glyph. As a result of these experiments, we conclude that the simple star glyph without contours performs best under several criteria, reinforcing its practice and popularity in the literature. Contours seem to enhance the detection of other types of similarity, e. g., shape similarity and are distracting when data similarity has to be judged. Based on these findings we provide design considerations regarding the use of contours and reference structures on star glyphs.", "title": "" }, { "docid": "f9b7965888e180c6b07764dae8433a9d", "text": "Job recommender systems are designed to suggest a ranked list of jobs that could be associated with employee's interest. Most of existing systems use only one approach to make recommendation for all employees, while a specific method normally is good enough for a group of employees. Therefore, this study proposes an adaptive solution to make job recommendation for different groups of user. The proposed methods are based on employee clustering. Firstly, we group employees into different clusters. Then, we select a suitable method for each user cluster based on empirical evaluation. The proposed methods include CB-Plus, CF-jFilter and HyR-jFilter have applied for different three clusters. Empirical results show that our proposed methods is outperformed than traditional methods.", "title": "" }, { "docid": "ebc7f0693527eb6186fe56ef847581b3", "text": "WITH THE ADVENT OF CENTRALized data warehouses, where data might be stored as electronic documents or as text fields in databases, text mining has increased in importance and economic value. One important goal in text mining is automatic classification of electronic documents. Computer programs scan text in a document and apply a model that assigns the document to one or more prespecified topics. 
Researchers have used benchmark data, such as the Reuters-21578 test collection, to measure advances in automated text categorization. Conventional methods such as decision trees have had competitive, but not optimal, predictive performance. Using the Reuters collection, we show that adaptive resampling techniques can improve decision-tree performance and that relatively small, pooled local dictionaries are effective. We’ve applied these techniques to online banking applications to enhance automated e-mail routing.", "title": "" }, { "docid": "abe729a351eb9dbc1688abe5133b28c2", "text": "C. H. Tian B. K. Ray J. Lee R. Cao W. Ding This paper presents a framework for the modeling and analysis of business model designs involving a network of interconnected business entities. The framework includes an ecosystem-modeling component, a simulation component, and a service-analysis component, and integrates methods from value network modeling, game theory analysis, and multiagent systems. A role-based paradigm is introduced for characterizing ecosystem entities in order to easily allow for the evolution of the ecosystem and duplicated functionality for entities. We show how the framework can be used to provide insight into value distribution among the entities and evaluation of business model performance under different scenarios. The methods are illustrated using a case study of a retail business-to-business service ecosystem.", "title": "" }, { "docid": "1adce1015993969499b98d8977dabd73", "text": "Cloud computing offers the economies of scale for computational resources with the ease of management, elasticity, and fault tolerance. To take advantage of these benefits, many enterprises are contemplating to outsource the middlebox processing services in the cloud. However, middleboxes that process confidential and private data cannot be securely deployed in the untrusted environment of the (edge) cloud. To securely outsource middleboxes to the cloud, the state-of-the-art systems advocate network processing over the encrypted traffic. Unfortunately, these systems support only restrictive middlebox functionalities, and incur prohibitively high overheads due to the complex computations involved over the encrypted traffic. This motivated the design of Slick, a secure middlebox framework for deploying high-performance Network Functions (NFs) on untrusted commodity servers. Slick exposes a generic interface based on Click to design and implement a wide range of NFs using its out-of-the-box elements and C++ extensions. Slick leverages Scone (a shielded execution framework based on Intel SGX) and Intel DPDK to securely process confidential data at line rate. More specifically, Slick provides hardware-assisted memory protection, and configuration and attestation service for seamless and verifiable deployment of middleboxes. We have also added several new features for commonly required functionalities: new specialized Click elements for secure packet processing, secure shared memory packet transfer for NFs chaining, secure state persistence, an efficient on-NIC timer for SGX enclaves, and memory safety against DPDK-specific Iago attacks. Furthermore, we have implemented several SGX-specific optimizations in Slick. 
Our evaluation shows that Slick achieves near-native throughput and latency.", "title": "" }, { "docid": "1cf94a4f146ac1793574b848ce5132a3", "text": "In this paper we describe the Burrows-Wheeler Transform (BWT) a completely new approach to data compression which is the basis of some of the best compressors available today. Although it is easy to intuitively understand why the BWT helps compression, the analysis of BWT-based algorithms requires a careful study of every single algorithmic component. We describe two algorithms which use the BWT and we show that their compression ratio can be bounded in terms of the k-th order empirical entropy of the input string for any k ≥ 0. Intuitively, this means that these algorithms are able to make use of all the regularity which is in the input string. We also discuss some of the algorithmic issues which arise in the computation of the BWT, and we describe two variants of the BWT which promise interesting developments.", "title": "" }, { "docid": "2a244146b1cf3433b2e506bdf966e134", "text": "The rate of detection of thyroid nodules and carcinomas has increased with the widespread use of ultrasonography (US), which is the mainstay for the detection and risk stratification of thyroid nodules as well as for providing guidance for their biopsy and nonsurgical treatment. The Korean Society of Thyroid Radiology (KSThR) published their first recommendations for the US-based diagnosis and management of thyroid nodules in 2011. These recommendations have been used as the standard guidelines for the past several years in Korea. Lately, the application of US has been further emphasized for the personalized management of patients with thyroid nodules. The Task Force on Thyroid Nodules of the KSThR has revised the recommendations for the ultrasound diagnosis and imaging-based management of thyroid nodules. The review and recommendations in this report have been based on a comprehensive analysis of the current literature and the consensus of experts.", "title": "" }, { "docid": "f9dc4c6277ad29a757dedf26f3572dce", "text": "The transdisciplinary research project Virtopsy is dedicated to implementing modern imaging techniques into forensic medicine and pathology in order to augment current examination techniques or even to offer alternative methods. Our project relies on three pillars: three-dimensional (3D) surface scanning for the documentation of body surfaces, and both multislice computed tomography (MSCT) and magnetic resonance imaging (MRI) to visualise the internal body. Three-dimensional surface scanning has delivered remarkable results in the past in the 3D documentation of patterned injuries and of objects of forensic interest as well as whole crime scenes. Imaging of the interior of corpses is performed using MSCT and/or MRI. MRI, in addition, is also well suited to the examination of surviving victims of assault, especially choking, and helps visualise internal injuries not seen at external examination of the victim. Apart from the accuracy and three-dimensionality that conventional documentations lack, these techniques allow for the re-examination of the corpse and the crime scene even decades later, after burial of the corpse and liberation of the crime scene. We believe that this virtual, non-invasive or minimally invasive approach will improve forensic medicine in the near future.", "title": "" }, { "docid": "a3bc5399438d18a399e1d951dd0ec8c9", "text": "Electrical power systems play a critical role in spacecraft and aircraft. 
This paper discusses our development of a diagnostic capability for an electrical power system testbed, ADAPT, using probabilistic techniques. In the context of ADAPT, we present two challenges, regarding modelling and real-time performance, often encountered in real-world diagnostic applications. To meet the modelling challenge, we discuss our novel high-level specification language which supports autogeneration of Bayesian networks. To meet the real-time challenge, we compile Bayesian networks into arithmetic circuits. Arithmetic circuits typically have small footprints and are optimized for the real-time avionics systems found in spacecraft and aircraft. Using our approach, we present how Bayesian networks with over 400 nodes are auto-generated and then compiled into arithmetic circuits. Using real-world data from ADAPT as well as simulated data, we obtain average inference times smaller than one millisecond when computing diagnostic queries using arithmetic circuits that model our real-world electrical power system.", "title": "" }, { "docid": "686045e2dae16aba16c26b8ccd499731", "text": "It has been argued that platform technology owners cocreate business value with other firms in their platform ecosystems by encouraging complementary invention and exploiting indirect network effects. In this study, we examine whether participation in an ecosystem partnership improves the business performance of small independent software vendors (ISVs) in the enterprise software industry and how appropriability mechanisms influence the benefits of partnership. By analyzing the partnering activities and performance indicators of a sample of 1,210 small ISVs over the period 1996–2004, we find that joining a major platform owner’s platform ecosystem is associated with an increase in sales and a greater likelihood of issuing an initial public offering (IPO). In addition, we show that these impacts are greater when ISVs have greater intellectual property rights or stronger downstream capabilities. This research highlights the value of interoperability between software products, and stresses that value cocreation and appropriation are not mutually exclusive strategies in interfirm collaboration.", "title": "" }, { "docid": "edeb8c2f5dba5494964dca9b0e160eb0", "text": "This paper presents the design and the clinical validation of an upper-limb force-feedback exoskeleton, the L-EXOS, for robotic-assisted rehabilitation in virtual reality (VR). The L-EXOS is a five degrees of freedom exoskeleton with a wearable structure and anthropomorphic workspace that can cover the full range of motion of human arm. A specific VR application focused on the reaching task was developed and evaluated on a group of eight post-stroke patients, to assess the efficacy of the system for the rehabilitation of upper limb. The evaluation showed a significant reduction of the performance error in the reaching task (paired t-test, p < 0.02).", "title": "" } ]
scidocsrr
73ccf6a6cd5f4e9355be84eba2382026
The State of the art in distributed query processing
[ { "docid": "ec5abeb42b63ed1976cd47d3078c35c9", "text": "In semistructured data, the information that is normally associated with a schema is contained within the data, which is sometimes called “self-describing”. In some forms of semistructured data there is no separate schema, in others it exists but only places loose constraints on the data. Semistructured data has recently emerged as an important topic of study for a variety of reasons. First, there are data sources such as the Web, which we would like to treat as databases but which cannot be constrained by a schema. Second, it may be desirable to have an extremely flexible format for data exchange between disparate databases. Third, even when dealing with structured data, it may be helpful to view it. as semistructured for the purposes of browsing. This tutorial will cover a number of issues surrounding such data: finding a concise formulation, building a sufficiently expressive language for querying and transformation, and optimizat,ion problems.", "title": "" } ]
[ { "docid": "e292d4af3c77a11e8e2013fca0c8fb04", "text": "We present in this paper experiments on Table Recognition in hand-written register books. We first explain how the problem of row and column detection is modelled, and then compare two Machine Learning approaches (Conditional Random Field and Graph Convolutional Network) for detecting these table elements. Evaluation was conducted on death records provided by the Archives of the Diocese of Passau. With an F-1 score of 89, both methods provide a quality which allows for Information Extraction. Software and dataset are open source/data.", "title": "" }, { "docid": "65b9bef6e27683257a67e75a51a47ea0", "text": "This paper describes a conceptual approach to individual and organizational competencies needed for Open Innovation (OI) using a new ambidexterity model. It starts from the assumption that the entire innovation process is rarely open by all means, as the OI concept may suggest. It rather takes into consideration that in practice especially for early phases of the innovation process the organization and their innovation actors are opening up for new ways of joint ideation, collaboration etc. to gain a maximum of explorative performance and effectiveness. Though, when it comes to committing considerable resources to development and implementation activities, the innovation process usually closes step by step as efficiency criteria gain ground for a maximum of knowledge exploitation. The ambidexterity model of competences for OI refers to these tensions and provides a new framework to understand the needs of industry and Higher Education Institutes (HEI) to develop appropriate exploration and exploitation competencies for OI.", "title": "" }, { "docid": "39cad8dd6ad23ad9d4f98f3905ac29c2", "text": "Estimating the disparity and normal direction of one pixel simultaneously, instead of only disparity, also known as 3D label methods, can achieve much higher subpixel accuracy in the stereo matching problem. However, it is extremely difficult to assign an appropriate 3D label to each pixel from the continuous label space $\\mathbb {R}^{3}$ while maintaining global consistency because of the infinite parameter space. In this paper, we propose a novel algorithm called PatchMatch-based superpixel cut to assign 3D labels of an image more accurately. In order to achieve robust and precise stereo matching between local windows, we develop a bilayer matching cost, where a bottom–up scheme is exploited to design the two layers. The bottom layer is employed to measure the similarity between small square patches locally by exploiting a pretrained convolutional neural network, and then, the top layer is developed to assemble the local matching costs in large irregular windows induced by the tangent planes of object surfaces. To optimize the spatial smoothness of local assignments, we propose a novel strategy to update 3D labels. In the procedure of optimization, both segmentation information and random refinement of PatchMatch are exploited to update candidate 3D label set for each pixel with high probability of achieving lower loss. Since pairwise energy of general candidate label sets violates the submodular property of graph cut, we propose a novel multilayer superpixel structure to group candidate label sets into candidate assignments, which thereby can be efficiently fused by $\\alpha $ -expansion graph cut. 
Extensive experiments demonstrate that our method can achieve higher subpixel accuracy in different data sets, and currently ranks first on the new challenging Middlebury 3.0 benchmark among all the existing methods.", "title": "" }, { "docid": "842202ed67b71c91630fcb63c4445e38", "text": "Yaumatei Dermatology Clinic, 12/F Yaumatei Specialist Clinic (New Extension), 143 Battery Street, Yaumatei, Kowloon, Hong Kong A 46-year-old Chinese man presented with one year history of itchy verrucous lesions over penis and scrotum. Skin biopsy confirmed epidermolytic acanthoma. Epidermolytic acanthoma is a rare benign tumour. Before making such a diagnosis, exclusion of other diseases, especially genital warts and bowenoid papulosis is necessary. Treatment of multiple epidermolytic acanthoma remains unsatisfactory.", "title": "" }, { "docid": "3a84567c28d6a59271334594307263a5", "text": "Comprehension difficulty was rated for metaphors of the form Noun1-is-aNoun2; in addition, participants completed frames of the form Noun1-is-________ with their literal interpretation of the metaphor. Metaphor comprehension was simulated with a computational model based on Latent Semantic Analysis. The model matched participants’ interpretations for both easy and difficult metaphors. When interpreting easy metaphors, both the participants and the model generated highly consistent responses. When interpreting difficult metaphors, both the participants and the model generated disparate responses.", "title": "" }, { "docid": "ee4787fbb7302e37bd753d795c26c2d7", "text": "BACKGROUND\nPredictive modeling is fundamental for extracting value from large clinical data sets, or \"big clinical data,\" advancing clinical research, and improving healthcare. Machine learning is a powerful approach to predictive modeling. Two factors make machine learning challenging for healthcare researchers. First, before training a machine learning model, the values of one or more model parameters called hyper-parameters must typically be specified. Due to their inexperience with machine learning, it is hard for healthcare researchers to choose an appropriate algorithm and hyper-parameter values. Second, many clinical data are stored in a special format. These data must be iteratively transformed into the relational table format before conducting predictive modeling. This transformation is time-consuming and requires computing expertise.\n\n\nMETHODS\nThis paper presents our vision for and design of MLBCD (Machine Learning for Big Clinical Data), a new software system aiming to address these challenges and facilitate building machine learning predictive models using big clinical data.\n\n\nRESULTS\nThe paper describes MLBCD's design in detail.\n\n\nCONCLUSIONS\nBy making machine learning accessible to healthcare researchers, MLBCD will open the use of big clinical data and increase the ability to foster biomedical discovery and improve care.", "title": "" }, { "docid": "b96382c4a92391e40264d28fbb73580a", "text": "Today's smartphone operating systems frequently fail to provide users with adequate control over and visibility into how third-party applications use their privacy-sensitive data. We address these shortcomings with TaintDroid, an efficient, systemwide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid provides real-time analysis by leveraging Android's virtualized execution environment. 
Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, we found 68 instances of misappropriation of users' location and device identification information across 20 applications. Monitoring sensitive data with TaintDroid provides informed use of third-party applications for phone users and valuable input for smartphone security service firms seeking to identify misbehaving applications.", "title": "" }, { "docid": "a74b85b37b7ffc8f4a96af0507e44543", "text": "Previous efforts suggest that occurrence of pain can be detected from the face. Can intensity of pain be detected as well? The Prkachin and Solomon Pain Intensity (PSPI) metric was used to classify four levels of pain intensity (none, trace, weak, and strong) in 25 participants with previous shoulder injury (McMaster-UNBC Pain Archive). Participants were recorded while they completed a series of movements of their affected and unaffected shoulders. From the video recordings, canonical normalized appearance of the face (CAPP) was extracted using active appearance modeling. To control for variation in face size, all CAPP were rescaled to 96x96 pixels. CAPP then was passed through a set of Log-Normal filters consisting of 7 frequencies and 15 orientations to extract 9216 features. To detect pain level, 4 support vector machines (SVMs) were separately trained for the automatic measurement of pain intensity on a frame-by-frame level using both 5-folds cross-validation and leave-one-subject-out cross-validation. F1 for each level of pain intensity ranged from 91% to 96% and from 40% to 67% for 5-folds and leave-one-subject-out cross-validation, respectively. Intra-class correlation, which assesses the consistency of continuous pain intensity between manual and automatic PSPI was 0.85 and 0.55 for 5-folds and leave-one-subject-out cross-validation, respectively, which suggests moderate to high consistency. These findings show that pain intensity can be reliably measured from facial expression in participants with orthopedic injury.", "title": "" }, { "docid": "f1dc6bc187668d773a193f01ef79fd5c", "text": "Nowadays, the research on robot on-map localization while using landmarks is more intensively dealing with visual code recognition. One of the most popular landmarks of this type is the QR-code. This paper is devoted to the experimental evaluation of vision-based on-map localization procedures that apply QR-codes or NAO marks, as implemented in service robot control systems. In particular, the NAO humanoid robot is our test-bed platform, while the use of robotic systems for hazard detection is the motivation of this study. Especially, the robot can be a useful aid for elderly people affected by dementia and cognitive disorientation. The detection of the door opening is assumed to be important to ensure safety in the home environment. Thus, the paper focus on door opening detection while using QR-codes.", "title": "" }, { "docid": "947fc87db2a56314f45eb212c9dd42dc", "text": "CDStore is a unified, multicloud storage solution for users to outsource backup data with reliability, security, and cost-efficiency guarantees. CDStore builds on an augmented secret-sharing scheme called convergent dispersal, which supports deduplication by using deterministic, content-derived hashes as input to secret sharing. 
CDStore's design is presented here, with an emphasis on how it combines convergent dispersal with two-stage deduplication to achieve both bandwidth and storage savings while robustly diverting side-channel attacks (launched by malicious users on the client side). A cost analysis shows that CDStore yields significant savings over baseline cloud storage solutions.", "title": "" }, { "docid": "cceec94ed2462cd657be89033244bbf9", "text": "This paper examines how student effort, consistency, motivation, and marginal learning, influence student grades in an online course. We use data from eleven Microeconomics courses taught online for a total of 212 students. Our findings show that consistency, or less time variation, is a statistically significant explanatory variable, whereas effort, or total minutes spent online, is not. Other independent variables include GPA and the difference between a pre-test and a post-test. The GPA is used as a measure of motivation, and the difference between a posttest and pre-test as marginal learning. As expected, the level of motivation is found statistically significant at a 99% confidence level, and marginal learning is also significant at a 95% level.", "title": "" }, { "docid": "5c8570045e83b72643f1ac99018351ea", "text": "OBJECTIVES\nAlthough anxiety exists concerning the perceived risk of transmission of bloodborne viruses after community-acquired needlestick injuries, seroconversion seems to be rare. The objectives of this study were to describe the epidemiology of pediatric community-acquired needlestick injuries and to estimate the risk of seroconversion for HIV, hepatitis B virus, and hepatitis C virus in these events.\n\n\nMETHODS\nThe study population included all of the children presenting with community-acquired needlestick injuries to the Montreal Children's Hospital between 1988 and 2006 and to Hôpital Sainte-Justine between 1995 and 2006. Data were collected prospectively at Hôpital Sainte-Justine from 2001 to 2006. All of the other data were reviewed retrospectively by using a standardized case report form.\n\n\nRESULTS\nA total of 274 patients were identified over a period of 19 years. Mean age was 7.9 +/- 3.4 years. A total of 176 (64.2%) were boys. Most injuries occurred in streets (29.2%) or parks (24.1%), and 64.6% of children purposely picked up the needle. Only 36 patients (13.1%) noted blood on the device. Among the 230 patients not known to be immune for hepatitis B virus, 189 (82.2%) received hepatitis B immunoglobulin, and 213 (92.6%) received hepatitis B virus vaccine. Prophylactic antiretroviral therapy was offered beginning in 1997. Of the 210 patients who presented thereafter, 82 (39.0%) received chemoprophylaxis, of whom 69 (84.1%) completed a 4-week course of therapy. The use of a protease inhibitor was not associated with a significantly higher risk of adverse effects or early discontinuation of therapy. At 6 months, 189 were tested for HIV, 167 for hepatitis B virus, and 159 for hepatitis C virus. There were no seroconversions.\n\n\nCONCLUSIONS\nWe observed no seroconversions in 274 pediatric community-acquired needlestick injuries, thereby confirming that the risk of transmission of bloodborne viruses in these events is very low.", "title": "" }, { "docid": "2bc11bc1f29594d60a5f110dc499888f", "text": "Our previous research demonstrated high, sustained satiety effects of stabilized food foams relative to their non-aerated compositions. 
Here we test if the energy and macronutrients in a stabilized food foam are critical for its previously demonstrated satiating effects. In a randomized, crossover design, 72 healthy subjects consumed 400 mL of each of four foams, one per week over four weeks, 150 min after a standardized breakfast. Appetite ratings were collected for 180 min post-foam. The reference was a normal energy food foam (NEF1, 280 kJ/400 mL) similar to that used in our previous research. This was compared to a very low energy food foam (VLEF, 36 kJ/400 mL) and 2 alternative normal energy foams (NEF2 and NEF3) testing possible effects of compositional differences other than energy (i.e. emulsifier and carbohydrate source). Appetite ratings were quantified as area under the curve (AUC) and time to return to baseline (TTRTB). Equivalence to NEF1 was predefined as the 90% confidence interval of between-treatment differences in AUC being within -5 to +5 mm/min. All treatments similarly affected appetite ratings, with mean AUC for fullness ranging between 49.1 and 52.4 mm/min. VLEF met the statistical criterion for equivalence to NEF1 for all appetite AUC ratings, but NEF2 and NEF3 did not. For all foams the TTRTB for satiety and fullness were consistently between 150 and 180 min, though values were shortest for NEF2 and especially NEF3 foams for most appetite scales. In conclusion, the high, sustained satiating effects of these food foams are independent of energy and macronutrient content at the volumes tested.", "title": "" }, { "docid": "c995426196ad943df2f5a4028a38b781", "text": "Today it is quite common for people to exchange hundreds of comments in online conversations (e.g., blogs). Often, it can be very difficult to analyze and gain insights from such long conversations. To address this problem, we present a visual text analytic system that tightly integrates interactive visualization with novel text mining and summarization techniques to fulfill information needs of users in exploring conversations. At first, we perform a user requirement analysis for the domain of blog conversations to derive a set of design principles. Following these principles, we present an interface that visualizes a combination of various metadata and textual analysis results, supporting the user to interactively explore the blog conversations. We conclude with an informal user evaluation, which provides anecdotal evidence about the effectiveness of our system and directions for further design.", "title": "" }, { "docid": "4b7a885d463022a1792d99ff0c76be72", "text": "Emerging applications in sensor systems and network-wide IP traffic analysis present many technical challenges. They need distributed monitoring and continuous tracking of events. They have severe resource constraints not only at each site in terms of per-update processing time and archival space for highspeed streams of observations, but also crucially, communication constraints for collaborating on the monitoring task. These elements have been addressed in a series of recent works. A fundamental issue that arises is that one cannot make the \"uniqueness\" assumption on observed events which is present in previous works, since widescale monitoring invariably encounters the same events at different points. For example, within the network of an Internet Service Provider packets of the same flow will be observed in different routers; similarly, the same individual will be observed by multiple mobile sensors in monitoring wild animals. 
Aggregates of interest on such distributed environments must be resilient to duplicate observations. We study such duplicate-resilient aggregates that measure the extent of the duplication―how many unique observations are there, how many observations are unique―as well as standard holistic aggregates such as quantiles and heavy hitters over the unique items. We present accuracy guaranteed, highly communication-efficient algorithms for these aggregates that work within the time and space constraints of high speed streams. We also present results of a detailed experimental study on both real-life and synthetic data.", "title": "" }, { "docid": "fa086058ad67602b9b4429f950e70c0f", "text": "The Telecare Medicine Information System (TMIS) has brought us a lot of conveniences. However, it may also reveal patients’ privacies and other important information. So the security of TMIS can be paid much attention to, in which identity authentication plays a very important role in protecting TMIS from being illegally used. To improve the situation, TMIS needs a more secure and more efficient authentication scheme. Recently, Yan and Li et al. have proposed a secure authentication scheme for the TMIS based on biometrics, claiming that it can withstand various attacks. In this paper, we present several security problems in their scheme as follows: (a) it cannot really achieve three-factor authentication; (b) it has design flaws at the password change phase; (c) users’ biometric may be locked out; (d) it fails to achieve users’ anonymous identity. To solve these problems, a new scheme using the theory of Secure Sketch is proposed. The thorough analysis shows that our scheme can provide a stronger security than Yan-Li’s protocol, despite the little higher computation cost at client. What’s more, the proposed scheme not only can achieve anonymity preserving but also can achieve session key agreement.", "title": "" }, { "docid": "4e685637bb976716b335ac2f52f03782", "text": "Breast Cancer is becoming a leading cause of death among women in the whole world; meanwhile, it is confirmed that the early detection and accurate diagnosis of this disease can ensure a long survival of the patients. This paper work presents a disease status prediction employing a hybrid methodology to forecast the changes and its consequence that is crucial for lethal infections. To alarm the severity of the diseases, our strategy consists of two main parts: 1. Information Treatment and Option Extraction, and 2. Decision Tree-Support Vector Machine (DT-SVM) Hybrid Model for predictions. We analyse the breast Cancer data available from the Wisconsin dataset from UCI machine learning with the aim of developing accurate prediction models for breast cancer using data mining techniques. In this experiment, we compare three classifications techniques in Weka software and comparison results show that DTSVM has higher prediction accuracy than Instance-based learning (IBL), Sequential Minimal Optimization (SMO) and Naïve based classifiers. Index Terms breast cancer; classification; Decision TreeSupport Vector Machine, Naïve Bayes, Instance-based learning, Sequential Minimal Optimization, and weka;", "title": "" }, { "docid": "c84a0f630b4fb2e547451d904e1c63a5", "text": "Deep neural network training spends most of the computation on examples that are properly handled, and could be ignored. 
We propose to mitigate this phenomenon with a principled importance sampling scheme that focuses computation on “informative” examples, and reduces the variance of the stochastic gradients during training. Our contribution is twofold: first, we derive a tractable upper bound to the persample gradient norm, and second we derive an estimator of the variance reduction achieved with importance sampling, which enables us to switch it on when it will result in an actual speedup. The resulting scheme can be used by changing a few lines of code in a standard SGD procedure, and we demonstrate experimentally, on image classification, CNN fine-tuning, and RNN training, that for a fixed wall-clock time budget, it provides a reduction of the train losses of up to an order of magnitude and a relative improvement of test errors between 5% and 17%.", "title": "" }, { "docid": "5447d3fe8ed886a8792a3d8d504eaf44", "text": "Glucose-responsive delivery of insulin mimicking the function of pancreatic β-cells to achieve meticulous control of blood glucose (BG) would revolutionize diabetes care. Here the authors report the development of a new glucose-responsive insulin delivery system based on the potential interaction between the glucose derivative-modified insulin (Glc-Insulin) and glucose transporters on erythrocytes (or red blood cells, RBCs) membrane. After being conjugated with the glucosamine, insulin can efficiently bind to RBC membranes. The binding is reversible in the setting of hyperglycemia, resulting in fast release of insulin and subsequent drop of BG level in vivo. The delivery vehicle can be further simplified utilizing injectable polymeric nanocarriers coated with RBC membrane and loaded with Glc-Insulin. The described work is the first demonstration of utilizing RBC membrane to achieve smart insulin delivery with fast responsiveness.", "title": "" }, { "docid": "3deb967a4e683b4a38b9143b105a5f2a", "text": "BACKGROUND\nThe Brief Obsessive Compulsive Scale (BOCS), derived from the Yale-Brown Obsessive-Compulsive Scale (Y-BOCS) and the children's version (CY-BOCS), is a short self-report tool used to aid in the assessment of obsessive-compulsive symptoms and diagnosis of obsessive-compulsive disorder (OCD). It is widely used throughout child, adolescent and adult psychiatry settings in Sweden but has not been validated up to date.\n\n\nAIM\nThe aim of the current study was to examine the psychometric properties of the BOCS amongst a psychiatric outpatient population.\n\n\nMETHOD\nThe BOCS consists of a 15-item Symptom Checklist including three items (hoarding, dysmorphophobia and self-harm) related to the DSM-5 category \"Obsessive-compulsive related disorders\", accompanied by a single six-item Severity Scale for obsessions and compulsions combined. It encompasses the revisions made in the Y-BOCS-II severity scale by including obsessive-compulsive free intervals, extent of avoidance and excluding the resistance item. 402 adult psychiatric outpatients with OCD, attention-deficit/hyperactivity disorder, autism spectrum disorder and other psychiatric disorders completed the BOCS.\n\n\nRESULTS\nPrincipal component factor analysis produced five subscales titled \"Symmetry\", \"Forbidden thoughts\", \"Contamination\", \"Magical thoughts\" and \"Dysmorphic thoughts\". The OCD group scored higher than the other diagnostic groups in all subscales (P < 0.001). 
Sensitivities, specificities and internal consistency for both the Symptom Checklist and the Severity Scale emerged high (Symptom Checklist: sensitivity = 85%, specificities = 62-70% Cronbach's α = 0.81; Severity Scale: sensitivity = 72%, specificities = 75-84%, Cronbach's α = 0.94).\n\n\nCONCLUSIONS\nThe BOCS has the ability to discriminate OCD from other non-OCD related psychiatric disorders. The current study provides strong support for the utility of the BOCS in the assessment of obsessive-compulsive symptoms in clinical psychiatry.", "title": "" } ]
scidocsrr
16b1156b36b37c3a445abab6aa7394a9
A 4 DOF exoskeleton robot with a novel shoulder joint mechanism
[ { "docid": "6bd3614d830cbef03c9567bf096e417a", "text": "Rehabilitation robots start to become an important tool in stroke rehabilitation. Compared to manual arm training, robot-supported training can be more intensive, of longer duration, repetitive and task-oriented. Therefore, these devices have the potential to improve the rehabilitation process in stroke patients. While in the past, most groups have been working with endeffector-based robots, exoskeleton robots become more and more important, mainly because they offer a better guidance of the single human joints, especially during movements with large ranges. Regarding the upper extremities, the shoulder is the most complex human joint and its actuation is, therefore, challenging. This paper deals with shoulder actuation principles for exoskeleton robots. First, a quantitative analysis of the human shoulder movement is presented. Based on that analysis two shoulder actuation principles that provide motion of the center of the glenohumeral joint are presented and evaluated.", "title": "" } ]
[ { "docid": "7ec12c0bf639c76393954baae196a941", "text": "Honeynets have now become a standard part of security measures within the organization. Their purpose is to protect critical information systems and information; this is complemented by acquisition of information about the network threats, attackers and attacks. It is very important to consider issues affecting the deployment and usage of the honeypots and honeynets. This paper discusses the legal issues of honeynets considering their generations. Paper focuses on legal issues of core elements of honeynets, especially data control, data capture and data collection. Paper also draws attention on the issues pertaining to privacy and liability. The analysis of legal issues is based on EU law and it is supplemented by a review of the research literature, related to legal aspects of honeypots and honeynets.", "title": "" }, { "docid": "f2edf7cc3671b38ae5f597e840eda3a2", "text": "This paper describes the process of creating a design pattern management interface for a collection of mobile design patterns. The need to communicate how patterns are interrelated and work together to create solutions motivated the creation of this interface. Currently, most design pattern collections are presented in alphabetical lists. The Oracle Mobile User Experience team approach is to communicate relationships visually by highlighting and connecting related patterns. Before the team designed the interface, we first analyzed common relationships between patterns and created a pattern language map. Next, we organized the patterns into conceptual design categories. Last, we designed a pattern management interface that enables users to browse patterns and visualize their relationships.", "title": "" }, { "docid": "445b3f542e785425cd284ad556ef825a", "text": "Despite the success of neural networks (NNs), there is still a concern among many over their “black box” nature. Why do they work? Yes, we have Universal Approximation Theorems, but these concern statistical consistency, a very weak property, not enough to explain the exceptionally strong performance reports of the method. Here we present a simple analytic argument that NNs are in fact essentially polynomial regression models, with the effective degree of the polynomial growing at each hidden layer. This view will have various implications for NNs, e.g. providing an explanation for why convergence problems arise in NNs, and it gives rough guidance on avoiding overfitting. In addition, we use this phenomenon to predict and confirm a multicollinearity property of NNs not previously reported in the literature. Most importantly, given this loose correspondence, one may choose to routinely use polynomial models instead of NNs, thus avoiding some major problems of the latter, such as having to set many tuning parameters and dealing with convergence issues. We present a number of empirical results; in each case, the accuracy of the polynomial approach matches or exceeds that of NN approaches. A many-featured, open-source software package, polyreg, is available. 1 ar X iv :1 80 6. 06 85 0v 2 [ cs .L G ] 2 9 Ju n 20 18 1 The Mystery of NNs Neural networks (NNs), especially in the currently popular form of many-layered deep learning networks (DNNs), have become many analysts’ go-to method for predictive analytics. 
Indeed, in the popular press, the term artificial intelligence has become virtually synonymous with NNs.1 Yet there is a feeling among many in the community that NNs are “black boxes”; just what is going on inside? Various explanations have been offered for the success of NNs, a prime example being [Shwartz-Ziv and Tishby(2017)]. However, the present paper will present significant new insights. 2 Contributions of This Paper The contribution of the present work will be as follows:2 (a) We will show that, at each layer of an NN, there is a rough correspondence to some fitted ordinary parametric polynomial regression (PR) model; in essence, NNs are a form of PR. We refer to this loose correspondence here as NNAEPR, Neural Nets Are Essentially Polynomial Models. (b) A very important aspect of NNAEPR is that the degree of the approximating polynomial increases with each hidden layer. In other words, our findings should not be interpreted as merely saying that the end result of an NN can be approximated by some polynomial. (c) We exploit NNAEPR to learn about general properties of NNs via our knowledge of the properties of PR. This will turn out to provide new insights into aspects such as the numbers of hidden layers and numbers of units per layer, as well as how convergence problems arise. For example, we use NNAEPR to predict and confirm a multicollinearity property of NNs not previously reported in the literature. (d) Property (a) suggests that in many applications, one might simply fit a polynomial model in the first place, bypassing NNs. This would have the advantage of avoiding the problems of choosing tuning parameters (the polynomial approach has just one, the degree), nonconvergence and so on. 1There are many different variants of NNs, but for the purposes of this paper, we can consider them as a group. 2 Author listing is alphabetical by surname. XC wrote the entire core code for the polyreg package; NM conceived of the main ideas underlying the work, developed the informal mathematical material and wrote support code; BK assembled the brain and kidney cancer data, wrote some of the support code, and provided domain expertise guidance for genetics applications; PM wrote extensive support code, including extending his kerasformula package, and provided specialized expertise on NNs. All authors conducted data experiments.", "title": "" }, { "docid": "0a31ab53b887cf231d7ca1a286763e5f", "text": "Humans acquire their most basic physical concepts early in development, but continue to enrich and expand their intuitive physics throughout life as they are exposed to more and varied dynamical environments. We introduce a hierarchical Bayesian framework to explain how people can learn physical theories across multiple timescales and levels of abstraction. In contrast to previous Bayesian models of theory acquisition (Tenenbaum, Kemp, Griffiths, & Goodman, 2011), we work with more expressive probabilistic program representations suitable for learning the forces and properties that govern how objects interact in dynamic scenes unfolding over time. We compare our model and human learners on a challenging task of inferring novel physical laws in microworlds given short movies. People are generally able to perform this task and behave in line with model predictions. 
Yet they also make systematic errors suggestive of how a top-down Bayesian approach to learning might be complemented by a more bottomup feature-based approximate inference scheme, to best explain theory learning at an algorithmic level.", "title": "" }, { "docid": "50e081b178a1a308c61aae4a29789816", "text": "The ability to engineer enzymes and other proteins to any desired stability would have wide-ranging applications. Here, we demonstrate that computational design of a library with chemically diverse stabilizing mutations allows the engineering of drastically stabilized and fully functional variants of the mesostable enzyme limonene epoxide hydrolase. First, point mutations were selected if they significantly improved the predicted free energy of protein folding. Disulfide bonds were designed using sampling of backbone conformational space, which tripled the number of experimentally stabilizing disulfide bridges. Next, orthogonal in silico screening steps were used to remove chemically unreasonable mutations and mutations that are predicted to increase protein flexibility. The resulting library of 64 variants was experimentally screened, which revealed 21 (pairs of) stabilizing mutations located both in relatively rigid and in flexible areas of the enzyme. Finally, combining 10-12 of these confirmed mutations resulted in multi-site mutants with an increase in apparent melting temperature from 50 to 85°C, enhanced catalytic activity, preserved regioselectivity and a >250-fold longer half-life. The developed Framework for Rapid Enzyme Stabilization by Computational libraries (FRESCO) requires far less screening than conventional directed evolution.", "title": "" }, { "docid": "3e570e415690daf143ea30a8554b0ac8", "text": "Innovative technology on intelligent processes for smart home applications that utilize Internet of Things (IoT) is mainly limited and dispersed. The available trends and gaps were investigated in this study to provide valued visions for technical environments and researchers. Thus, a survey was conducted to create a coherent taxonomy on the research landscape. An extensive search was conducted for articles on (a) smart homes, (b) IoT and (c) applications. Three databases, namely, IEEE Explore, ScienceDirect and Web of Science, were used in the article search. These databases comprised comprehensive literature that concentrate on IoT-based smart home applications. Subsequently, filtering process was achieved on the basis of intelligent processes. The final classification scheme outcome of the dataset contained 40 articles that were classified into four classes. The first class includes the knowledge engineering process that examines data representation to identify the means of accomplishing a task for IoT applications and their utilisation in smart homes. The second class includes papers on the detection process that uses artificial intelligence (AI) techniques to capture the possible changes in IoT-based smart home applications. The third class comprises the analytical process that refers to the use of AI techniques to understand the underlying problems in smart homes by inferring new knowledge and suggesting appropriate solutions for the problem. The fourth class comprises the control process that describes the process of measuring and instructing the performance of IoT-based smart home applications against the specifications with the involvement of intelligent techniques. 
The basic features of this evolving approach were then identified in the aspects of motivation of intelligent process utilisation for IoT-based smart home applications and open-issue restriction utilisation. The recommendations for the approval and utilisation of intelligent process for IoT-based smart home applications were also determined from the literature.", "title": "" }, { "docid": "c93d1536c651ab80446a683482444890", "text": "We present a frequency-modulated continuous-wave secondary radar concept to estimate the offset in time and in frequency of two wireless units and to measure their distance relative to each other. By evaluating the Doppler frequency shift of the radar signals, the relative velocity of the mobile units is measured as well. The distance can be measured with a standard deviation as low as 1 cm. However, especially in indoor environments, the accuracy of the system can be degraded by multipath transmissions. Therefore, we show an extension of the algorithm to cope with multipath propagation. We also present the hardware setup of the measurement system. The system is tested in various environments. The results prove the excellent performance and outstanding reliability of the algorithms presented.", "title": "" }, { "docid": "2f7b81ddd5790eacb03ec2a226614280", "text": "Literature on supply chain management (SCM) covers several disciplines and is growing rapidly. This paper firstly aims at extracting the essence of SCM and advanced planning in the form of two conceptual frameworks: The house of SCM and the supply chain planning matrix. As an illustration, contributions to this feature issue will then be assigned to the building blocks of the house of SCM or to the modules covering the supply chain planning matrix. Secondly, focusing on software for advanced planning, we outline its main shortcomings and present latest research results for their resolution. 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "71428f1d968a25eb7df33f55557eb424", "text": "BACKGROUND\nThe 'Choose and Book' system provides an online booking service which primary care professionals can book in real time or soon after a patient's consultation. It aims to offer patients choice and improve outpatient clinic attendance rates.\n\n\nOBJECTIVE\nAn audit comparing attendance rates of new patients booked into the Audiological Medicine Clinic using the 'Choose and Book' system with that of those whose bookings were made through the traditional booking system.\n\n\nMETHODS\nData accrued between 1 April 2008 and 31 October 2008 were retrospectively analysed for new patient attendance at the department, and the age and sex of the patients, method of appointment booking used and attendance record were collected. Patients were grouped according to booking system used - 'Choose and Book' or the traditional system. The mean ages of the groups were compared by a t test. The standard error of the difference between proportions was used to compare the data from the two groups. A P value of < or = 0.05 was considered to be significant.\n\n\nRESULTS\n'Choose and Book' patients had a significantly better rate of attendance than traditional appointment patients, P < 0.01 (95% CI 4.3, 20.5%). There was no significant difference between the two groups in terms of sex, P > 0.1 (95% CI-3.0, 16.2%). 
The 'Choose and Book' patients, however, were significantly older than the traditional appointment patients, P < 0.001 (95% CI 4.35, 12.95%).\n\n\nCONCLUSION\nThis audit suggests that when primary care agents book outpatient clinic appointments online it improves outpatient attendance.", "title": "" }, { "docid": "3a69d6ef79482d26aee487a964ff797f", "text": "The FPGA compilation process (synthesis, map, placement, routing) is a time-consuming process that limits designer productivity. Compilation time can be reduced by using pre-compiled circuit blocks (hard macros). Hard macros consist of previously synthesized, mapped, placed and routed circuitry that can be relatively placed with short tool runtimes and that make it possible to reuse previous computational effort. Two experiments were performed to demonstrate feasibility that hard macros can reduce compilation time. These experiments demonstrated that an augmented Xilinx flow designed specifically to support hard macros can reduce overall compilation time by 3x. Though the process of incorporating hard macros in designs is currently manual and error-prone, it can be automated to create compilation flows with much lower compilation time.", "title": "" }, { "docid": "5abe5696969eca4d19a55e3492af03a8", "text": "In the era of big data, analyzing and extracting knowledge from large-scale data sets is a very interesting and challenging task. The application of standard data mining tools in such data sets is not straightforward. Hence, a new class of scalable mining method that embraces the huge storage and processing capacity of cloud platforms is required. In this work, we propose a novel distributed partitioning methodology for prototype reduction techniques in nearest neighbor classification. These methods aim at representing original training data sets as a reduced number of instances. Their main purposes are to speed up the classification process and reduce the storage requirements and sensitivity to noise of the nearest neighbor rule. However, the standard prototype reduction methods cannot cope with very large data sets. To overcome this limitation, we develop a MapReduce-based framework to distribute the functioning of these algorithms through a cluster of computing elements, proposing several algorithmic strategies to integrate multiple partial solutions (reduced sets of prototypes) into a single one. The proposed model enables prototype reduction algorithms to be applied over big data classification problems without significant accuracy loss. We test the speeding up capabilities of our model with data sets up to 5.7 millions of instances. The results show that this model is a suitable tool to enhance the performance of the nearest neighbor classifier with big data.", "title": "" }, { "docid": "f6c3124f3824bcc836db7eae1b926d65", "text": "Cloud balancing provides an organization with the ability to distribute application requests across any number of application deployments located in different data centers and through Cloud-computing providers. In this paper, we propose a load balancing method Minsd (Minimize standard deviation of Cloud load method) and apply it on three levels control: PEs (Processing Elements), Hosts and Data Centers. 
Simulations on CloudSim are used to check its performance and its influence on makespan, communication overhead and throughput. A true log of a cluster also is used to test our method. Results indicate that our method not only gives good Cloud balancing but also ensures reducing makespan and communication overhead and enhancing throughput of the whole the system.", "title": "" }, { "docid": "0ff9e3b699e5cb5c098cdcc7d7ed78b6", "text": "Malwares are becoming persistent by creating fulledged variants of the same or different family. Malwares belonging to same family share same characteristics in their functionality of spreading infections into the victim computer. These similar characteristics among malware families can be taken as a measure for creating a solution that can help in the detection of the malware belonging to particular family. In our approach we have taken the advantage of detecting these malware families by creating the database of these characteristics in the form of n-grams of API sequences. We use various similarity score methods and also extract multiple API sequences to analyze malware effectively.", "title": "" }, { "docid": "73bbb7122b588761f1bf7b711f21a701", "text": "This research attempts to find a new closed-form solution of toroid and overlapping windings for axial flux permanent magnet machines. The proposed solution includes analytical derivations for winding lengths, resistances, and inductances as functions of fundamental airgap flux density and inner-to-outer diameter ratio. Furthermore, phase back-EMFs, phase terminal voltages, and efficiencies are calculated and compared for both winding types. Finite element analysis is used to validate the accuracy of the proposed analytical calculations. The proposed solution should assist machine designers to ascertain benefits and limitations of toroid and overlapping winding types as well as to get faster results.", "title": "" }, { "docid": "6403b543937832f641d98b9212d2428e", "text": "Information edge and 3 millennium predisposed so many of revolutions. Business organization with emphasize on information systems is try to gathering desirable information for decision making. Because of comprehensive change in business background and emerge of computers and internet, the business structure and needed information had change, the competitiveness as a major factor for life of organizations in information edge is preyed of information technology challenges. In this article we have reviewed in the literature of information systems and discussed the concepts of information system as a strategic tool.", "title": "" }, { "docid": "a651ae33adce719033dad26b641e6086", "text": "Knowledge base(KB) plays an important role in artificial intelligence. Much effort has been taken to both manually and automatically construct web-scale knowledge bases. Comparing with manually constructed KBs, automatically constructed KB is broader but with more noises. In this paper, we study the problem of improving the quality for automatically constructed web-scale knowledge bases, in particular, lexical taxonomies of isA relationships. We find that these taxonomies usually contain cycles, which are often introduced by incorrect isA relations. Inspired by this observation, we introduce two kinds of models to detect incorrect isA relations from cycles. The first one eliminates cycles by extracting directed acyclic graphs, and the other one eliminates cycles by grouping nodes into different levels. 
We implement our models on Probase, a state-of-the-art, automatically constructed, web-scale taxonomy. After processing tens of millions of relations, our models eliminate 74 thousand wrong relations with 91% accuracy.", "title": "" }, { "docid": "6be37d8e76343b0955c30afe1ebf643d", "text": "Session: Feb. 15, 2016, 2‐3:30 pm Chair: Xiaobai Liu, San Diego State University (SDSU) Oral Presentations 920: On the Depth of Deep Neural Networks: A Theoretical View. Shizhao Sun, Wei Chen, Liwei Wang, Xiaoguang Liu, Tie‐Yan Liu 1229: How Important Is Weight Symmetry in Backpropagation? Qianli Liao, Joel Z. Leibo, Tomaso Poggio 1769: Deep Learning with S‐shaped Rectified Linear Activation Units. Xiaojie Jin, Chunyan Xu, Jiashi Feng, Yunchao Wei, Junjun Xiong, Shuicheng Yan 1142: Learning Step Size Controllers for Robust Neural Network Training. Christian Daniel, Jonathan Taylor, Sebastian Nowozin", "title": "" }, { "docid": "8a695d5913c3b87fb21864c0bdd3d522", "text": "Environmental topics have gained much consideration in corporate green operations. Globalization, stakeholder pressures, and stricter environmental regulations have made organizations develop environmental practices. Thus, green supply chain management (GSCM) is now a proactive approach for organizations to enhance their environmental performance and achieve competitive advantages. This study pioneers using the decision-making trial and evaluation laboratory (DEMATEL) method with intuitionistic fuzzy sets to handle the important and causal relationships between GSCM practices and performances. DEMATEL evaluates GSCM practices to find the main practices to improve both environmental and economic performances. This study uses intuitionistic fuzzy set theory to handle the linguistic imprecision and the ambiguity of human being’s judgment. A case study from the automotive industry is presented to evaluate the efficiency of the proposed method. The results reveal ‘‘internal management support’’, ‘‘green purchasing’’ and ‘‘ISO 14001 certification’’ are the most significant GSCM practices. The practical results of this study offer useful insights for managers to become more environmentally responsible, while improving their economic and environmental performance goals. Further, a sensitivity analysis of results, managerial implications, conclusions, limitations and future research opportunities are provided. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "73aa720bebc5f2fa1930930fb4185490", "text": "A CMOS OTA-C notch filter for 50Hz interference was presented in this paper. The OTAs were working in weak inversion region in order to achieve ultra low transconductance and power consumptions. The circuits were designed using SMIC mixed-signal 0.18nm 1P6M process. The post-annotated simulation indicated that an attenuation of 47.2dB for power line interference and a 120pW consumption. The design achieved a dynamic range of 75.8dB and a THD of 0.1%, whilst the input signal was a 1 Hz 20mVpp sine wave.", "title": "" }, { "docid": "3b4622a4ad745fc0ffb3b6268eb969fa", "text": "Eruptive syringomas: unresponsiveness to oral isotretinoin A 22-year-old man of Egyptian origin was referred to our department due to exacerbation of pruritic pre-existing papular dermatoses. The skin lesions had been present since childhood. The family history was negative for a similar condition. The patient complained of exacerbation of the pruritus during physical activity under a hot climate and had moderate to severe pruritus during his work. 
Physical examination revealed multiple reddish-brownish smooth-surfaced, symmetrically distributed papules 2–4 mm in diameter on the patient’s trunk, neck, axillae, and limbs (Fig. 1). The rest of the physical examination was unremarkable. The Darier sign was negative. A skin biopsy was obtained from a representative lesion on the trunk. Histopathologic examination revealed a small, wellcircumscribed neoplasm confined to the upper dermis, composed of small solid and ductal structures relatively evenly distributed in a sclerotic collagenous stroma. The solid elements were of various shapes (round, oval, curvilinear, “comma-like,” or “tadpole-like”) (Fig. 2). These microscopic features and the clinical presentation were consistent with the diagnosis of eruptive syringomas. Our patient was treated with a short course of oral antihistamines without any effect and subsequently with low-dose isotretinoin (10 mg/day) for 5 months. No improvement of the skin eruption was observed while cessation of the pruritus was accomplished. Syringoma is a common adnexal tumor with differentiation towards eccrine acrosyringium composed of small solid and ductal elements embedded in a sclerotic stroma and restricted as a rule to the upper to mid dermis, usually presenting clinically as multiple lesions on the lower eyelids and cheeks of adolescent females. A much less common variant is the eruptive or disseminated syringomas, which involve primarily young women. Eruptive syringomas are characterized by rapid development during a short period of time of hundreds of small (1–5 mm), ill-defined, smooth surfaced, skin-colored, pink, yellowish, or brownish papules typically involving the face, trunk, genitalia, pubic area, and extremities but can occur principally in any site where eccrine glands are found. The pathogenesis of eruptive syringoma remains unclear. Some authors have recently challenged the traditional notion that eruptive syringomas are neoplastic lesions. Chandler and Bosenberg presented evidence that eruptive syringomas result from autoimmune destruction of the acrosyringium and proposed the term autoimmune acrosyringitis with ductal cysts. Garrido-Ruiz et al. support the theory that eruptive syringomas may represent a hyperplastic response of the eccrine duct to an inflammatory reaction. In a recent systematic review by Williams and Shinkai the strongest association of syringomas was with Down’s syndrome (183 reported cases, 22.2%). Syringomas are also associated with diabetes mellitus (17 reported cases, 2.1%), Ehlers–Danlos", "title": "" } ]
scidocsrr
a4fb3fce4c87641b7373240d23aec96c
“When the going gets tough, who keeps going?” Depletion sensitivity moderates the ego-depletion effect
[ { "docid": "947720ca5d07b210f3d519c7e8e93081", "text": "Previous work has shown that acts of self-regulation appear to deplete a psychological resource, resulting in poorer self-regulation subsequently. Four experiments using assorted manipulations and measures found that positive mood or emotion can counteract ego depletion. After an initial act of self-regulation, participants who watched a comedy video or received a surprise gift self-regulated on various tasks as well as non-depleted participants and significantly better than participants who experienced a sad mood induction, a neutral mood stimulus, or a brief rest period. © 2006 Elsevier Inc. All rights reserved.", "title": "" } ]
[ { "docid": "2f83b2ef8f71c56069304b0962074edc", "text": "Abstract: Printed antennas are becoming one of the most popular designs in personal wireless communications systems. In this paper, the design of a novel tapered meander line antenna is presented. The design analysis and characterization of the antenna is performed using the finite difference time domain technique and experimental verifications are performed to ensure the effectiveness of the numerical model. The new design features an operating frequency of 2.55 GHz with a 230 MHz bandwidth, which supports future generations of mobile communication systems.", "title": "" }, { "docid": "b4e1fdeb6d467eddfea074b802558fb8", "text": "This paper proposes a novel and more accurate iris segmentation framework to automatically segment iris region from the face images acquired with relaxed imaging under visible or near-infrared illumination, which provides strong feasibility for applications in surveillance, forensics and the search for missing children, etc. The proposed framework is built on a novel total-variation based formulation which uses l1 norm regularization to robustly suppress noisy texture pixels for the accurate iris localization. A series of novel and robust post processing operations are introduced to more accurately localize the limbic boundaries. Our experimental results on three publicly available databases, i.e., FRGC, UBIRIS.v2 and CASIA.v4-distance, achieve significant performance improvement in terms of iris segmentation accuracy over the state-of-the-art approaches in the literature. Besides, we have shown that using iris masks generated from the proposed approach helps to improve iris recognition performance as well. Unlike prior work, all the implementations in this paper are made publicly available to further advance research and applications in biometrics at-d-distance.", "title": "" }, { "docid": "bc5a3cd619be11132ea39907f732bf4c", "text": "A burgeoning interest in the intersection of neuroscience and architecture promises to offer biologically inspired insights into the design of spaces. The goal of such interdisciplinary approaches to architecture is to motivate construction of environments that would contribute to peoples' flourishing in behavior, health, and well-being. We suggest that this nascent field of neuroarchitecture is at a pivotal point in which neuroscience and architecture are poised to extend to a neuroscience of architecture. In such a research program, architectural experiences themselves are the target of neuroscientific inquiry. Here, we draw lessons from recent developments in neuroaesthetics to suggest how neuroarchitecture might mature into an experimental science. We review the extant literature and offer an initial framework from which to contextualize such research. Finally, we outline theoretical and technical challenges that lie ahead.", "title": "" }, { "docid": "9870d5f4c2fe3de625811396cf52e8ce", "text": "E very chemist, material scientist, physicist, engineer, and commercial enterprise involved in the synthesis and/or production of engineered nanomaterials (ENM) or nanoenabled products aspires to develop safe materials. Nanotechnology environmental health and safety (nanoEHS) is a research discipline that involves the study of the possible adverse health and biological effects that nanomaterials may have on humans and environmental organisms and ecosystems. 
Recent nanoEHS research has provided a body of experimental evidence indicating the possibility of hazardous outcomes as a result of the interactions of unique ENM physicochemical properties with similar scale processes occurring at a wide range of nano/bio interfaces, including at the biomolecular, cellular, subcellular, organ, systemic, whole organism, or ecosystem levels. This projected hazard and risk potential warrants rigorous attention to safety assessment, safe use, safe implementation, benign design, regulatory oversight, governance, and public awareness to address the possibility and prevention of nanotoxicity, now and at any time in the future. 1 Thus, we must understand the properties of the ENMs that are responsible for the toxicological response, so that we can re-engineer their physicochemical characteristics for risk prevention and safer ENM design. 2 However, in spite of widespread use, no human toxicological disease or major environmental impact has been reported for ENMs. Thus, while \"nanotoxicology\" is a thriving subdiscipline of nanoEHS, the use of the \"root\" word toxicology may elicit a feeling that nanomaterials are inherently toxic despite the fact that toxicity has not thus far been established in real life. As a community, we may want to rename this subdiscipline as \"nanosafety\" since the objective is to use toxicology information to guide the design of safer nanomaterials for use in medicine, biology, electronics, lighting systems, and other areas. At ACS Nano, we publish articles and forward-looking Perspectives and reviews that determine and establish ENM physicochemical properties, structure–activity (SA) relationships, catalytic effects at the nano/bio interface, mechanistic injury responses, in vitro to in vivo prediction making, safer-by-design strategies, actionable screening and detection methods, hazard and risk ranking, fate and transport, ENM categorization, theory and modeling, societal implications, and regulatory/governance decisions. 3 Context is important in the immediate and long-range impact of this research, as we are interested in realistic nanoEHS exposure scenarios conducted with systematic variation of ENM physicochemical properties rather than investigations of a single or a limited number of materials in isolated in vitro studies that only …", "title": "" }, { "docid": "4615b252d65a56365ffe9c09d6c8cdd7", "text": "Males and females score differently on some personality traits, but the underlying etiology of these differences is not well understood. This study examined genetic, environmental, and prenatal hormonal influences on individual differences in personality masculinity-femininity (M-F). We used Big-Five personality inventory data of 9,520 Swedish twins (aged 27 to 54) to create a bipolar M-F personality scale. Using biometrical twin modeling, we estimated the influence of genetic and environmental factors on individual differences in a M-F personality score. Furthermore, we tested whether prenatal hormone transfer may influence individuals' M-F scores by comparing the scores of twins with a same-sex versus those with an opposite-sex co-twin. On average, males scored 1.09 standard deviations higher than females on the created M-F scale. Around a third of the variation in M-F personality score was attributable to genetic factors, while family environmental factors had no influence. Males and females from opposite-sex pairs scored significantly more masculine (both approximately 0.1 SD) than those from same-sex pairs. 
In conclusion, genetic influences explain part of the individual differences in personality M-F, and hormone transfer from the male to the female twin during pregnancy may increase the level of masculinization in females. Additional well-powered studies are needed to clarify this association and determine the underlying mechanisms in both sexes.", "title": "" }, { "docid": "5364dd1ec4afce5ee01ca8bc0e6d9aed", "text": "In this paper we present a fuzzy version of SHOIN (D), the corresponding Description Logic of the ontology description language OWL DL. We show that the representation and reasoning capabilities of fuzzy SHOIN (D) go clearly beyond classical SHOIN (D). Interesting features are: (i) concept constructors are based on t-norm, t-conorm, negation and implication; (ii) concrete domains are fuzzy sets; (iii) fuzzy modifiers are allowed; and (iv) entailment and subsumption relationships may hold to some degree in the unit interval [0, 1].", "title": "" }, { "docid": "6a9e30fd08b568ef6607158cab4f82b2", "text": "Expertise with unfamiliar objects (‘greebles’) recruits face-selective areas in the fusiform gyrus (FFA) and occipital lobe (OFA). Here we extend this finding to other homogeneous categories. Bird and car experts were tested with functional magnetic resonance imaging during tasks with faces, familiar objects, cars and birds. Homogeneous categories activated the FFA more than familiar objects. Moreover, the right FFA and OFA showed significant expertise effects. An independent behavioral test of expertise predicted relative activation in the right FFA for birds versus cars within each group. The results suggest that level of categorization and expertise, rather than superficial properties of objects, determine the specialization of the FFA.", "title": "" }, { "docid": "0e0c1004ad3bf29c5a855531a5185991", "text": "At Facebook, our data systems process huge volumes of data, ranging from hundreds of terabytes in memory to hundreds of petabytes on disk. We categorize our systems as “small data” or “big data” based on the type of queries they run. Small data refers to OLTP-like queries that process and retrieve a small amount of data, for example, the 1000s of objects necessary to render Facebook's personalized News Feed for each person. These objects are requested by their ids; indexes limit the amount of data accessed during a single query, regardless of the total volume of data. Big data refers to queries that process large amounts of data, usually for analysis: trouble-shooting, identifying trends, and making decisions. Big data stores are the workhorses for data analysis at Facebook. They grow by millions of events (inserts) per second and process tens of petabytes and hundreds of thousands of queries per day. In this tutorial, we will describe our data systems and the current challenges we face. We will lead a discussion on these challenges, approaches to solve them, and potential pitfalls. We hope to stimulate interest in solving these problems in the research community.", "title": "" }, { "docid": "f58d69de4b5bcc4100a3bfe3426fa81f", "text": "This study evaluated the factor structure of the Rosenberg Self-Esteem Scale (RSES) with a diverse sample of 1,248 European American, Latino, Armenian, and Iranian adolescents. Adolescents completed the 10-item RSES during school as part of a larger study on parental influences and academic outcomes. 
Findings suggested that method effects in the RSES are more strongly associated with negatively worded items across three diverse groups but also more pronounced among ethnic minority adolescents. Findings also suggested that accounting for method effects is necessary to avoid biased conclusions regarding cultural differences in selfesteem and how predictors are related to the RSES. Moreover, the two RSES factors (positive self-esteem and negative self-esteem) were differentially predicted by parenting behaviors and academic motivation. Substantive and methodological implications of these findings for crosscultural research on adolescent self-esteem are discussed.", "title": "" }, { "docid": "25a725867b066cbf55d9281dce59aaf6", "text": "Achievement systems are reward structures providing additional goals for players, and thus extending the play time of videogames. In this paper, we explore how applications other than games could benefit from achievement systems, and how users perceive this additional content in a service. For this purpose, we added an achievement system to a geo-tagged photo sharing service called Nokia Image Space. The results suggest that there is some potential in achievement systems outside the game domain. The achievements triggered some friendly competition and comparison between users. However, many users were not convinced, expressing concerns about the achievements motivating undesirable usage patterns. Therefore, an achievement system poses certain design considerations when applied in nongame software.", "title": "" }, { "docid": "a82aac21da1e5c10b2118353fde4b510", "text": "OBJECTIVE\nReturning children to their biological families after placement in foster care (ie, reunification) has been prioritized with legislation. Comprehensive studies of child behavioral health functioning after reunification, however, have not been conducted. This study examined outcomes for youth who were reunified after placement in foster care as compared with youth who did not reunify.\n\n\nDESIGN\nProspective cohort.\n\n\nSETTING\nChildren who entered foster care in San Diego, California, and who remained in foster care for at least 5 months. Participants. A cohort of 149 ethnically diverse youth, 7 to 12 years old, who entered foster care between May 1990, and October 1991. Seventy-five percent of those interviewed at Time 1 were interviewed at Time 2 (6 years later).\n\n\nOUTCOME MEASURES\n1) Risk behaviors: delinquent, sexual, self-destructive, substance use, and total risk behaviors; 2) Life-course outcomes: pregnancy, tickets/arrests, suspensions, dropping out of school, and grades; 3) Current symptomatology: externalizing, internalizing, total behavior problems, and total competence.\n\n\nRESULTS\nCompared with youth who were not reunified, reunified youth showed more self-destructive behavior (0.15 vs -0.11), substance use (0.16 vs -0.11), and total risk behavior problem standardized scores (0.12 vs -0.09). Reunified youth were more likely to have received a ticket or have been arrested (49.2% vs 30.2%), to have dropped out of school (20.6% vs 9.4%), and to have received lower grades (6.5 vs 7.4). Reunified youth reported more current problems in internalizing behaviors (56.6 vs 53.0), and total behavior problems (59.5 vs 55.7), and lower total competence (41.1 vs 45.0). There were no statistically significant differences between the groups on delinquency, sexual behaviors, pregnancy, suspensions, or externalizing behaviors. 
Reunification status was a significant predictor of negative outcomes in 8 of the 9 regression equations after controlling for Time 1 behavior problems, age, and gender.\n\n\nCONCLUSIONS\nThese findings suggest that youth who reunify with their biological families after placement in foster care have more negative outcomes than youth who do not reunify. The implications of these findings for policy and practice are discussed.", "title": "" }, { "docid": "379df071aceaee1be2228070f0245257", "text": "This paper reports a SiC-based solid-state circuit breaker (SSCB) with an adjustable current-time (I-t) tripping profile for both ultrafast short circuit protection and overload protection. The tripping time ranges from 0.5 microsecond to 10 seconds for a fault current ranging from 0.8X to 10X of the nominal current. The I-t tripping profile, adjustable by choosing different resistance values in the analog control circuit, can help avoid nuisance tripping of the SSCB due to inrush transient current. The maximum thermal capability of the 1200V SiC JFET static switch in the SSCB is investigated to set a practical thermal limit for the I-t tripping profile. Furthermore, a low fault current ‘blind zone’ limitation of the prior SSCB design is discussed and a new circuit solution is proposed to operate the SSCB even under a low fault current condition. Both simulation and experimental results are reported.", "title": "" }, { "docid": "05a07644824dd85eb2251a642c506d18", "text": "BACKGROUND\nWe present a method utilizing Healthcare Cost and Utilization Project (HCUP) dataset for predicting disease risk of individuals based on their medical diagnosis history. The presented methodology may be incorporated in a variety of applications such as risk management, tailored health communication and decision support systems in healthcare.\n\n\nMETHODS\nWe employed the National Inpatient Sample (NIS) data, which is publicly available through Healthcare Cost and Utilization Project (HCUP), to train random forest classifiers for disease prediction. Since the HCUP data is highly imbalanced, we employed an ensemble learning approach based on repeated random sub-sampling. This technique divides the training data into multiple sub-samples, while ensuring that each sub-sample is fully balanced. We compared the performance of support vector machine (SVM), bagging, boosting and RF to predict the risk of eight chronic diseases.\n\n\nRESULTS\nWe predicted eight disease categories. Overall, the RF ensemble learning method outperformed SVM, bagging and boosting in terms of the area under the receiver operating characteristic (ROC) curve (AUC). In addition, RF has the advantage of computing the importance of each variable in the classification process.\n\n\nCONCLUSIONS\nIn combining repeated random sub-sampling with RF, we were able to overcome the class imbalance problem and achieve promising results. Using the national HCUP data set, we predicted eight disease categories with an average AUC of 88.79%.", "title": "" }, { "docid": "5b07c3e3a8f91884f00cf728a2ef8772", "text": "Human self-consciousness relies on the ability to distinguish between oneself and others. We sought to explore the neural correlates involved in self-other representations by investigating two critical processes: perspective taking and agency. Although recent research has shed light on the neural processes underlying these phenomena, little is known about how they overlap or interact at the neural level. 
In a two-factorial functional magnetic resonance imaging (fMRI) experiment, participants played a ball-tossing game with two virtual characters (avatars). During an active/agency (ACT) task, subjects threw a ball to one of the avatars by pressing a button. During a passive/nonagency (PAS) task, they indicated which of the other avatars threw the ball. Both tasks were performed from a first-person perspective (1PP), in which subjects interacted from their own perspective, and a third-person perspective (3PP), in which subjects interacted from the perspective of an avatar with another location in space. fMRI analyses revealed overlapping activity in medial prefrontal regions associated with representations of one's own perspective and actions (1PP and ACT), and overlapping activity in temporal-occipital, premotor, and inferior frontal, as well as posterior parietal regions associated with representation of others' perspectives and actions (3PP and PAS). These findings provide evidence for distinct neural substrates underlying representations of the self and others and provide support for the idea that the medial prefrontal cortex crucially contributes to a neural basis of the self. The lack of a statistically significant interaction suggests that perspective taking and agency represent independent constituents of self-consciousness.", "title": "" }, { "docid": "4746703f20b8fd902c451e658e44f49b", "text": "This paper describes the development of a Latvian speech-to-text (STT) system at LIMSI within the Quaero project. One of the aims of the speech processing activities in the Quaero project is to cover all official European languages. However, for some of the languages only very limited, if any, training resources are available via corpora agencies such as LDC and ELRA. The aim of this study was to show the way, taking Latvian as example, an STT system can be rapidly developed without any transcribed training data. Following the scheme proposed in this paper, the Latvian STT system was developed in about a month and obtained a word error rate of 20% on broadcast news and conversation data in the Quaero 2012 evaluation campaign.", "title": "" }, { "docid": "7b5462277dd7b048179ae0a7a86c8990", "text": "Attack graphs depict ways in which an adversary exploits system vulnerabilities to achieve a desired state. System administrators use attack graphs to determine how vulnerable their systems are and to determine what security measures to deploy to defend their systems. In this paper, we present details of an example to illustrate how we specify and analyze network attack models. We take these models as input to our attack graph tools to generate attack graphs automatically and to analyze system vulnerabilities. While we have published our generation and analysis algorithms in earlier work, the presentation of our example and toolkit is novel to this paper.", "title": "" }, { "docid": "ff40eca4b4a27573e102b40c9f70aea4", "text": "This paper is concerned with the question of how to online combine an ensemble of active learners so as to expedite the learning progress during a pool-based active learning session. We develop a powerful active learning master algorithm, based a known competitive algorithm for the multi-armed bandit problem and a novel semi-supervised performance evaluation statistic. 
Taking an ensemble containing two of the best known active learning algorithms and a new algorithm, the resulting new active learning master algorithm is empirically shown to consistently perform almost as well as and sometimes outperform the best algorithm in the ensemble on a range of classification problems.", "title": "" }, { "docid": "83ac82ef100fdf648a5214a50d163fe3", "text": "We consider the problem of multi-robot taskallocation when robots have to deal with uncertain utility estimates. Typically an allocation is performed to maximize expected utility; we consider a means for measuring the robustness of a given optimal allocation when robots have some measure of the uncertainty (e.g., a probability distribution, or moments of such distributions). We introduce a new O(n) algorithm, the Interval Hungarian algorithm, that extends the classic KuhnMunkres Hungarian algorithm to compute the maximum interval of deviation (for each entry in the assignment matrix) which will retain the same optimal assignment. This provides an efficient measurement of the tolerance of the allocation to the uncertainties, for both a specific interval and a set of interrelated intervals. We conduct experiments both in simulation and with physical robots to validate the approach and to gain insight into the effect of location uncertainty on allocations for multi-robot multi-target navigation tasks.", "title": "" }, { "docid": "4d9312d22dcc37933d0108fbfacd1c38", "text": "This study focuses on the use of different types of shear reinforcement in the reinforced concrete beams. Four different types of shear reinforcement are investigated; traditional stirrups, welded swimmer bars, bolted swimmer bars, and u-link bolted swimmer bars. Beam shear strength as well as beam deflection are the main two factors considered in this study. Shear failure in reinforced concrete beams is one of the most undesirable modes of failure due to its rapid progression. This sudden type of failure made it necessary to explore more effective ways to design these beams for shear. The reinforced concrete beams show different behavior at the failure stage in shear compare to the bending, which is considered to be unsafe mode of failure. The diagonal cracks that develop due to excess shear forces are considerably wider than the flexural cracks. The cost and safety of shear reinforcement in reinforced concrete beams led to the study of other alternatives. Swimmer bar system is a new type of shear reinforcement. It is a small inclined bars, with its both ends bent horizontally for a short distance and welded or bolted to both top and bottom flexural steel reinforcement. Regardless of the number of swimmer bars used in each inclined plane, the swimmer bars form plane-crack interceptor system instead of bar-crack interceptor system when stirrups are used. Several reinforced concrete beams were carefully prepared and tested in the lab. The results of these tests will be presented and discussed. The deflection of each beam is also measured at incrementally increased applied load.", "title": "" }, { "docid": "9adeff230535ea9b0cb8b8e245510e8f", "text": "Software-defined network (SDN) has become one of the most important architectures for the management of largescale complex networks, which may require repolicing or reconfigurations from time to time. SDN achieves easy repolicing by decoupling the control plane from data plane. Thus, the network routers/switches just simply forward packets by following the flow table rules set by the control plane. 
Currently, OpenFlow is the most popular SDN protocol/standard and has a set of design specifications. Although SDN/OpenFlow is a relatively new area, it has attracted much attention from both academia and industry. In this paper, we will conduct a comprehensive survey of the important topics in SDN/OpenFlow implementation, including the basic concept, applications, language abstraction, controller, virtualization, quality of service, security, and its integration with wireless and optical networks. We will compare the pros and cons of different schemes and discuss the future research trends in this exciting area. This survey can help both industry and academia R&D people to understand the latest progress of SDN/OpenFlow designs.", "title": "" } ]
scidocsrr
6c592dd9b0651393257612ea9cb49aa7
Syntactic entropy for main content extraction from web pages
[ { "docid": "6dcad40e2dfecb03c902695b63e69529", "text": "Most of today's web content is designed for human consumption, which makes it difficult for software tools to access them readily. Even web content that is automatically generated from back-end databases is usually presented without the original structural information. In this paper, we present an automated information extraction algorithm that can extract the relevant attribute-value pairs from product descriptions across different sites. A notion, called structural-semantic entropy, is used to locate the data of interest on web pages, which measures the density of occurrence of relevant information on the DOM tree representation of web pages. Our approach is less labor-intensive and insensitive to changes in web-page format. Experimental results on a large number of real-life web page collections are encouraging and confirm the feasibility of the approach, which has been successfully applied to detect false drug advertisements on the web due to its capacity in associating the attributes of records with their respective values.", "title": "" }, { "docid": "c6cf82e7ba24176c36cc3d2ca556532f", "text": "We present Content Extraction via Tag Ratios (CETR) - a method to extract content text from diverse webpages by using the HTML document's tag ratios. We describe how to compute tag ratios on a line-by-line basis and then cluster the resulting histogram into content and non-content areas. Initially, we find that the tag ratio histogram is not easily clustered because of its one-dimensionality; therefore we extend the original approach in order to model the data in two dimensions. Next, we present a tailored clustering technique which operates on the two-dimensional model, and then evaluate our approach against a large set of alternative methods using standard accuracy, precision and recall metrics on a large and varied Web corpus. Finally, we show that, in most cases, CETR achieves better content extraction performance than existing methods, especially across varying web domains, languages and styles.", "title": "" } ]
[ { "docid": "77f3dfeba56c3731fda1870ce48e1aca", "text": "The organicist view of society is updated by incorporating concepts from cybernetics, evolutionary theory, and complex adaptive systems. Global society can be seen as an autopoietic network of self-producing components, and therefore as a living system or ‘superorganism’. Miller's living systems theory suggests a list of functional components for society's metabolism and nervous system. Powers' perceptual control theory suggests a model for a distributed control system implemented through the market mechanism. An analysis of the evolution of complex, networked systems points to the general trends of increasing efficiency, differentiation and integration. In society these trends are realized as increasing productivity, decreasing friction, increasing division of labor and outsourcing, and increasing cooperativity, transnational mergers and global institutions. This is accompanied by increasing functional autonomy of individuals and organisations and the decline of hierarchies. The increasing complexity of interactions and instability of certain processes caused by reduced friction necessitate a strengthening of society's capacity for information processing and control, i.e. its nervous system. This is realized by the creation of an intelligent global computer network, capable of sensing, interpreting, learning, thinking, deciding and initiating actions: the ‘global brain’. Individuals are being integrated ever more tightly into this collective intelligence. Although this image may raise worries about a totalitarian system that restricts individual initiative, the superorganism model points in the opposite direction, towards increasing freedom and diversity. The model further suggests some specific futurological predictions for the coming decades, such as the emergence of an automated distribution network, a computer immune system, and a global consensus about values and standards.", "title": "" }, { "docid": "8295573eb8533e560fb8d14163191745", "text": "Line drawings play an important role in shape description due to they can convey meaningful information by outstanding the key component and distracting details or ignoring less important. Suggestive contours are a type of lines to produce high quality line drawings. To generate those contours, we can generally start from two aspects: from image space or object space. The image space strategies can not only extract suggestive contours much faster than object space methods, but also don't require the information of the 3D objects. However they are sensitive to small noise, which is ubiquitous in the digital image. In this paper, before extracting lines, we apply an accelerated structure-preserving local Laplacian filter to smooth the shaded image. Through our experiments, we draw the conclusion that our method can effectively suppress the redundant details, generating a cleaner, higher quality line drawing by image space methods, and can compare with the result by object space ones.", "title": "" }, { "docid": "ce71b390fb70bf17186bbd1f6233b085", "text": "This report provides detailed description and necessary derivations for the BackPropagation Through Time (BPTT) algorithm. BPTT is often used to learn recurrent neural networks (RNN). Contrary to feed-forward neural networks, the RNN is characterized by the ability of encoding longer past information, thus very suitable for sequential models. 
The BPTT extends the ordinary BP algorithm to suit the recurrent neural architecture. 1 Basic Definitions For a two-layer feed-forward neural network, we notate the input layer as x indexed by variable i, the hidden layer as s indexed by variable j, and the output layer as y indexed by variable k. The weight matrix that map the input vector to the hidden layer is V, while the hidden layer is propagated through the weight matrix W, to the output layer. In a simple recurrent neural network, we attach every neural layer a time subscript t. The input layer consists of two components, x(t) and the privious activation of the hidden layer s(t − 1) indexed by variable h. The corresponding weight matrix is U. Table 1 lists all the notations used in this report: Neural layer Description Index variable x(t) input layer i s(t− 1) previous hidden (state) layer h s(t) hidden (state) layer j y(t) output layer k Weight matrix Description Index variables V Input layer → Hidden layer i, j U Previous hidden layer → Hidden layer h, j W Hidden layer → Output layer j, k Table 1: Notations in the recurrent neural network. Then, the recurrent neural network can be processed as the following: • Input layer → Hidden layer sj(t) = f(netj(t)) (1)", "title": "" }, { "docid": "da63c4d9cc2f3278126490de54c34ce5", "text": "The growth of Web-based social networking and the properties of those networks have created great potential for producing intelligent software that integrates a user's social network and preferences. Our research looks particularly at assigning trust in Web-based social networks and investigates how trust information can be mined and integrated into applications. This article introduces a definition of trust suitable for use in Web-based social networks with a discussion of the properties that will influence its use in computation. We then present two algorithms for inferring trust relationships between individuals that are not directly connected in the network. Both algorithms are shown theoretically and through simulation to produce calculated trust values that are highly accurate.. We then present TrustMail, a prototype email client that uses variations on these algorithms to score email messages in the user's inbox based on the user's participation and ratings in a trust network.", "title": "" }, { "docid": "6105d4250286a7a90fe20e6b1ec8a6d3", "text": "A well-known attack on RSA with low secret-exponent d was given by Wiener about 15 years ago. Wiener showed that using continued fractions, one can efficiently recover the secret-exponent d from the public key (N, e) as long as d < N. Interestingly, Wiener stated that his attack may sometimes also work when d is slightly larger than N . This raises the question of how much larger d can be: could the attack work with non-negligible probability for d = N 1/4+ρ for some constant ρ > 0? We answer this question in the negative by proving a converse to Wiener’s result. Our result shows that, for any fixed > 0 and all sufficiently large modulus lengths, Wiener’s attack succeeds with negligible probability over a random choice of d < N δ (in an interval of size Ω(N )) as soon as δ > 1/4 + . Thus Wiener’s success bound d < N 1/4 for his algorithm is essentially tight. We also obtain a converse result for a natural class of extensions of the Wiener attack, which are guaranteed to succeed even when δ > 1/4. 
The known attacks in this class (by Verheul and Van Tilborg and Dujella) run in exponential time, so it is natural to ask whether there exists an attack in this class with subexponential run-time. Our second converse result answers this question also in the negative.", "title": "" }, { "docid": "349f85e6ffd66d6a1dd9d9c6925d00bc", "text": "Wearable computers have the potential to act as intelligent agents in everyday life and assist the user in a variety of tasks, using context to determine how to act. Location is the most common form of context used by these agents to determine the user’s task. However, another potential use of location context is the creation of a predictive model of the user’s future movements. We present a system that automatically clusters GPS data taken over an extended period of time into meaningful locations at multiple scales. These locations are then incorporated into a Markov model that can be consulted for use with a variety of applications in both single–user and collaborative scenarios.", "title": "" }, { "docid": "df487337795d03d8538024aedacbbbe9", "text": "This study aims to make an inquiry regarding the advantages and challenges of integrating augmented reality (AR) into the library orientation programs of academic/research libraries. With the vast number of emerging technologies that are currently being introduced to the library world, it is essential for academic librarians to fully utilize these technologies to their advantage. However, it is also of equal importance for them to first make careful analysis and research before deciding whether to adopt a certain technology or not. AR offers a strategic medium through which librarians can attach digital information to real-world objects and simply let patrons interact with them. It is a channel that librarians can utilize in order to disseminate information and guide patrons in their studies or researches. And while it is expected for AR to grow tremendously in the next few years, it becomes more inevitable for academic librarians to acquire related IT skills in order to further improve the services they offer in their respective colleges and universities. The study shall employ the pragmatic approach to research, conducting an extensive review of available literature on AR as used in academic libraries, designing a prototype to illustrate how AR can be integrated to an existing library orientation program, and performing surveys and interviews on patrons and librarians who used it. This study can serve as a guide in order for academic librarians to assess whether implementing AR in their respective libraries will be beneficial to them or not.", "title": "" }, { "docid": "5d431ca5cf18a66f158ec5c8058be50f", "text": "How could a rearranging chair convince you to let it by? This paper explores how robotic chairs might negotiate passage in shared spaces with people, using motion as an expressive cue. The user study evaluates the efficacy of three gestures at convincing a busy participant to let it by. This within-participants study consisted of three subsequent trials, in which a person is completing a puzzle on a standing desk and a robotic chair approaches to squeeze by. The measure was whether participants moved out of the robot's way or not. People deferred to the robot in slightly less than half the trials as they were engaged in the activity. 
The main finding, however, is that over-communication cues more blocking behaviors, perhaps because it is annoying or because people want chairs to know their place (socially speaking). The Forward-Back gesture that was most effective at negotiating passage in the first trail was least effective in the second and third trial. The more subtle Pause and the slightly loud but less-aggressive Side-to-Side gesture, were much more likely to be deferred to in later trials, but not a single participant deferred to them in the first trial. The results demonstrate that the Forward-Back gesture was the clearest way to communicate the robot's intent, however, they also give evidence that there is a communicative trade-off between clarity and politeness, particularly when direct communication has an association with aggression. The takeaway for robot design is: be informative initially, but avoid over-communicating later.", "title": "" }, { "docid": "d8fe42a18648ef0aec23fc34b27cea02", "text": "Clinical placements are essential for students to develop clinical skills to qualify as nurses. However, various difficulties encountered by nursing students during their clinical education detract from developing clinical competencies. This constructivist grounded theory study aims to explore nursing students' experiences in clinical nursing education, and to identify the factors that influence the clinical education students receive. Twenty-one individual and six group semi-structured interviews were conducted with sixteen fourth year nursing students and four registered nurses. This research identified six factors that influence nursing students' clinical education: interpersonal, socio-cultural, instructional, environmental, emotional and physical factors. The research has developed a dynamic model of learning in clinical contexts, which offers opportunities to understand how students' learning is influenced multifactorially during clinical placements. The understanding and application of the model can improve nursing instructional design, and subsequently, nursing students' learning in clinical contexts.", "title": "" }, { "docid": "4aac8bed4ddd3707c5b391d2025425c9", "text": "Grouping images into (semantically) meaningful categories using low-level visual features is a challenging and important problem in content-based image retrieval. Using binary Bayesian classifiers, we attempt to capture high-level concepts from low-level image features under the constraint that the test image does belong to one of the classes. Specifically, we consider the hierarchical classification of vacation images; at the highest level, images are classified as indoor or outdoor; outdoor images are further classified as city or landscape; finally, a subset of landscape images is classified into sunset, forest, and mountain classes. We demonstrate that a small vector quantizer (whose optimal size is selected using a modified MDL criterion) can be used to model the class-conditional densities of the features, required by the Bayesian methodology. The classifiers have been designed and evaluated on a database of 6931 vacation photographs. Our system achieved a classification accuracy of 90.5% for indoor/outdoor, 95.3% for city/landscape, 96.6% for sunset/forest and mountain, and 96% for forest/mountain classification problems. We further develop a learning method to incrementally train the classifiers as additional data become available. We also show preliminary results for feature reduction using clustering techniques. 
Our goal is to combine multiple two-class classifiers into a single hierarchical classifier.", "title": "" }, { "docid": "0aa8a611e7ea7934e52a1cb2cd46a579", "text": "The software defined networking (SDN) paradigm promises to dramatically simplify network configuration and resource management. Such features are extremely valuable to network operators and therefore, the industrial (besides the academic) research and development community is paying increasing attention to SDN. Although wireless equipment manufacturers are increasing their involvement in SDN-related activities, to date there is not a clear and comprehensive understanding of what are the opportunities offered by SDN in most common networking scenarios involving wireless infrastructureless communications and how SDN concepts should be adapted to suit the characteristics of wireless and mobile communications. This paper is a first attempt to fill this gap as it aims at analyzing how SDN can be beneficial in wireless infrastructureless networking environments with special emphasis on wireless personal area networks (WPAN). Furthermore, a possible approach (called SDWN) for such environments is presented and some design guidelines are provided.", "title": "" }, { "docid": "83c1d0b0a1edc48ccc051b8848e6703e", "text": "Recently, hashing video contents for fast retrieval has received increasing attention due to the enormous growth of online videos. As the extension of image hashing techniques, traditional video hashing methods mainly focus on seeking the appropriate video features but pay little attention to how the video-specific features can be leveraged to achieve optimal binarization. In this paper, an end-to-end hashing framework, namely Unsupervised Deep Video Hashing (UDVH), is proposed, where feature extraction, balanced code learning and hash function learning are integrated and optimized in a self-taught manner. Particularly, distinguished from previous work, our framework enjoys two novelties: 1) an unsupervised hashing method that integrates the feature clustering and feature binarization, enabling the neighborhood structure to be preserved in the binary space; 2) a smart rotation applied to the video-specific features that are widely spread in the low-dimensional space such that the variance of dimensions can be balanced, thus generating more effective hash codes. Extensive experiments have been performed on two realworld datasets and the results demonstrate its superiority, compared to the state-of-the-art video hashing methods. To bootstrap further developments, the source code will be made publically available.", "title": "" }, { "docid": "7f5c0caabf14eeebdc412d3f31939bb4", "text": "Human actions comprise of joint motion of articulated body parts or “gestures”. Human skeleton is intuitively represented as a sparse graph with joints as nodes and natural connections between them as edges. Graph convolutional networks have been used to recognize actions from skeletal videos. We introduce a part-based graph convolutional network (PB-GCN) for this task, inspired by Deformable Part-based Models (DPMs). We divide the skeleton graph into four subgraphs with joints shared across them and learn a recognition model using a part-based graph convolutional network. We show that such a model improves performance of recognition, compared to a model using entire skeleton graph. Instead of using 3D joint coordinates as node features, we show that using relative coordinates and temporal displacements boosts performance. 
Our model achieves state-of-the-art performance on two challenging benchmark datasets NTURGB+D and HDM05, for skeletal action recognition.", "title": "" }, { "docid": "f34e0d226da243a2752bb65c0174f0c9", "text": "We used echo state networks, a subclass of recurrent neural networks, to predict stock prices of the S&P 500. Our network outperformed a Kalman filter, predicting more of the higher frequency fluctuations in stock price. The Challenge of Time Series Prediction Learning from past history is a fudamentality ill-posed. A model may fit past data well but not perform well when presented with new inputs. With recurrent neural networks (RNNs), we leverage the modeling abilities of neural networks (NNs) for time series forecastings. Feedforward NNs have done well in classification tasks such as handwriting recognition, however in dynamical environments, we need techniques that account for history. In RNNs, signals passing through recurrent connections constitute an effective memory for the network, which can then use information in memory to better predict future time series values. Unfortunately, RNNs are difficult to train. Traditional techniques used with feedforward NNs such as backpropagation fail to yield acceptable performance. However, subsets of RNNs that are more amenable to training have been developed in the emerging field known as reservoir computing. In reservoir computing, the recurrent connections of the network are viewed as a fixed reservoir used to map inputs into a high dimensional, dynamical space–a similar idea to the support vector machine. With a sufficiently high dimensional space, a simple linear decode can be used to approximate any function varying with time. Two reservoir networks known as Echo State Networks (ESNs) and Liquid State Machines (LSMs) have met with success in modeling nonlinear dynamical systems [2, 4]. We focus on the former, ESN, in this project and use it to predict stock prices and compare its performance to a Kalman filter. In an ESN, only the output weights are trained (see Figure 1). Echo State Network Implementation The state vector, x(t), of the network is governed by x(t+ 1) = f ( W u(t) +Wx(t) +W y(t) ) , (1) where f(·) = tanh(·), W in describes the weights connecting the inputs to the network, u(t) is the input vector, W describes the recurrent weights, W fb describes the feedback weights connecting the outputs back to the network, and y(t) are the outputs. The output y(t) is governed by y(t) = W z(t), where z(t) = [x(t),u(t)] is the extended state. By including the input vector, the extended state allows the network to use a linear combination of the inputs in addition to the state to form the output. ESN creation follows the procedure outlined in [3]. Briefly, 1. Initialize network of N reservoir units with random W , W , and W .", "title": "" }, { "docid": "b99b9f80b4f0ca4a8d42132af545be76", "text": "By: Catherine L. Anderson Decision, Operations, and Information Technologies Department Robert H. Smith School of Business University of Maryland Van Munching Hall College Park, MD 20742-1815 U.S.A. Catherine_Anderson@rhsmith.umd.edu Ritu Agarwal Center for Health Information and Decision Systems University of Maryland 4327 Van Munching Hall College Park, MD 20742-1815 U.S.A. ragarwal@rhsmith.umd.edu", "title": "" }, { "docid": "804b320c6f5b07f7f4d7c5be29c572e9", "text": "Softmax is the most commonly used output function for multiclass problems and is widely used in areas such as vision, natural language processing, and recommendation. 
A softmax model has linear costs in the number of classes which makes it too expensive for many real-world problems. A common approach to speed up training involves sampling only some of the classes at each training step. It is known that this method is biased and that the bias increases the more the sampling distribution deviates from the output distribution. Nevertheless, almost all recent work uses simple sampling distributions that require a large sample size to mitigate the bias. In this work, we propose a new class of kernel based sampling methods and develop an efficient sampling algorithm. Kernel based sampling adapts to the model as it is trained, thus resulting in low bias. It can also be easily applied to many models because it relies only on the model’s last hidden layer. We empirically study the trade-off of bias, sampling distribution and sample size and show that kernel based sampling results in low bias with few samples.", "title": "" }, { "docid": "eae0f8a921b301e52c822121de6c6b58", "text": "Recent work has made significant progress in improving spatial resolution for pixelwise labeling with Fully Convolutional Network (FCN) framework by employing Dilated/Atrous convolution, utilizing multi-scale features and refining boundaries. In this paper, we explore the impact of global contextual information in semantic segmentation by introducing the Context Encoding Module, which captures the semantic context of scenes and selectively highlights class-dependent featuremaps. The proposed Context Encoding Module significantly improves semantic segmentation results with only marginal extra computation cost over FCN. Our approach has achieved new state-of-the-art results 51.7% mIoU on PASCAL-Context, 85.9% mIoU on PASCAL VOC 2012. Our single model achieves a final score of 0.5567 on ADE20K test set, which surpasses the winning entry of COCO-Place Challenge 2017. In addition, we also explore how the Context Encoding Module can improve the feature representation of relatively shallow networks for the image classification on CIFAR-10 dataset. Our 14 layer network has achieved an error rate of 3.45%, which is comparable with state-of-the-art approaches with over 10× more layers. The source code for the complete system are publicly available1.", "title": "" }, { "docid": "9b2ea0917b96987f52a8595df46caeaf", "text": "A low-complexity metallic tapered slot antenna (TSA) array for millimeter-wave multibeam massive multiple-input multiple-output communication is proposed in this paper. Good beamforming performance can be achieved by the developed antenna array because the element spacing can easily meet the requirement of half-wavelength in the H-plane. The antenna element is fed by a substrate-integrated waveguide, which can be directly integrated with the millimeter-wave circuits. The proposed TSA is fabricated and measured. Measured results show that the reflection coefficient is lower than −15 dB Voltage Standing Wave Ratio ((VSWR) ≤ 1.45) within the frequency range from 22.5 to 32 GHz, which covers the 24.25–27.5-GHz band proposed by International Telecommunications Union (ITU) and the 27.5–28.35-GHz band proposed by Federal Communications Commission (FCC) for 5G. The gain of the antenna element varies from 8.2 to 9.6 dBi over the frequency range of 24–32 GHz. The simulated and measured results also illustrate good radiation patterns across the wide frequency band (24–32 GHz). 
A $1\\times 4$ H-plane array integrated with the multichannel millimeter-wave transceivers on one PCB is demonstrated and excellent performance is achieved.", "title": "" }, { "docid": "5054ad32c33dc2650c1dcee640961cd5", "text": "Benchmarks have played a vital role in the advancement of visual object recognition and other fields of computer vision (LeCun et al., 1998; Deng et al., 2009; ). The challenges posed by these standard datasets have helped identify and overcome the shortcomings of existing approaches, and have led to great advances of the state of the art. Even the recent massive increase of interest in deep learning methods can be attributed to their success in difficult benchmarks such as ImageNet (Krizhevsky et al., 2012; LeCun et al., 2015). Neuromorphic vision uses silicon retina sensors such as the dynamic vision sensor (DVS; Lichtsteiner et al., 2008). These sensors and their DAVIS (Dynamic and Activepixel Vision Sensor) and ATIS (Asynchronous Time-based Image Sensor) derivatives (Brandli et al., 2014; Posch et al., 2014) are inspired by biological vision by generating streams of asynchronous events indicating local log-intensity brightness changes. They thereby greatly reduce the amount of data to be processed, and their dynamic nature makes them a good fit for domains such as optical flow, object tracking, action recognition, or dynamic scene understanding. Compared to classical computer vision, neuromorphic vision is a younger and much smaller field of research, and lacks benchmarks, which impedes the progress of the field. To address this we introduce the largest event-based vision benchmark dataset published to date, hoping to satisfy a growing demand and stimulate challenges for the community. In particular, the availability of such benchmarks should help the development of algorithms processing event-based vision input, allowing a direct fair comparison of different approaches. We have explicitly chosen mostly dynamic vision tasks such as action recognition or tracking, which could benefit from the strengths of neuromorphic vision sensors, although algorithms that exploit these features are largely missing. A major reason for the lack of benchmarks is that currently neuromorphic vision sensors are only available as R&D prototypes. Nonetheless, there are several datasets already available; see Tan et al. (2015) for an informative review. Unlabeled DVS data was made available around 2007 in the jAER project1 and was used for development of spike timing-based unsupervised feature learning e.g., in Bichler et al. (2012). The first labeled and published event-based neuromorphic vision sensor benchmarks were created from the MNIST digit recognition dataset by jiggling the image on the screen (see Serrano-Gotarredona and Linares-Barranco, 2015 for an informative history) and later to reduce frame artifacts by jiggling the camera view with a pan-tilt unit (Orchard et al., 2015). These datasets automated the scene movement necessary to generate DVS output from the static images, and will be an important step forward for evaluating neuromorphic object recognition systems such as spiking deep networks (Pérez-Carrasco et al., 2013; O’Connor et al., 2013; Cao et al., 2014; Diehl et al., 2015), which so far have been tested mostly on static image datasets converted", "title": "" }, { "docid": "3169a294b91fffeea4479fb3c1baa6eb", "text": "An ultra-wideband (UWB) compact slot antenna with a directional radiation pattern is presented in this communication. 
The concept is based on dielectric-loaded multi-element slot antennas. Wide-bandwidth operation is achieved using a driven wide slot antenna, fed via an off-centered microstrip line capable of creating a fictitious short along the slot, and a number of parasitic antenna elements. The proposed slot antenna uses a graded-index superstrate with tapered dielectric constants, from high index to low index, in order to further improve the bandwidth and achieve a directional radiation pattern. The superstrate dimensions are carefully chosen so that a dielectric resonator mode is excited to provide radiation at the lowest frequency. A sensitivity study is carried out to optimize the geometric parameters of the slot antennas and the graded-index superstrate in order to achieve the maximum bandwidth as well as a unidirectional and frequency-invariant radiation pattern. Through this optimization, a compact antenna whose dimensions are 0.27 λ×0.2 λ×0.068 λ at the lowest frequency of operation is designed, fabricated, and tested, showing a VSWR value lower than 2.5 across a 2.9:1 frequency range.", "title": "" } ]
scidocsrr
01b11cd2f443e7f2d30ecae5dbc77cf4
From Question to Text: Question-Oriented Feature Attention for Answer Selection
[ { "docid": "599d814fd3b3a758f3b2459b74aeb92c", "text": "Relation classification is a crucial ingredient in numerous information extraction systems seeking to mine structured facts from text. We propose a novel convolutional neural network architecture for this task, relying on two levels of attention in order to better discern patterns in heterogeneous contexts. This architecture enables endto-end learning from task-specific labeled data, forgoing the need for external knowledge such as explicit dependency structures. Experiments show that our model outperforms previous state-of-the-art methods, including those relying on much richer forms of prior knowledge.", "title": "" }, { "docid": "97838cc3eb7b31d49db6134f8fc81c84", "text": "We study the problem of semi-supervised question answering—-utilizing unlabeled text to boost the performance of question answering models. We propose a novel training framework, the Generative Domain-Adaptive Nets. In this framework, we train a generative model to generate questions based on the unlabeled text, and combine model-generated questions with human-generated questions for training question answering models. We develop novel domain adaptation algorithms, based on reinforcement learning, to alleviate the discrepancy between the modelgenerated data distribution and the humangenerated data distribution. Experiments show that our proposed framework obtains substantial improvement from unlabeled text.", "title": "" } ]
[ { "docid": "60718ad958d65eb60a520d516f1dd4ea", "text": "With the advent of the Internet, more and more public universities in Malaysia are putting in effort to introduce e-learning in their respective universities. Using a structured questionnaire derived from the literature, data was collected from 250 undergraduate students from a public university in Penang, Malaysia. Data was analyzed using AMOS version 16. The results of the structural equation model indicated that service quality (β = 0.20, p < 0.01), information quality (β = 0.37, p < 0.01) and system quality (β = 0.20, p < 0.01) were positively related to user satisfaction explaining a total of 45% variance. The second regression analysis was to examine the impact of user satisfaction on continuance intention. The results showed that satisfaction (β = 0.31, p < 0.01), system quality (β = 0.18, p < 0.01) and service quality (β = 0.30, p < 0.01) were positively related to continuance intention explaining 44% of the variance. Implications from these findings to e-learning system developers and implementers were further elaborated.", "title": "" }, { "docid": "8404b6b5abcbb631398898e81beabea1", "text": "As a result of agricultural intensification, more food is produced today than needed to feed the entire world population and at prices that have never been so low. Yet despite this success and the impact of globalization and increasing world trade in agriculture, there remain large, persistent and, in some cases, worsening spatial differences in the ability of societies to both feed themselves and protect the long-term productive capacity of their natural resources. This paper explores these differences and develops a countryxfarming systems typology for exploring the linkages between human needs, agriculture and the environment, and for assessing options for addressing future food security, land use and ecosystem service challenges facing different societies around the world.", "title": "" }, { "docid": "59d3a3ec644d8554cbb2a5ac75a329f8", "text": "Relational learning can be described as the task of learning first-order logic rules from examples. It has enabled a number of new machine learning applications, e.g. graph mining and link analysis. Inductive Logic Programming (ILP) performs relational learning either directly by manipulating first-order rules or through propositionalization, which translates the relational task into an attribute-value learning task by representing subsets of relations as features. In this paper, we introduce a fast method and system for relational learning based on a novel propositionalization called Bottom Clause Propositionalization (BCP). Bottom clauses are boundaries in the hypothesis search space used by ILP systems Progol and Aleph. Bottom clauses carry semantic meaning and can be mapped directly onto numerical vectors, simplifying the feature extraction process. We have integrated BCP with a well-known neural-symbolic system, C-IL2P, to perform learning from numerical vectors. C-IL2P uses background knowledge in the form of propositional logic programs to build a neural network. The integrated system, which we call CILP++, handles first-order logic knowledge and is available for download from Sourceforge. We have evaluated CILP++ on seven ILP datasets, comparing results with Aleph and a well-known propositionalization method, RSD. 
The results show that CILP++ can achieve accuracy comparable to Aleph, while being generally faster, BCP achieved statistically significant improvement in accuracy in comparison with RSD when running with a neural network, but BCP and RSD perform similarly when running with C4.5. We have also extended CILP++ to include a statistical feature selection method, mRMR, with preliminary results indicating that a reduction of more than 90 % of features can be achieved with a small loss of accuracy.", "title": "" }, { "docid": "824b0e8a66699965899169738df7caa9", "text": "Much recent progress in Vision-to-Language (V2L) problems has been achieved through a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This approach does not explicitly represent high-level semantic concepts, but rather seeks to progress directly from image features to text. In this paper we investigate whether this direct approach succeeds due to, or despite, the fact that it avoids the explicit representation of high-level information. We propose a method of incorporating high-level concepts into the successful CNN-RNN approach, and show that it achieves a significant improvement on the state-of-the-art in both image captioning and visual question answering. We also show that the same mechanism can be used to introduce external semantic information and that doing so further improves performance. We achieve the best reported results on both image captioning and VQA on several benchmark datasets, and provide an analysis of the value of explicit high-level concepts in V2L problems.", "title": "" }, { "docid": "9320c30f963db9eb99fe429278ef05fa", "text": "A small DC-link capacitor based drive is presented in this paper. The drive shows negative impedance instability at operating points with high power load. A phase portrait is presented for input filter states which exhibit a limit cycle. When the drive is operated with unbalanced input supply voltages, the rectified voltage contains all even harmonics frequencies. However, it is shown that the dominant harmonic component of the DC-link voltage is decided by the limit cycle instead of the input filter resonance frequency. An active damping technique is used to stabilize the operating point. The responses of the DC-link voltage with and without active damping are presented. The low order harmonics components are reduced with the increase in the gain of the active damping term. The experimental results for the DC-link voltage, input phase currents, and machine phase current are presented.", "title": "" }, { "docid": "3b29c8b2d3f33f92d8a449f6bfb65614", "text": "With the development of cloud computing and mobility, mobile cloud computing has emerged and become a focus of research. By the means of on-demand self-service and extendibility, it can offer the infrastructure, platform, and software services in a cloud to mobile users through the mobile network. Security and privacy are the key issues for mobile cloud computing applications, and still face some enormous challenges. In order to facilitate this emerging domain, we firstly in brief review the advantages and system model of mobile cloud computing, and then pay attention to the security and privacy in the mobile cloud computing. 
By deeply analyzing the security and privacy issues from three aspects: mobile terminal, mobile network and cloud, we give the current security and privacy approaches.", "title": "" }, { "docid": "0f1fab536992282dd1027d542c2c20e5", "text": "Traditional barcode localization methods based on image analysis are sensitive to the types of the target symbols and the environment they are applied to. To develop intelligent barcode reading system used in industry, a real-time region based barcode segmentation approach which is available for various types of linear and two-dimensional symbols is proposed. The two-stage approach consists of the target region connection part by orientation detection and morphological operation, and the target region detection part by efficient contour based connected component labeling. The result of target location, which is represented by the coordinates of the upper left corner and the lower right corner of its bounding box, is robust to the barcode orientation, noise, and uneven environment illumination. The segmentation method is proved to work well in the real-time barcode reading system. In the experiments, 100% of the DATAMATRIX codes, 99.3% of the Code39 symbols and 98.7% PDF417 codes are corrected segmented.", "title": "" }, { "docid": "2574576033f9cb0d3d65119d077cf9cf", "text": "In this paper, we introduce a simple, yet powerful pipeline for medical image segmentation that combines Fully Convolutional Networks (FCNs) with Fully Convolutional Residual Networks (FC-ResNets). We propose and examine a design that takes particular advantage of recent advances in the understanding of both Convolutional Neural Networks as well as ResNets. Our approach focuses upon the importance of a trainable pre-processing when using FC-ResNets and we show that a low-capacity FCN model can serve as a pre-processor to normalize medical input data. In our image segmentation pipeline, we use FCNs to obtain normalized images, which are then iteratively refined by means of a FC-ResNet to generate a segmentation prediction. As in other fully convolutional approaches, our pipeline can be used off-the-shelf on different image modalities. We show that using this pipeline, we exhibit state-of-the-art performance on the challenging Electron Microscopy benchmark, when compared to other 2D methods. We improve segmentation results on CT images of liver lesions, when contrasting with standard FCN methods. Moreover, when applying our 2D pipeline on a challenging 3D MRI prostate segmentation challenge we reach results that are competitive even when compared to 3D methods. The obtained results illustrate the strong potential and versatility of the pipeline by achieving accurate segmentations on a variety of image modalities and different anatomical regions.", "title": "" }, { "docid": "82e5d8a3ee664f36afec3aa1b2e976f9", "text": "Real-world tasks are often highly structured. Hierarchical reinforcement learning (HRL) has attracted research interest as an approach for leveraging the hierarchical structure of a given task in reinforcement learning (RL). However, identifying the hierarchical policy structure that enhances the performance of RL is not a trivial task. In this paper, we propose an HRL method that learns a latent variable of a hierarchical policy using mutual information maximization. Our approach can be interpreted as a way to learn a discrete and latent representation of the state-action space. 
To learn option policies that correspond to modes of the advantage function, we introduce advantage-weighted importance sampling. In our HRL method, the gating policy learns to select option policies based on an option-value function, and these option policies are optimized based on the deterministic policy gradient method. This framework is derived by leveraging the analogy between a monolithic policy in standard RL and a hierarchical policy in HRL by using a deterministic option policy. Experimental results indicate that our HRL approach can learn a diversity of options and that it can enhance the performance of RL in continuous control tasks.", "title": "" }, { "docid": "4c39b9a4e9822fb6d0a000c55d71faa5", "text": "Suicidal decapitation is seldom encountered in forensic medicine practice. This study reports the analysis of a suicide committed by a 31-year-old man with a self-fabricated guillotine. The construction of the guillotine was very interesting and sophisticated. The guillotine-like blade with additional weight was placed in a large metal frame. The movement of the blade was controlled by the frame rails. The steel blade was triggered by a tensioned rubber band after releasing the safety catch. The cause of death was immediate exsanguination after complete severance of the neck. The suicide motive was most likely emotional distress after the death of his father. In medico-legal literature, there has been only one similar case of suicidal complete decapitation by a guillotine described.", "title": "" }, { "docid": "5ca6f2aaa70a7c7593e68f25999697d8", "text": "Traditional text detection methods mostly focus on quadrangle text. In this study we propose a novel method named sliding line point regression (SLPR) in order to detect arbitrary-shape text in natural scene. SLPR regresses multiple points on the edge of text line and then utilizes these points to sketch the outlines of the text. The proposed SLPR can be adapted to many object detection architectures such as Faster R-CNN and R-FCN. Specifically, we first generate the smallest rectangular box including the text with region proposal network (RPN), then isometrically regress the points on the edge of text by using the vertically and horizontally sliding lines. To make full use of information and reduce redundancy, we calculate x-coordinate or y-coordinate of target point by the rectangular box position, and just regress the remaining y-coordinate or x-coordinate. Accordingly we can not only reduce the parameters of system, but also restrain the points which will generate more regular polygon. Our approach achieved competitive results on traditional ICDAR2015 Incidental Scene Text benchmark and curve text detection dataset CTW1500.", "title": "" }, { "docid": "e61095bf820e170c8c8d6f2212142962", "text": "Today, even low-cost FPGAs provide far more computing power than DSPs. Current FPGAs have dedicated multipliers and even DSP multiply/accumulate (MAC) blocks that enable signals to be processed with clock speeds in excess of 550 MHz. Until now, however, these capabilities were rarely needed in audio signal processing. A serial implementation of an audio algorithm working in the kilohertz range uses exactly the same resources required for processing signals in the three-digit megahertz range. Consequently, programmable logic components such as PLDs or FPGAs are rarely used for processing low-frequency signals. 
After all, the parallel processing of mathematical operations in hardware is of no benefit when compared to an implementation based on classical DSPs; the sampling rates are so low that most serial DSP implementations are more than adequate. In fact, audio applications are characterized by such a high number of multiplications that they previously could", "title": "" }, { "docid": "d8df668e4f80c356165d816ee454ab5f", "text": "Despite the advances of the electronic technologies in e-learning, a consolidated evaluation methodology for e-learning applications does not yet exist. The goal of e-learning is to offer the users the possibility to become skillful and acquire knowledge on a new domain. The evaluation of educational software must consider its pedagogic effectiveness as well as its usability. The design of its interface should take into account the way students learn and also provide good usability so that student's interactions with the software are as natural and intuitive as possible. In this paper, we present the results obtained from a first phase of observation and analysis of the interactions of people with e-learning applications. The aim is to provide a methodology for evaluating such applications.", "title": "" }, { "docid": "9e67148718b994c60d9b8fce1b18ad17", "text": "Images with high resolution are desirable in many applications such as medical imaging, video surveillance, astronomy etc. In medical imaging, images are obtained for medical investigative purposes and for providing information about the anatomy, the physiologic and metabolic activities of the volume below the skin. Medical imaging is an important diagnosis instrument to determine the presence of certain diseases. Therefore increasing the image resolution should significantly improve the diagnosis ability for corrective treatment. Furthermore, a better resolution may substantially improve automatic detection and image segmentation results. The arrival of digital medical imaging technologies such as Computerized Tomography (CT), Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI) etc. has revolutionized modern medicine. Despite the advances in acquisition technology and the performance of optimized reconstruction algorithms over the two last decades, it is not easy to obtain an image at a desired resolution due to imaging environments, the limitations of physical imaging systems as well as quality-limiting factors such as Noise and Blur. A solution to this problem is the use of Super Resolution (SR) techniques which can be used for processing of such images. Various methods have been described over the years to generate and form algorithms which can be used for building on this concept of Super resolution. This paper details few of the types of medical imaginary, various techniques used to perform super resolution and the current trends which are being followed for the implementation of this concept.", "title": "" }, { "docid": "1bc91b4547481a81c2963dd117a96370", "text": "Breast cancer is one of the main causes of women mortality worldwide. Ultrasonography (USG) is other modalities than mammography that capable to support radiologists in diagnosing breast cancer. However, the diagnosis may come with different interpretation depending on the radiologists experience. Therefore, Computer-Aided Diagnosis (CAD) is developed as a tool for radiologist's second opinion. CAD is built based on digital image processing of ultrasound (US) images which consists of several stages. 
Lesion segmentation is an important step in CAD system because it contains many important features for classification process related to lesion characteristics. This study provides a performance analysis and comparison of image segmentation for breast USG images. In this paper, several methods are presented such as a comprehensive comparison of adaptive thresholding, fuzzy C-Means (FCM), Fast Global Minimization for Active Contour (FGMAC) and Active Contours Without Edges (ACWE). The performance of these methods are evaluated with evaluation metrics Dice coefficient, Jaccard coefficient, FPR, FNR, Hausdorff distance, PSNR and MSSD parameters. Morphological operation is able to increase the performance of each segmentation methods. Overall, ACWE with morphological operation gives the best performance compare to the other methods with the similarity level of more than 90%.", "title": "" }, { "docid": "21194ad1a912fbf790970fb1dd9630d4", "text": "Lobewise analysis of the pulmonary parenchyma is of clinical relevance for diagnosing and monitoring pathologies. In this work, a fully automatic lobe segmentation approach is presented, which is based on a previously proposed watershed transformation approach. The proposed extension explicitly considers the pulmonary fissures by including them in the cost image for the watershed segmentation. The fissure structures are found through a tailored feature analysis of the Hessian matrix. The method is evaluated using 42 data sets, and a comparison with manual segmentations yields an average volumetric agreement of 96.8%. In comparison to the previously proposed approach, this method increases segmentation accuracy where the fissures are visible.", "title": "" }, { "docid": "0de4fb7e390aab6ebf446bc07118c1d9", "text": "When using a mathematical formula for search (query-by-expression), the suitability of retrieved formulae often depends more upon symbol identities and layout than deep mathematical semantics. Using a Symbol Layout Tree representation for formula appearance, we propose the Maximum Subtree Similarity (MSS) for ranking formulae based upon the subexpression whose symbols and layout best match a query formula. Because MSS is too expensive to apply against a complete collection, the Tangent-3 system first retrieves expressions using an inverted index over symbol pair relationships, ranking hits using the Dice coefficient; the top-k formulae are then re-ranked by MSS. Tangent-3 obtains state-of-the-art performance on the NTCIR-11 Wikipedia formula retrieval benchmark, and is efficient in terms of both space and time. Retrieval systems for other graphical forms, including chemical diagrams, flowcharts, figures, and tables, may benefit from adopting this approach.", "title": "" }, { "docid": "ff386772e3c279c54e1970c6e53682e8", "text": "It has been well established that most operating system crashes are due to bugs in device drivers. Because drivers are normally linked into the kernel address space, a buggy driver can wipe out kernel tables and bring the system crashing to a grinding halt. We have greatly mitigated this problem by reducing the kernel to an absolute minimum and running each driver as a separate, unprivileged user-mode process. In addition, we implemented a POSIX-conformant operating system, MINIX 3, as multiple user-mode servers. In this design, a server or driver failure no longer is fatal and does not require rebooting the computer. 
This paper discusses how we designed and implemented the system, which problems we encountered, and how we solved these problems. We also discuss the performance effects of our changes and evaluate how our multiserver design improves operating system dependability over monolithic designs.", "title": "" }, { "docid": "d2d1f14ca3370d9d87f4d38dd95a7c3b", "text": "Dissidents, journalists, and others require technical means to protect their privacy in the face of compelled access to their digital devices (smartphones, laptops, tablets, etc.). For example, authorities increasingly force disclosure of all secrets, including passwords, to search devices upon national border crossings. We therefore present the design, implementation, and evaluation of a new system to help victims of compelled searches. Our system, called BurnBox, provides self-revocable encryption: the user can temporarily disable their access to specific files stored remotely, without revealing which files were revoked during compelled searches, even if the adversary also compromises the cloud storage service. They can later restore access. We formalize the threat model and provide a construction that uses an erasable index, secure erasure of keys, and standard cryptographic tools in order to provide security supported by our formal analysis. We report on a prototype implementation, which showcases the practicality of BurnBox.", "title": "" } ]
scidocsrr
30a3d0b1d1884e3b6dcfde192afab4af
Visual Sentiment Prediction with Deep Convolutional Neural Networks
[ { "docid": "fcbfa224b2708839e39295f24f4405e1", "text": "A dataset is imbalanced if the classification categories are not approximately equally represented. Recent years brought increased interest in applying machine learning techniques to difficult \"real-world\" problems, many of which are characterized by imbalanced data. Additionally the distribution of the testing data may differ from that of the training data, and the true misclassification costs may be unknown at learning time. Predictive accuracy, a popular choice for evaluating performance of a classifier, might not be appropriate when the data is imbalanced andlor the costs of different errors vary markedly. In this Chapter, we discuss some of the sampling techniques used for balancing the datasets, and the performance measures more appropriate for mining imbalanced datasets.", "title": "" } ]
[ { "docid": "9948ebbd2253021e3af53534619c5094", "text": "This paper presents a novel method to simultaneously estimate the clothed and naked 3D shapes of a person. The method needs only a single photograph of a person wearing clothing. Firstly, we learn a deformable model of human clothed body shapes from a database. Then, given an input image, the deformable model is initialized with a few user-specified 2D joints and contours of the person. And the correspondence between 3D shape and 2D contours is established automatically. Finally, we optimize the parameters of the deformable model in an iterative way, and then obtain the clothed and naked 3D shapes of the person simultaneously. The experimental results on real images demonstrate the effectiveness of our method.", "title": "" }, { "docid": "0d733d7f0782bfaf245bf344a46b58b8", "text": "Smart Cities rely on the use of ICTs for a more efficient and intelligent use of resources, whilst improving citizens' quality of life and reducing the environmental footprint. As far as the livability of cities is concerned, traffic is one of the most frequent and complex factors directly affecting citizens. Particularly, drivers in search of a vacant parking spot are a non-negligible source of atmospheric and acoustic pollution. Although some cities have installed sensor-based vacant parking spot detectors in some neighbourhoods, the cost of this approach makes it unfeasible at large scale. As an approach to implement a sustainable solution to the vacant parking spot detection problem in urban environments, this work advocates fusing the information from small-scale sensor-based detectors with that obtained from exploiting the widely-deployed video surveillance camera networks. In particular, this paper focuses on how video analytics can be exploited as a prior step towards Smart City solutions based on data fusion. Through a set of experiments carefully planned to replicate a real-world scenario, the vacant parking spot detection success rate of the proposed system is evaluated through a critical comparison of local and global visual features (either alone or fused at feature level) and different classifier systems applied to the task. Furthermore, the system is tested under setup scenarios of different complexities, and experimental results show that while local features are best when training with small amounts of highly accurate on-site data, they are outperformed by their global counterparts when training with more samples from an external vehicle database.", "title": "" }, { "docid": "31bd49d9287ceaead298c4543c5b3c53", "text": "In this paper, an experimental self-teaching system capable of superimposing audio-visual information to support the process of learning to play the guitar is proposed. Different learning scenarios have been carefully designed according to diverse levels of experience and understanding and are presented in a simple way. Learners can select between representative numbers of scenarios and physically interact with the audio-visual information in a natural way. Audio-visual information can be placed anywhere on a physical space and multiple sound sources can be mixed to experiment with compositions and compilations. To assess the effectiveness of the system some initial evaluation is conducted. Finally conclusions and future work of the system are summarized. 
Categories: augmented reality, information visualisation, human-computer interaction, learning.", "title": "" }, { "docid": "6f4d3ab2b3d027fdbae1b7381409265c", "text": "BACKGROUND\nIn 1987 individual states in the USA were allowed to raise speed limits on rural freeways from 55 to 65 mph. Analyses of the impact of the increased speed limits on highway safety have produced conflicting results.\n\n\nOBJECTIVE\nTo determine if the 1987 speed limit increase on Washington State's rural freeways affected the incidence of fatal crashes or all crashes on rural freeways, or affected average vehicle speeds or speed variance.\n\n\nDESIGN\nAn ecological study of crashes and vehicle speeds on Washington State freeways from 1974 through 1994.\n\n\nRESULTS\nThe incidence of fatal crashes more than doubled after 1987, compared with what would have been expected if there had been no speed limit increase, rate ratio 2.1 (95% confidence interval (CI), 1.6-2.7). This resulted in an excess of 26.4 deaths per year on rural freeways in Washington State. The total crash rate did not change substantially, rate ratio 1.1 (95% CI, 1.0-1.3). Average vehicle speed increased by 5.5 mph. Speed variance was not affected by the speed limit increase.\n\n\nCONCLUSIONS\nThe speed limit increase was associated with a higher fatal crash rate and more deaths on freeways in Washington State.", "title": "" }, { "docid": "d7573e7b3aac75b49132076ce9fc83e0", "text": "The prevalent use of social media produces mountains of unlabeled, high-dimensional data. Feature selection has been shown effective in dealing with high-dimensional data for efficient data mining. Feature selection for unlabeled data remains a challenging task due to the absence of label information by which the feature relevance can be assessed. The unique characteristics of social media data further complicate the already challenging problem of unsupervised feature selection, (e.g., part of social media data is linked, which makes invalid the independent and identically distributed assumption), bringing about new challenges to traditional unsupervised feature selection algorithms. In this paper, we study the differences between social media data and traditional attribute-value data, investigate if the relations revealed in linked data can be used to help select relevant features, and propose a novel unsupervised feature selection framework, LUFS, for linked social media data. We perform experiments with real-world social media datasets to evaluate the effectiveness of the proposed framework and probe the working of its key components.", "title": "" }, { "docid": "8bb0077bf14426f02a6339dd1be5b7f2", "text": "Astrocytes are thought to play a variety of key roles in the adult brain, such as their participation in synaptic transmission, in wound healing upon brain injury, and adult neurogenesis. However, to elucidate these functions in vivo has been difficult because of the lack of astrocyte-specific gene targeting. Here we show that the inducible form of Cre (CreERT2) expressed in the locus of the astrocyte-specific glutamate transporter (GLAST) allows precisely timed gene deletion in adult astrocytes as well as radial glial cells at earlier developmental stages. Moreover, postnatal and adult neurogenesis can be targeted at different stages with high efficiency as it originates from astroglial cells. 
Taken together, this mouse line will allow dissecting the molecular pathways regulating the diverse functions of astrocytes as precursors, support cells, repair cells, and cells involved in neuronal information processing.", "title": "" }, { "docid": "413d0b457cc1b96bf65d8a3e1c98ed41", "text": "Peer-to-peer (P2P) lending is a fast growing financial technology (FinTech) trend that is displacing traditional retail banking. Studies on P2P lending have focused on predicting individual interest rates or default probabilities. However, the relationship between aggregated P2P interest rates and the general economy will be of interest to investors and borrowers as the P2P credit market matures. We show that the variation in P2P interest rates across grade types are determined by three macroeconomic latent factors formed by Canonical Correlation Analysis (CCA) — macro default, investor uncertainty, and the fundamental value of the market. However, the variation in P2P interest rates across term types cannot be explained by the general economy.", "title": "" }, { "docid": "c2ed6ac38a6014db73ba81dd898edb97", "text": "The ability of personality traits to predict important life outcomes has traditionally been questioned because of the putative small effects of personality. In this article, we compare the predictive validity of personality traits with that of socioeconomic status (SES) and cognitive ability to test the relative contribution of personality traits to predictions of three critical outcomes: mortality, divorce, and occupational attainment. Only evidence from prospective longitudinal studies was considered. In addition, an attempt was made to limit the review to studies that controlled for important background factors. Results showed that the magnitude of the effects of personality traits on mortality, divorce, and occupational attainment was indistinguishable from the effects of SES and cognitive ability on these outcomes. These results demonstrate the influence of personality traits on important life outcomes, highlight the need to more routinely incorporate measures of personality into quality of life surveys, and encourage further research about the developmental origins of personality traits and the processes by which these traits influence diverse life outcomes.", "title": "" }, { "docid": "ddacac895fb99d57f2235f963f650e6c", "text": "Web applications evolved in the last decades from simple scripts to multi-functional applications. Such complex web applications are prone to different types of security vulnerabilities that lead to data leakage or a compromise of the underlying web server. So called secondorder vulnerabilities occur when an attack payload is first stored by the application on the web server and then later on used in a security-critical operation. In this paper, we introduce the first automated static code analysis approach to detect second-order vulnerabilities and related multi-step exploits in web applications. By analyzing reads and writes to memory locations of the web server, we are able to identify unsanitized data flows by connecting input and output points of data in persistent data stores such as databases or session data. As a result, we identified 159 second-order vulnerabilities in six popular web applications such as the conference management systems HotCRP and OpenConf. 
Moreover, the analysis of web applications evaluated in related work revealed that we are able to detect several critical vulnerabilities previously missed.", "title": "" }, { "docid": "498b9aef490e19842735f32410e809df", "text": "Human activity recognition using wearable sensors is an area of interest for various domains like healthcare, surveillance etc. Various approaches have been used to solve the problem of activity recognition. Recently deep learning methods like RNNs and LSTMs have been used for this task. But these architectures are unable to capture long term dependencies in time series data. In this work, we propose to use the Temporal Convolutional Network architecture for recognizing the activities from the sensor data obtained from a smartphone. Due to the potential of the architecture to take variable length input sequences along with significantly better ability to capture long term dependencies, it performs better than other deep learning methods. The results of the proposed methods shows an improved performance over the existing methods.", "title": "" }, { "docid": "b38adfeec4e495fdb0fd4cf98b7259a6", "text": "Task switch cost (the deficit of performing a new task vs. a repeated task) has been partly attributed to priming of the repeated task, as well as to inappropriate preparation for the switched task. In the present study, we examined the nature of the priming effect by repeating stimulus-related processes, such as stimulus encoding or stimulus identification. We adopted a partial-overlap task-switching paradigm, in which only stimulus-related processes should be repeated or switched. The switch cost in this partial-overlap condition was smaller than the cost in the full-overlap condition, in which the task overlap involved more than stimulus processing, indicating that priming of a stimulus is a component of a switch cost. The switch cost in the partial-overlap condition, however, disappeared eventually with a long interval between two tasks, whereas the cost in the full-overlap condition remained significant. Moreover, the switch cost, in general, did not interact with foreknowledge, suggesting that preparation on the basis of foreknowledge may be related to processes beyond stimulus encoding. These results suggest that stimulus-related priming is automatic and short-lived and, therefore, is not a part of the persisting portion of switch cost.", "title": "" }, { "docid": "6f0b8b18689afb9b4ac7466b7898a8e8", "text": "BACKGROUND\nApproximately 60 million people in the United States live with one of four chronic conditions: heart disease, diabetes, chronic respiratory disease, and major depression. Anxiety and depression are very common comorbidities in COPD and have significant impact on patients, their families, society, and the course of the disease.\n\n\nMETHODS\nWe report the proceedings of a multidisciplinary workshop on anxiety and depression in COPD that aimed to shed light on the current understanding of these comorbidities, and outline unanswered questions and areas of future research needs.\n\n\nRESULTS\nEstimates of prevalence of anxiety and depression in COPD vary widely but are generally higher than those reported in some other advanced chronic diseases. Untreated and undetected anxiety and depressive symptoms may increase physical disability, morbidity, and health-care utilization. Several patient, physician, and system barriers contribute to the underdiagnosis of these disorders in patients with COPD. 
While few published studies demonstrate that these disorders associated with COPD respond well to appropriate pharmacologic and nonpharmacologic therapy, only a small proportion of COPD patients with these disorders receive effective treatment.\n\n\nCONCLUSION\nFuture research is needed to address the impact, early detection, and management of anxiety and depression in COPD.", "title": "" }, { "docid": "4c711149abc3af05a8e55e52eefddd97", "text": "Scanning a halftone image introduces halftone artifacts, known as Moire patterns, which significantly degrade the image quality. Printers that use amplitude modulation (AM) screening for halftone printing position dots in a periodic pattern. Therefore, frequencies related to halftoning are easily identifiable in the frequency domain. This paper proposes a method for descreening scanned color halftone images using a custom band reject filter designed to isolate and remove only the frequencies related to halftoning while leaving image edges sharp without image segmentation or edge detection. To enable hardware acceleration, the image is processed in small overlapped windows. The windows are filtered individually in the frequency domain, then pieced back together in a method that does not show blocking artifacts.", "title": "" }, { "docid": "afaa988666cc6b2790696bbb0d69ff73", "text": "Despite being one of the most popular tasks in lexical semantics, word similarity has often been limited to the English language. Other languages, even those that are widely spoken such as Spanish, do not have a reliable word similarity evaluation framework. We put forward robust methodologies for the extension of existing English datasets to other languages, both at monolingual and cross-lingual levels. We propose an automatic standardization for the construction of cross-lingual similarity datasets, and provide an evaluation, demonstrating its reliability and robustness. Based on our procedure and taking the RG-65 word similarity dataset as a reference, we release two high-quality Spanish and Farsi (Persian) monolingual datasets, and fifteen cross-lingual datasets for six languages: English, Spanish, French, German, Portuguese, and Farsi.", "title": "" }, { "docid": "bb7511f4137f487b2b8bf2f6f3f73a6a", "text": "There is extensive evidence indicating that new neurons are generated in the dentate gyrus of the adult mammalian hippocampus, a region of the brain that is important for learning and memory. However, it is not known whether these new neurons become functional, as the methods used to study adult neurogenesis are limited to fixed tissue. We use here a retroviral vector expressing green fluorescent protein that only labels dividing cells, and that can be visualized in live hippocampal slices. We report that newly generated cells in the adult mouse hippocampus have neuronal morphology and can display passive membrane properties, action potentials and functional synaptic inputs similar to those found in mature dentate granule cells. Our findings demonstrate that newly generated cells mature into functional neurons in the adult mammalian brain.", "title": "" }, { "docid": "29d9137c5fdc7e96e140f19acd6dee80", "text": "Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link prediction problem, and develop approaches to link prediction based on measures of the \"proximity\" of nodes in a network.
Experiments on large co-authorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures.", "title": "" }, { "docid": "24e78f149b2e42a5c98eb3443c023853", "text": "Cone-beam CT system has become a hot issue in current CT technique. Compared with the traditional 2D CT, cone beam CT can greatly reduce the scanning time, improve the utilization ratio of X-ray, and enhance the spatial resolution. In the article, simulation data based on the 3D Shepp-Logan Model was obtained by tracing the X-ray and applying the radial attenuation theory. FDK (Feldkamp, Davis and Kress) reconstruction algorithm was then adopted to reconstruct the 3D Shepp-Logan Model. The reconstruction results indicate that for the central image the spatial resolution can reach 8 line pairs/mm. Reconstructed images truthfully reveal the archetype.", "title": "" }, { "docid": "f3aaf555028a0c53bec688c0a8e7e95d", "text": "ABSTRACT Translating natural language questions to semantic representations such as SPARQL is a core challenge in open-domain question answering over knowledge bases (KB-QA). Existing methods rely on a clear separation between an offline training phase, where a model is learned, and an online phase where this model is deployed. Two major shortcomings of such methods are that (i) they require access to a large annotated training set that is not always readily available and (ii) they fail on questions from before-unseen domains. To overcome these limitations, this paper presents NEQA, a continuous learning paradigm for KB-QA. Offline, NEQA automatically learns templates mapping syntactic structures to semantic ones from a small number of training question-answer pairs. Once deployed, continuous learning is triggered on cases where templates are insufficient. Using a semantic similarity function between questions and by judicious invocation of non-expert user feedback, NEQA learns new templates that capture previously-unseen syntactic structures. This way, NEQA gradually extends its template repository. NEQA periodically re-trains its underlying models, allowing it to adapt to the language used after deployment. Our experiments demonstrate NEQA’s viability, with steady improvement in answering quality over time, and the ability to answer questions from new domains.", "title": "" }, { "docid": "506743f5b2c98d4a885b342584da8b69", "text": "This thesis presents a general, trainable system for object detection in static images and video sequences. The core system finds a certain class of objects in static images of completely unconstrained, cluttered scenes without using motion, tracking, or handcrafted models and without making any assumptions on the scene structure or the number of objects in the scene. The system uses a set of training data of positive and negative example images as input, transforms the pixel images to a Haar wavelet representation, and uses a support vector machine classifier to learn the difference between in-class and out-of-class patterns. To detect objects in out-of-sample images, we do a brute force search over all the subwindows in the image. This system is applied to face, people, and car detection with excellent results.
For our extensions to video sequences, we augment the core static detection system in several ways: 1) extending the representation to five frames, 2) implementing an approximation to a Kalman filter, and 3) modeling detections in an image as a density and propagating this density through time according to measured features. In addition, we present a real-time version of the system that is currently running in a DaimlerChrysler experimental vehicle. As part of this thesis, we also present a system that, instead of detecting full patterns, uses a component-based approach. We find it to be more robust to occlusions, rotations in depth, and severe lighting conditions for people detection than the full body version. We also experiment with various other representations including pixels and principal components and show results that quantify how the number of features, color, and gray-level affect performance. © Massachusetts Institute of Technology 2000", "title": "" } ]
scidocsrr
ca1f6291672f5740f5a37125c49d166a
Improving Knowledge Graph Embedding Using Simple Constraints
[ { "docid": "5b8b04f29032a6ca94815676d4c4118f", "text": "Representation learning of knowledge graphs aims to encode both entities and relations into a continuous low-dimensional vector space. Most existing methods only concentrate on learning representations with structured information located in triples, regardless of the rich information located in hierarchical types of entities, which could be collected in most knowledge graphs. In this paper, we propose a novel method named Type-embodied Knowledge Representation Learning (TKRL) to take advantages of hierarchical entity types. We suggest that entities should have multiple representations in different types. More specifically, we consider hierarchical types as projection matrices for entities, with two type encoders designed to model hierarchical structures. Meanwhile, type information is also utilized as relation-specific type constraints. We evaluate our models on two tasks including knowledge graph completion and triple classification, and further explore the performances on long-tail dataset. Experimental results show that our models significantly outperform all baselines on both tasks, especially with long-tail distribution. It indicates that our models are capable of capturing hierarchical type information which is significant when constructing representations of knowledge graphs. The source code of this paper can be obtained from https://github.com/thunlp/TKRL.", "title": "" }, { "docid": "18ad179d4817cb391ac332dcbfe13788", "text": "Many papers have been published on the knowledge base completion task in the past few years. Most of these introduce novel architectures for relation learning that are evaluated on standard datasets such as FB15k and WN18. This paper shows that the accuracy of almost all models published on the FB15k can be outperformed by an appropriately tuned baseline — our reimplementation of the DistMult model. Our findings cast doubt on the claim that the performance improvements of recent models are due to architectural changes as opposed to hyperparameter tuning or different training objectives. This should prompt future research to re-consider how the performance of models is evaluated and reported.", "title": "" } ]
[ { "docid": "6a1d534737dcbe75ff7a7ac975bcc5ec", "text": "Crime is one of the most important social problems in the country, affecting public safety, children development, and adult socioeconomic status. Understanding what factors cause higher crime is critical for policy makers in their efforts to reduce crime and increase citizens' life quality. We tackle a fundamental problem in our paper: crime rate inference at the neighborhood level. Traditional approaches have used demographics and geographical influences to estimate crime rates in a region. With the fast development of positioning technology and prevalence of mobile devices, a large amount of modern urban data have been collected and such big data can provide new perspectives for understanding crime. In this paper, we used large-scale Point-Of-Interest data and taxi flow data in the city of Chicago, IL in the USA. We observed significantly improved performance in crime rate inference compared to using traditional features. Such an improvement is consistent over multiple years. We also show that these new features are significant in the feature importance analysis.", "title": "" }, { "docid": "20a90ed3aa2b428b19e85aceddadce90", "text": "Deep learning has been a groundbreaking technology in various fields as well as in communications systems. In spite of the notable advancements of deep neural network (DNN) based technologies in recent years, the high computational complexity has been a major obstacle to apply DNN in practical communications systems which require real-time operation. In this sense, challenges regarding practical implementation must be addressed before the proliferation of DNN-based intelligent communications becomes a reality. To the best of the authors’ knowledge, for the first time, this article presents an efficient learning architecture and design strategies including link level verification through digital circuit implementations using hardware description language (HDL) to mitigate this challenge and to deduce feasibility and potential of DNN for communications systems. In particular, DNN is applied for an encoder and a decoder to enable flexible adaptation with respect to the system environments without needing any domain specific information. Extensive investigations and interdisciplinary design considerations including the DNN-based autoencoder structure, learning framework, and low-complexity digital circuit implementations for real-time operation are taken into account by the authors which ascertains the use of DNN-based communications in practice.", "title": "" }, { "docid": "43397d704a8fc64ec150c847d77280d5", "text": "During the development or maintenance of an Android app, the app developer needs to determine the app's security and privacy requirements such as permission requirements. Permission requirements include two folds. First, what permissions (i.e., access to sensitive resources, e.g., location or contact list) the app needs to request. Second, how to explain the reason of permission usages to users. In this paper, we focus on the multiple challenges that developers face when creating permission-usage explanations. We propose a novel framework, CLAP, that mines potential explanations from the descriptions of similar apps. CLAP leverages information retrieval and text summarization techniques to find frequent permission usages. We evaluate CLAP on a large dataset containing 1.4 million Android apps. 
The evaluation results outperform existing state-of-the-art approaches, showing great promise of CLAP as a tool for assisting developers and permission requirements discovery.", "title": "" }, { "docid": "4e0a3dd1401a00ddc9d0620de93f4ecc", "text": "The spatial-numerical association of response codes (SNARC) effect is the tendency for humans to respond faster to relatively larger numbers on the left or right (or with the left or right hand) and faster to relatively smaller numbers on the other side. This effect seems to occur due to a spatial representation of magnitude either in occurrence with a number line (wherein participants respond to relatively larger numbers faster on the right), other representations such as clock faces (responses are reversed from number lines), or culturally specific reading directions, begging the question as to whether the effect may be limited to humans. Given that a SNARC effect has emerged via a quantity judgement task in Western lowland gorillas and orangutans (Gazes et al., Cog 168:312–319, 2017), we examined patterns of response on a quantity discrimination task in American black bears, Western lowland gorillas, and humans for evidence of a SNARC effect. We found limited evidence for SNARC effect in American black bears and Western lowland gorillas. Furthermore, humans were inconsistent in direction and strength of effects, emphasizing the importance of standardizing methodology and analyses when comparing SNARC effects between species. These data reveal the importance of collecting data with humans in analogous procedures when testing nonhumans for effects assumed to bepresent in humans.", "title": "" }, { "docid": "e8e658d677a3b1a23650b25edd32fc84", "text": "The aim of the study is to facilitate the suture on the sacral promontory for laparoscopic sacrocolpopexy. We hypothesised that a new method of sacral anchorage using a biosynthetic material, the polyether ether ketone (PEEK) harpoon, might be adequate because of its tensile strength, might reduce complications owing to its well-known biocompatibility, and might shorten the duration of surgery. We verified the feasibility of insertion and quantified the stress resistance of the harpoons placed in the promontory in nine fresh cadavers, using four stress tests in each case. Mean values were analysed and compared using the Wilcoxon and Fisher’s exact tests. The harpoon resists for at least 30 s against a pulling force of 1 N, 5 N and 10 N. Maximum tensile strength is 21 N for the harpoon and 32 N for the suture. Harpoons broke in 6 % and threads in 22 % of cases. Harpoons detached owing to ligament rupture in 64 % of the cases. Regarding failures of the whole complex, the failure involves the harpoon in 92 % of cases and the thread in 56 %. The four possible placements of the harpoon in the promontory were equally safe in terms of resistance to traction. The PEEK harpoon can be easily anchored in the promontory. Thread is more resistant to traction than the harpoon, but the latter makes the surgical technique easier. Any of the four locations tested is feasible for anchoring the device.", "title": "" }, { "docid": "5320ff5b9e2a3d0d206bb74ed0e047cd", "text": "To the Editor: How do Shai et al. (July 17 issue)1 explain why the subjects in their study regained weight between month 6 and month 24, despite a reported reduction of 300 to 600 calories per day? 
Contributing possibilities may include the notion that a food-frequency questionnaire cannot precisely determine energy or macronutrient intake but, rather, ascertains general dietary patterns. Certain populations may underreport intake2,3 and have a decreased metabolic rate. The authors did not measure body composition, which is critical for documenting weight-loss components. In addition, the titles of the diets that are described in the article are misleading. Labeling the “low-carbohydrate” diet as such is questionable, since 40 to 42% of calories were from carbohydrates from month 6 to month 24, and data regarding ketosis support this view. Participants in the low-fat and Mediterranean-diet groups consumed between 30% and 33% of calories from fat and did not increase fiber consumption, highlighting the importance of diet quality. Furthermore, the authors should have provided baseline values and P values for within-group changes from baseline (see Table 2 of the article). Contrary to the authors’ assertion, it is not surprising that the effects on many biomarkers were minimal, since the dietary changes were minimal. The absence of biologically significant weight loss (2 to 4% after 2 years) highlights the fact that energy restriction and weight loss in themselves may minimally affect metabolic outcomes and that lifestyle changes must incorporate physical activity to optimize the reduction in the risk of chronic disease.4,5 Christian K. Roberts, Ph.D. R. James Barnard, Ph.D. Daniel M. Croymans, B.S.", "title": "" }, { "docid": "39321bc85746dc43736a0435c939c7da", "text": "We use recent network calculus results to study some properties of lossless multiplexing as it may be used in guaranteed service networks. We call network calculus a set of results that apply min-plus algebra to packet networks. We provide a simple proof that shaping a traffic stream to conform to a burstiness constraint preserves the original constraints satisfied by the traffic stream We show how all rate-based packet schedulers can be modeled with a simple rate latency service curve. Then we define a general form of deterministic effective bandwidth and equivalent capacity. We find that call acceptance regions based on deterministic criteria (loss or delay) are convex, in contrast to statistical cases where it is the complement of the region which is convex. We thus find that, in general, the limit of the call acceptance region based on statistical multiplexing when the loss probability target tends to 0 may be strictly larger than the call acceptance region based on lossless multiplexing. Finally, we consider the problem of determining the optimal parameters of a variable bit rate (VBR) connection when it is used as a trunk, or tunnel, given that the input traffic is known. We find that there is an optimal peak rate for the VBR trunk, essentially insensitive to the optimization criteria. For a linear cost function, we find an explicit algorithm for the optimal remaining parameters of the VBR trunk.", "title": "" }, { "docid": "4457aa3443d756a4afeb76f0571d3e25", "text": "THE AMOUNT OF DATA BEING DIGITALLY COLLECTED AND stored is vast and expanding rapidly. As a result, the science of data management and analysis is also advancing to enable organizations to convert this vast resource into information and knowledge that helps them achieve their objectives. Computer scientists have invented the term big data to describe this evolving technology. 
Big data has been successfully used in astronomy (eg, the Sloan Digital Sky Survey of telescopic information), retail sales (eg, Walmart’s expansive number of transactions), search engines (eg, Google’s customization of individual searches based on previous web data), and politics (eg, a campaign’s focus of political advertisements on people most likely to support their candidate based on web searches). In this Viewpoint, we discuss the application of big data to health care, using an economic framework to highlight the opportunities it will offer and the roadblocks to implementation. We suggest that leveraging the collection of patient and practitioner data could be an important way to improve quality and efficiency of health care delivery. Widespread uptake of electronic health records (EHRs) has generated massive data sets. A survey by the American Hospital Association showed that adoption of EHRs has doubled from 2009 to 2011, partly a result of funding provided by the Health Information Technology for Economic and Clinical Health Act of 2009. Most EHRs now contain quantitative data (eg, laboratory values), qualitative data (eg, text-based documents and demographics), and transactional data (eg, a record of medication delivery). However, much of this rich data set is currently perceived as a byproduct of health care delivery, rather than a central asset to improve its efficiency. The transition of data from refuse to riches has been key in the big data revolution of other industries. Advances in analytic techniques in the computer sciences, especially in machine learning, have been a major catalyst for dealing with these large information sets. These analytic techniques are in contrast to traditional statistical methods (derived from the social and physical sciences), which are largely not useful for analysis of unstructured data such as text-based documents that do not fit into relational tables. One estimate suggests that 80% of business-related data exist in an unstructured format. The same could probably be said for health care data, a large proportion of which is text-based. In contrast to most consumer service industries, medicine adopted a practice of generating evidence from experimental (randomized trials) and quasi-experimental studies to inform patients and clinicians. The evidence-based movement is founded on the belief that scientific inquiry is superior to expert opinion and testimonials. In this way, medicine was ahead of many other industries in terms of recognizing the value of data and information guiding rational decision making. However, health care has lagged in uptake of newer techniques to leverage the rich information contained in EHRs. There are 4 ways big data may advance the economic mission of health care delivery by improving quality and efficiency. First, big data may greatly expand the capacity to generate new knowledge. The cost of answering many clinical questions prospectively, and even retrospectively, by collecting structured data is prohibitive. Analyzing the unstructured data contained within EHRs using computational techniques (eg, natural language processing to extract medical concepts from free-text documents) permits finer data acquisition in an automated fashion. For instance, automated identification within EHRs using natural language processing was superior in detecting postoperative complications compared with patient safety indicators based on discharge coding. 
Big data offers the potential to create an observational evidence base for clinical questions that would otherwise not be possible and may be especially helpful with issues of generalizability. The latter issue limits the application of conclusions derived from randomized trials performed on a narrow spectrum of participants to patients who exhibit very different characteristics. Second, big data may help with knowledge dissemination. Most physicians struggle to stay current with the latest evidence guiding clinical practice. The digitization of medical literature has greatly improved access; however, the sheer", "title": "" }, { "docid": "4ba91dc010d3ecbdb39306e9f35f9612", "text": "Privacy aware anonymous trading for smart grid using digital currency has received very low attention so far. In this paper, we analyze the possibility of Bitcoin serving as the user friendly and effective privacy aware trading currency to facilitate energy exchange for smart grid.", "title": "" }, { "docid": "9a332d9ffe0e08cc688a8644de736202", "text": "Applications are increasingly using XML to represent semi-structured data and, consequently, a large amount of XML documents is available worldwide. As XML documents evolve over time, comparing XML documents to understand their evolution becomes fundamental. The main focus of existing research for comparing XML documents resides in identifying syntactic changes. However, a deeper notion of the change meaning is usually desired. This paper presents an inference-based XML evolution approach using Prolog to deal with this problem. Differently from existing XML diff approaches, our approach composes multiple syntactic changes, which usually have a common purpose, to infer semantic changes. We evaluated our approach through ten versions of an employment XML document. In this evaluation, we could observe that each new version introduced syntactic changes that could be summarized into semantic changes.", "title": "" }, { "docid": "fcf8649ff7c2972e6ef73f837a3d3f4d", "text": "The kitchen environment is one of the scenarios in the home where users can benefit from Ambient Assisted Living (AAL) applications. Moreover, it is the place where old people suffer from most domestic injuries. This paper presents a novel design, implementation and assessment of a Smart Kitchen which provides Ambient Assisted Living services; a smart environment that increases elderly and disabled people's autonomy in their kitchen-related activities through context and user awareness, appropriate user interaction and artificial intelligence. It is based on a modular architecture which integrates a wide variety of home technology (household appliances, sensors, user interfaces, etc.) and associated communication standards and media (power line, radio frequency, infrared and cabled). Its software architecture is based on the Open Services Gateway initiative (OSGi), which allows building a complex system composed of small modules, each one providing the specific functionalities required, and can be easily scaled to meet our needs. The system has been evaluated by a large number of real users (63) and carers (31) in two living labs in Spain and UK. Results show a large potential of system functionalities combined with good usability and physical, sensory and cognitive accessibility.", "title": "" }, { "docid": "e43eaf919d7bb920177c164c5eeddca2", "text": "In today's era AMBA (advanced microcontroller bus architecture) specifications have gone far beyond the Microcontrollers. 
In this paper, AMBA (Advanced Microcontroller Bus Architecture) ASB APB (Advanced system bus - Advanced Peripheral Bus) is implemented. The goal of the proposed paper is to synthesis, simulate complex interface between AMBA ASB and APB. The methodology adopted for the proposed paper is Verilog language with finite state machine models designed in ModelSim Version 10.3 and Xilinx-ISE design suite, version 13.4 is used to extract synthesis, design utilization summary and power reports. For the implementation APB Bridge, arbiter and decoder are designed. In AMBA ASB APB module, master gets into contact with APB bus. Arbiter determines master's status and priority and then, starts communicating with the bus. For selecting a bus slave, decoder uses the accurate address lines and an acknowledgement is given back to the bus master by the slave. An RTL view and an extracted design summary of AMBA ASB APB module at system on chip are shown in result section of the paper. Higher design complexities of SoCs architectures introduce the power consumption into picture. The various power components contribute in the power consumptions which are extracted by the power reports. So, power reports generate a better understanding of the power utilization to the designers. These are clocks total power which consumes of 0.66 mW, hierarchy total power which consumes of 1.05 mW, hierarchy total logical power which consumes of 0.30 mW and hierarchy total signal power which consumes of 0.74 mW powers in the proposed design. Graph is also plotted for clear understanding of the breakdown of powers.", "title": "" }, { "docid": "b56a6ce08cf00fefa1a1b303ebf21de9", "text": "Freesound is an online collaborative sound database where people with diverse interests share recorded sound samples under Creative Commons licenses. It was started in 2005 and it is being maintained to support diverse research projects and as a service to the overall research and artistic community. In this demo we want to introduce Freesound to the multimedia community and show its potential as a research resource. We begin by describing some general aspects of Freesound, its architecture and functionalities, and then explain potential usages that this framework has for research applications.", "title": "" }, { "docid": "370e1428067483a4a0871cedb5aef639", "text": "Interactive Game-Based Learning might be used to raise the awareness of students concerning questions of sustainability. Sustainability is a very complex topic. By interacting with a simulation game, students can get a more detailed and holistic conception of how sustainability can be achieved in everyday purchasing situations. The SuLi (Sustainable Living) game was developed to achieve this goal. In an evaluation study we found evidence that SuLi is an interesting alternative to more traditional approaches to learning. Nevertheless, there are still many open questions, as, e.g., whether one should combine simulation games with other forms of teaching and learning or how to design simulation games so that students really acquire detailed concepts of the domain.", "title": "" }, { "docid": "009f83c48787d956b8ee79c1d077d825", "text": "Learning salient representations of multiview data is an essential step in many applications such as image classification, retrieval, and annotation. 
Standard predictive methods, such as support vector machines, often directly use all the features available without taking into consideration the presence of distinct views and the resultant view dependencies, coherence, and complementarity that offer key insights to the semantics of the data, and are therefore offering weak performance and are incapable of supporting view-level analysis. This paper presents a statistical method to learn a predictive subspace representation underlying multiple views, leveraging both multiview dependencies and availability of supervising side-information. Our approach is based on a multiview latent subspace Markov network (MN) which fulfills a weak conditional independence assumption that multiview observations and response variables are conditionally independent given a set of latent variables. To learn the latent subspace MN, we develop a large-margin approach which jointly maximizes data likelihood and minimizes a prediction loss on training data. Learning and inference are efficiently done with a contrastive divergence method. Finally, we extensively evaluate the large-margin latent MN on real image and hotel review datasets for classification, regression, image annotation, and retrieval. Our results demonstrate that the large-margin approach can achieve significant improvements in terms of prediction performance and discovering predictive latent subspace representations.", "title": "" }, { "docid": "382ee4c7c870f9d05dee5546a664c553", "text": "Models based on the bivariate Poisson distribution are used for modelling sports data. Independent Poisson distributions are usually adopted to model the number of goals of two competing teams. We replace the independence assumption by considering a bivariate Poisson model and its extensions. The models proposed allow for correlation between the two scores, which is a plausible assumption in sports with two opposing teams competing against each other. The effect of introducing even slight correlation is discussed. Using just a bivariate Poisson distribution can improve model fit and prediction of the number of draws in football games.The model is extended by considering an inflation factor for diagonal terms in the bivariate joint distribution.This inflation improves in precision the estimation of draws and, at the same time, allows for overdispersed, relative to the simple Poisson distribution, marginal distributions. The properties of the models proposed as well as interpretation and estimation procedures are provided. An illustration of the models is presented by using data sets from football and water-polo.", "title": "" }, { "docid": "450b6ce3f24cbab0a7fb718a9d0e9bea", "text": "A new level shifter used in multiple voltage digital circuits is presented. It combines the merit of conventional level shifter and single supply level shifter, which can shifter any voltage level signal to a desired higher level with low leakage current. The circuits was designed in 180nm CMOS technology and simulated in SPICE. The simulation results showed that the proposed level shifter circuit has 36% leakage power dissipation reduction compared to the conventional level shifter", "title": "" }, { "docid": "2e6c8d94c988ec48ef3dccaf8a4ff7e7", "text": "We present a photometric stereo method for non-diffuse materials that does not require an explicit reflectance model or reference object. 
By computing a data-dependent rotation of RGB color space, we show that the specular reflection effects can be separated from the much simpler, diffuse (approximately Lambertian) reflection effects for surfaces that can be modeled with dichromatic reflectance. Images in this transformed color space are used to obtain photometric reconstructions that are independent of the specular reflectance. In contrast to other methods for highlight removal based on dichromatic color separation (e.g., color histogram analysis and/or polarization), we do not explicitly recover the specular and diffuse components of an image. Instead, we simply find a transformation of color space that yields more direct access to shape information. The method is purely local and is able to handle surfaces with arbitrary texture.", "title": "" }, { "docid": "a7287ea0f78500670fb32fc874968c54", "text": "Image captioning is a challenging task where the machine automatically describes an image by sentences or phrases. It often requires a large number of paired image-sentence annotations for training. However, a pre-trained captioning model can hardly be applied to a new domain in which some novel object categories exist, i.e., the objects and their description words are unseen during model training. To correctly caption the novel object, it requires professional human workers to annotate the images by sentences with the novel words. It is labor expensive and thus limits its usage in real-world applications. In this paper, we introduce the zero-shot novel object captioning task where the machine generates descriptions without extra training sentences about the novel object. To tackle the challenging problem, we propose a Decoupled Novel Object Captioner (DNOC) framework that can fully decouple the language sequence model from the object descriptions. DNOC has two components. 1) A Sequence Model with the Placeholder (SM-P) generates a sentence containing placeholders. The placeholder represents an unseen novel object. Thus, the sequence model can be decoupled from the novel object descriptions. 2) A key-value object memory built upon the freely available detection model, contains the visual information and the corresponding word for each object. A query generated from the SM-P is used to retrieve the words from the object memory. The placeholder will further be filled with the correct word, resulting in a caption with novel object descriptions. The experimental results on the held-out MSCOCO dataset demonstrate the ability of DNOC in describing novel concepts.", "title": "" }, { "docid": "e6b9a05ecc3fd48df50aa769ce05b6a6", "text": "This paper presents an interactive exoskeleton device for hand rehabilitation, iHandRehab, which aims to satisfy the essential requirements for both active and passive rehabilitation motions. iHandRehab is comprised of exoskeletons for the thumb and index finger. These exoskeletons are driven by distant actuation modules through a cable/sheath transmission mechanism. The exoskeleton for each finger has 4 degrees of freedom (DOF), providing independent control for all finger joints. The joint motion is accomplished by a parallelogram mechanism so that the joints of the device and their corresponding finger joints have the same angular displacement when they rotate. Thanks to this design, the joint angles can be measured by sensors real time and high level motion control is therefore made very simple without the need of complicated kinematics. 
The paper also discusses important issues when the device is used by different patients, including its adjustable joint range of motion (ROM) and adjustable range of phalanx length (ROPL). Experimentally collected data show that the achieved ROM is close to that of a healthy hand and the ROPL covers the size of a typical hand, satisfying the size need of regular hand rehabilitation. In order to evaluate the performance when it works as a haptic device in active mode, the equivalent moment of inertia (MOI) of the device is calculated. The results prove that the device has low inertia which is critical in order to obtain good backdrivability. Experimental analysis shows that the influence of friction accounts for a large portion of the driving torque and warrants future investigation.", "title": "" } ]
scidocsrr
61d234c9d20600bb9a261fc8d459233f
Factorization of Latent Variables in Distributional Semantic Models
[ { "docid": "ce55485a60213c7656eb804b89be36cc", "text": "In a previous article, we presented a systematic computational study of the extraction of semantic representations from the word-word co-occurrence statistics of large text corpora. The conclusion was that semantic vectors of pointwise mutual information values from very small co-occurrence windows, together with a cosine distance measure, consistently resulted in the best representations across a range of psychologically relevant semantic tasks. This article extends that study by investigating the use of three further factors--namely, the application of stop-lists, word stemming, and dimensionality reduction using singular value decomposition (SVD)--that have been used to provide improved performance elsewhere. It also introduces an additional semantic task and explores the advantages of using a much larger corpus. This leads to the discovery and analysis of improved SVD-based methods for generating semantic representations (that provide new state-of-the-art performance on a standard TOEFL task) and the identification and discussion of problems and misleading results that can arise without a full systematic study.", "title": "" } ]
[ { "docid": "06e8d9c53fe89fbf683920e90bf09731", "text": "Convolutional neural networks (CNNs) with their ability to learn useful spatial features have revolutionized computer vision. The network topology of CNNs exploits the spatial relationship among the pixels in an image and this is one of the reasons for their success. In other domains deep learning has been less successful because it is not clear how the structure of non-spatial data can constrain network topology. Here, we show how multivariate time series can be interpreted as space-time pictures, thus expanding the applicability of the tricks-of-the-trade for CNNs to this important domain. We demonstrate that our model beats more traditional state-of-the-art models at predicting price development on the European Power Exchange (EPEX). Furthermore, we find that the features discovered by CNNs on raw data beat the features that were hand-designed by an expert.", "title": "" }, { "docid": "e33080761e4ece057f455148c7329d5e", "text": "This paper compares the utilization of ConceptNet and WordNet in query expansion. Spreading activation selects candidate terms for query expansion from these two resources. Three measures including discrimination ability, concept diversity, and retrieval performance are used for comparisons. The topics and document collections in the ad hoc track of TREC-6, TREC-7 and TREC-8 are adopted in the experiments. The results show that ConceptNet and WordNet are complementary. Queries expanded with WordNet have higher discrimination ability. In contrast, queries expanded with ConceptNet have higher concept diversity. The performance of queries expanded by selecting the candidate terms from ConceptNet and WordNet outperforms that of queries without expansion, and queries expanded with a single resource.", "title": "" }, { "docid": "b5d2e42909bf8ce64beebe38630fcb47", "text": "In this paper we combine one method for hierarchical reinforcement learning—the options framework—with deep Q-networks (DQNs) through the use of different “option heads” on the policy network, and a supervisory network for choosing between the different options. We utilise our setup to investigate the effects of architectural constraints in subtasks with positive and negative transfer, across a range of network capacities. We empirically show that our augmented DQN has lower sample complexity when simultaneously learning subtasks with negative transfer, without degrading performance when learning subtasks with positive transfer.", "title": "" }, { "docid": "b34af4da147779c6d1505ff12cacd5aa", "text": "Crowd-enabled place-centric systems gather and reason over large mobile sensor datasets and target everyday user locations (such as stores, workplaces, and restaurants). Such systems are transforming various consumer services (for example, local search) and data-driven organizations (city planning). As the demand for these systems increases, our understanding of how to design and deploy successful crowdsensing systems must improve. In this paper, we present a systematic study of the coverage and scaling properties of place-centric crowdsensing. During a two-month deployment, we collected smartphone sensor data from 85 participants using a representative crowdsensing system that captures 48,000 different place visits. 
Our analysis of this dataset examines issues of core interest to place-centric crowdsensing, including place-temporal coverage, the relationship between the user population and coverage, privacy concerns, and the characterization of the collected data. Collectively, our findings provide valuable insights to guide the building of future place-centric crowdsensing systems and applications.", "title": "" }, { "docid": "e162fcb6b897e941cd26558f4ed16cd5", "text": "In this paper, we propose a novel real-valued time-delay neural network (RVTDNN) suitable for dynamic modeling of the baseband nonlinear behaviors of third-generation (3G) base-station power amplifiers (PA). Parameters (weights and biases) of the proposed model are identified using the back-propagation algorithm, which is applied to the input and output waveforms of the PA recorded under real operation conditions. Time- and frequency-domain simulation of a 90-W LDMOS PA output using this novel neural-network model exhibit a good agreement between the RVTDNN behavioral model's predicted results and measured ones along with a good generality. Moreover, dynamic AM/AM and AM/PM characteristics obtained using the proposed model demonstrated that the RVTDNN can track and account for the memory effects of the PAs well. These characteristics also point out that the small-signal response of the LDMOS PA is more affected by the memory effects than the PAs large-signal response when it is driven by 3G signals. This RVTDNN model requires a significantly reduced complexity and shorter processing time in the analysis and training procedures, when driven with complex modulated and highly varying envelope signals such as 3G signals, than previously published neural-network-based PA models.", "title": "" }, { "docid": "dbe8e36bd7d1323ab4da0e1a3213f62e", "text": "Problem: Parallels have been drawn between the rise of the internet in 1990s and the present rise of bitcoin (cryptocurrency) and underlying blockchain technology. This resulted in a widespread of media coverage due to extreme price fluctuations and increased supply and demand. Garcia et al. (2014) argues that this is driven by several social aspects including word-of-mouth communication on social media, indicating that this aspect of social media effects individual attitude formation and intention towards cryptocurrency. However, this combination of social media of antecedent of consumer acceptance is limited explored, especially in the context of technology acceptance. Purpose: The purpose of this thesis is to create further understanding in the Technology Acceptance Model with the additional construct: social influence, first suggested by Malhotra et al. (1999). Hereby, the additional construct of social media influence was added to advance the indirect effects of social media influence on attitude formation and behavioural intention towards cryptocurrency, through the processes of social influence (internalization; identification; compliance) by Kelman. Method: This study carries out a quantitative study where survey-research was used that included a total sample of 250 cases. This sample consists of individuals between 18-37 years old, where social media usage is part of the life. As a result of the data collection, analysis was conducted using multiple regression techniques. Conclusion: Analysis of the findings established theoretical validation of the appliance of the Technology Acceptance Model on digital innovation, like cryptocurrency. 
By adding the construct of social media, further understanding is created in the behaviour of millennials towards cryptocurrency. The evidence suggests that there are clear indirect effects of social media on attitude formation and intention towards engaging in cryptocurrency through the processes of social influence. This study should be seen as preliminary, where future research could be built upon. More specifically, in terms of consumer acceptance of cryptocurrency and the extent of influence by social media.", "title": "" }, { "docid": "e688dea8ba92a92f4d459c8c33f313e1", "text": "Since the first description of Seasonal Affective Disorder (SAD) by Rosenthal et al. in the 1980s, treatment with daily administration of light, or Bright Light Therapy (BLT), has been proven effective and is now recognized as a first-line therapeutic modality. More recently, studies aimed at understanding the pathophysiology of SAD and the mechanism of action of BLT have implicated shifts in the circadian rhythm and alterations in serotonin reuptake. BLT has also been increasingly used as an experimental treatment in non-seasonal unipolar and bipolar depression and other psychiatric disorders with known or suspected alterations in the circadian system. This review will discuss the history of SAD and BLT, the proposed pathophysiology of SAD and mechanisms of action of BLT in the treatment of SAD, and evidence supporting the efficacy of BLT in the treatment of non-seasonal unipolar major depression, bipolar depression, eating disorders, and ADHD.", "title": "" }, { "docid": "78437d8aafd3bf09522993447b0a4d50", "text": "Over the past 30 years, policy makers and professionals who provide services to older adults with chronic conditions and impairments have placed greater emphasis on conceptualizing aging in place as an attainable and worthwhile goal. Little is known, however, of the changes in how this concept has evolved in aging research. To track trends in aging in place, we examined scholarly articles published from 1980 to 2010 that included the concept in eleven academic gerontology journals. We report an increase in the absolute number and proportion of aging-in-place manuscripts published during this period, with marked growth in the 2000s. Topics related to the environment and services were the most commonly examined during 2000-2010 (35% and 31%, resp.), with a substantial increase in manuscripts pertaining to technology and health/functioning. This underscores the increase in diversity of topics that surround the concept of aging-in-place literature in gerontological research.", "title": "" }, { "docid": "1569bcea0c166d9bf2526789514609c5", "text": "In this paper, we present the development and initial validation of a new self-report instrument, the Differentiation of Self Inventory (DSI). The DSI represents the first attempt to create a multi-dimensional measure of differentiation based on Bowen Theory, focusing specifically on adults (ages 25 +), their current significant relationships, and their relations with families of origin. Principal components factor analysis on a sample of 313 normal adults (mean age = 36.8) suggested four dimensions: Emotional Reactivity, Reactive Distancing, Fusion with Parents, and \"I\" Position. Scales constructed from these factors were found to be moderately correlated in the expected direction, internally consistent, and significantly predictive of trait anxiety.
The potential contribution of the DSI is discussed -for testing Bowen Theory, as a clinical assessment tool, and as an indicator of psychotherapeutic outcome.", "title": "" }, { "docid": "4729691ffa6e252187a1a663e85fde8b", "text": "Language models are used in automatic transcription system to resolve ambiguities. This is done by limiting the vocabulary of words that can be recognized as well as estimating the n-gram probability of the words in the given text. In the context of historical documents, a non-unified spelling and the limited amount of written text pose a substantial problem for the selection of the recognizable vocabulary as well as the computation of the word probabilities. In this paper we propose for the transcription of historical Spanish text to keep the corpus for the n-gram limited to a sample of the target text, but expand the vocabulary with words gathered from external resources. We analyze the performance of such a transcription system with different sizes of external vocabularies and demonstrate the applicability and the significant increase in recognition accuracy of using up to 300 thousand external words.", "title": "" }, { "docid": "459532e59eba5231d95cf74754b9d8ff", "text": "Major policies, regulations, and practice patterns related to interventional pain management are dependent on Medicare policies which include national coverage policies - national coverage determinations (NCDs), and local coverage policies - local coverage determinations (LCDs). The NCDs are Medicare coverage policies issued by the Centers for Medicare and Medicaid Services (CMS). The process used by the CMS in deciding what is and what is not medically necessary is lengthy, involving a review of evidence-based literature on the subject, expert opinion, and public comments. In contrast, LCDs are rules and Medicare coverage that are issued by regional contractors and fiscal intermediaries when an NCD has not addressed the policy at issue. The evidence utilized in preparing LCDs includes the highest level of evidence which is based on published authoritative evidence derived from definitive randomized clinical trials or other definitive studies, and general acceptance by the medical community (standard of practice), as supported by sound medical evidence. In addition, the intervention must be safe and effective and appropriate including duration and frequency that is considered appropriate for the item or service in terms of whether it is furnished in accordance with accepted standards of medical practice for the diagnosis or treatment of the patient's condition or to improve the function. In addition, the safe and effective provision includes that service must be furnished in a setting appropriate to the patient's medical needs and condition, ordered and furnished by qualified personnel, the service must meet, but does not exceed, the patient's medical need, and be at least as beneficial as an existing and available medically appropriate alternative. The LCDs are prepared with literature review, state medical societies, and carrier advisory committees (CACs) of which interventional pain management is a member. The LCDs may be appealed by beneficiaries. The NCDs are prepared by the CMS following a request for a national coverage decision after an appropriate national coverage request along with a draft decision memorandum, and public comments. 
After the request, the staff review, external technology assessment, Medicare Evidence Development and Coverage Advisory Committee (MedCAC) assessment, public comments, a draft decision memorandum may be posted which will be followed by a final decision and implementation instructions. This decision may be appealed to the department appeals board, but may be difficult to reverse. This manuscript describes NCDs and LCDs and the process of development, their development, issues related to the development, and finally their relation to interventional pain management.", "title": "" }, { "docid": "2f7a15b3d922d9a1d03a6851be5f6622", "text": "The clinical relevance of T cells in the control of a diverse set of human cancers is now beyond doubt. However, the nature of the antigens that allow the immune system to distinguish cancer cells from noncancer cells has long remained obscure. Recent technological innovations have made it possible to dissect the immune response to patient-specific neoantigens that arise as a consequence of tumor-specific mutations, and emerging data suggest that recognition of such neoantigens is a major factor in the activity of clinical immunotherapies. These observations indicate that neoantigen load may form a biomarker in cancer immunotherapy and provide an incentive for the development of novel therapeutic approaches that selectively enhance T cell reactivity against this class of antigens.", "title": "" }, { "docid": "7875910ad044232b4631ecacfec65656", "text": "In this study, a questionnaire (Cyberbullying Questionnaire, CBQ) was developed to assess the prevalence of numerous modalities of cyberbullying (CB) in adolescents. The association of CB with the use of other forms of violence, exposure to violence, acceptance and rejection by peers was also examined. In the study, participants were 1431 adolescents, aged between 12 and17 years (726 girls and 682 boys). The adolescents responded to the CBQ, measures of reactive and proactive aggression, exposure to violence, justification of the use of violence, and perceived social support of peers. Sociometric measures were also used to assess the use of direct and relational aggression and the degree of acceptance and rejection by peers. The results revealed excellent psychometric properties for the CBQ. Of the adolescents, 44.1% responded affirmatively to at least one act of CB. Boys used CB to greater extent than girls. Lastly, CB was significantly associated with the use of proactive aggression, justification of violence, exposure to violence, and less perceived social support of friends. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0bd981ea6d38817b560383f48fdfb729", "text": "Lightweight wheelchairs are characterized by their low cost and limited range of adjustment. Our study evaluated three different folding lightweight wheelchair models using the American National Standards Institute/Rehabilitation Engineering Society of North America (ANSI/RESNA) standards to see whether quality had improved since the previous data were reported. On the basis of reports of increasing breakdown rates in the community, we hypothesized that the quality of these wheelchairs had declined. Seven of the nine wheelchairs tested failed to pass the multidrum test durability requirements. An average of 194,502 +/- 172,668 equivalent cycles was completed, which is similar to the previous test results and far below the 400,000 minimum required to pass the ANSI/RESNA requirements. 
This was also significantly worse than the test results for aluminum ultralight folding wheelchairs. Overall, our results uncovered some disturbing issues with these wheelchairs and suggest that manufacturers should put more effort into this category to improve quality. To improve the durability of lightweight wheelchairs, we suggested that stronger regulations be developed that require wheelchairs to be tested by independent and certified test laboratories. We also proposed a wheelchair rating system based on the National Highway Transportation Safety Administration vehicle crash ratings to assist clinicians and end users when comparing the durability of different wheelchairs.", "title": "" }, { "docid": "7a337f2a2fcf6c5e0990aec419e63208", "text": "Asynchronous event-based sensors present new challenges in basic robot vision problems like feature tracking. The few existing approaches rely on grouping events into models and computing optical flow after assigning future events to those models. Such a hard commitment in data association attenuates the optical flow quality and causes shorter flow tracks. In this paper, we introduce a novel soft data association modeled with probabilities. The association probabilities are computed in an intertwined EM scheme with the optical flow computation that maximizes the expectation (marginalization) over all associations. In addition, to enable longer tracks we compute the affine deformation with respect to the initial point and use the resulting residual as a measure of persistence. The computed optical flow enables a varying temporal integration different for every feature and sized inversely proportional to the length of the flow. We show results in egomotion and very fast vehicle sequences and we show the superiority over standard frame-based cameras.", "title": "" }, { "docid": "d3df310f37045f4e85235623d7539ba4", "text": "The aim of this paper is to review the available literature on goal scoring in elite male football leagues. A systematic search of two electronic databases (SPORTDiscus with Full Text and ISI Web Knowledge All Databases) was conducted and of the 610 studies initially identified, 19 were fully analysed. Studies that fitted all the inclusion criteria were organised according to the research approach adopted (static or dynamic). The majority of these studies were conducted in accordance with the static approach (n=15), where the data were collected without considering dynamic of performance during matches and were analysed using standard statistical methods for data analysis. They focused predominantly on a description of key performance indicators (technical and tactical). Meanwhile, in a few studies the dynamic approach (n=4) was adopted, where performance variables were recorded taking into account the chronological and sequential order in which they occurred. Different advanced analysis techniques for assessing performance evolution over time during the match were used in this second group of studies. The strengths and limitations of both approaches in terms of providing the meaningful information for coaches are discussed in the present study.", "title": "" }, { "docid": "9d175a211ec3b0ee7db667d39c240e1c", "text": "In recent years, there has been an increased effort to introduce coding and computational thinking in early childhood education. In accordance with the international trend, programming has become an increasingly growing focus in European education. 
With over 9.5 million iOS downloads, ScratchJr is the most popular freely available introductory programming language for young children (ages 5-7). This paper provides an overview of ScratchJr, and the powerful ideas from computer science it is designed to teach. In addition, data analytics are presented to show trends of usage in Europe and how it compares to the rest of the world. Data reveals that countries with robust computer science initiatives such as the UK and the Nordic countries have high usage of ScratchJr.", "title": "" }, { "docid": "0bc3c8e96d465f5dd6649e3b3ee6880e", "text": "Intelligent systems, which are on their way to becoming mainstream in everyday products, make recommendations and decisions for users based on complex computations. Researchers and policy makers increasingly raise concerns regarding the lack of transparency and comprehensibility of these computations from the user perspective. Our aim is to advance existing UI guidelines for more transparency in complex real-world design scenarios involving multiple stakeholders. To this end, we contribute a stage-based participatory process for designing transparent interfaces incorporating perspectives of users, designers, and providers, which we developed and validated with a commercial intelligent fitness coach. With our work, we hope to provide guidance to practitioners and to pave the way for a pragmatic approach to transparency in intelligent systems.", "title": "" }, { "docid": "25f0871346c370db4b26aecd08a9d75e", "text": "This review presents a comprehensive discussion of the key technical issues in woody biomass pretreatment: barriers to efficient cellulose saccharification, pretreatment energy consumption, in particular energy consumed for wood-size reduction, and criteria to evaluate the performance of a pretreatment. A post-chemical pretreatment size-reduction approach is proposed to significantly reduce mechanical energy consumption. Because the ultimate goal of biofuel production is net energy output, a concept of pretreatment energy efficiency (kg/MJ) based on the total sugar recovery (kg/kg wood) divided by the energy consumption in pretreatment (MJ/kg wood) is defined. It is then used to evaluate the performances of three of the most promising pretreatment technologies: steam explosion, organosolv, and sulfite pretreatment to overcome lignocelluloses recalcitrance (SPORL) for softwood pretreatment. The present study found that SPORL is the most efficient process and produced the highest sugar yield. Other important issues, such as the effects of lignin on substrate saccharification and the effects of pretreatment on high-value lignin utilization in woody biomass pretreatment, are also discussed.", "title": "" }, { "docid": "d434cfa2ee51d4a0f4fa4bf43e1d4d60", "text": "A microstrip-fed printed bow-tie antenna is presented in order to achieve wide bandwidth, high gain, and size reduction. A comparison between the bow-tie and the quasi-Yagi (dipole and director) antennas shows that the bow-tie antenna has a wider bandwidth, higher gain, lower front-to-back ratio, lower cross-polarization level, and smaller size. Two-element arrays are designed and their characteristics are compared. The bow-tie antenna yields lower coupling for the same distance between elements. © 2004 Wiley Periodicals, Inc. Microwave Opt Technol Lett 43: 123–126, 2004; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.20396", "title": "" } ]
scidocsrr
67dcfb57c9cec5070b2051baed4a7d0e
Multiple-food recognition considering co-occurrence employing manifold ranking
[ { "docid": "b6dcf2064ad7f06fd1672b1348d92737", "text": "In this paper, we propose a two-step method to recognize multiple-food images by detecting candidate regions with several methods and classifying them with various kinds of features. In the first step, we detect several candidate regions by fusing outputs of several region detectors including Felzenszwalb's deformable part model (DPM) [1], a circle detector and the JSEG region segmentation. In the second step, we apply a feature-fusion-based food recognition method for bounding boxes of the candidate regions with various kinds of visual features including bag-of-features of SIFT and CSIFT with spatial pyramid (SP-BoF), histogram of oriented gradient (HoG), and Gabor texture features. In the experiments, we estimated ten food candidates for multiple-food images in the descending order of the confidence scores. As results, we have achieved the 55.8% classification rate, which improved the baseline result in case of using only DPM by 14.3 points, for a multiple-food image data set. This demonstrates that the proposed two-step method is effective for recognition of multiple-food images.", "title": "" }, { "docid": "ef99799bf977ba69a63c9f030fc65c7f", "text": "In this paper, we propose a novel transductive learning framework named manifold-ranking based image retrieval (MRBIR). Given a query image, MRBIR first makes use of a manifold ranking algorithm to explore the relationship among all the data points in the feature space, and then measures relevance between the query and all the images in the database accordingly, which is different from traditional similarity metrics based on pair-wise distance. In relevance feedback, if only positive examples are available, they are added to the query set to improve the retrieval result; if examples of both labels can be obtained, MRBIR discriminately spreads the ranking scores of positive and negative examples, considering the asymmetry between these two types of images. Furthermore, three active learning methods are incorporated into MRBIR, which select images in each round of relevance feedback according to different principles, aiming to maximally improve the ranking result. Experimental results on a general-purpose image database show that MRBIR attains a significant improvement over existing systems from all aspects.", "title": "" }, { "docid": "d6d9cb649294de96ea2bfe18753559df", "text": "Since health care on foods is drawing people's attention recently, a system that can record everyday meals easily is being awaited. In this paper, we propose an automatic food image recognition system for recording people's eating habits. In the proposed system, we use the Multiple Kernel Learning (MKL) method to integrate several kinds of image features such as color, texture and SIFT adaptively. MKL enables to estimate optimal weights to combine image features for each category. In addition, we implemented a prototype system to recognize food images taken by cellular-phone cameras. In the experiment, we have achieved the 61.34% classification rate for 50 kinds of foods. To the best of our knowledge, this is the first report of a food image classification system which can be applied for practical use.", "title": "" }, { "docid": "dce51c1fed063c9d9776fce998209d25", "text": "While classical kernel-based learning algorithms are based on a single kernel, in practice it is often desirable to use multiple kernels. Lankriet et al. 
(2004) considered conic combinations of kernel matrices for classification, leading to a convex quadratically constrained quadratic program. We show that it can be rewritten as a semi-infinite linear program that can be efficiently solved by recycling the standard SVM implementations. Moreover, we generalize the formulation and our method to a larger class of problems, including regression and one-class classification. Experimental results show that the proposed algorithm works for hundreds of thousands of examples or hundreds of kernels to be combined, and helps for automatic model selection, improving the interpretability of the learning result. In a second part we discuss general speed-up mechanisms for SVMs, especially when used with sparse feature maps as they appear for string kernels, allowing us to train a string kernel SVM on a 10 million real-world splice dataset from computational biology. We integrated Multiple Kernel Learning in our Machine Learning toolbox SHOGUN for which the source code is publicly available at http://www.fml.tuebingen.mpg.de/raetsch/projects/shogun.", "title": "" } ]
[ { "docid": "1aa036b8f6ca4c2dfaa02b765fa3f89d", "text": "Excerpt] Our primary aims in this effort are twofold: to clarify the independent theoretical contributions of institutional theory to analyses of organizations, and to develop this theoretical perspective further in order to enhance its use in empirical research. There is also a more general, more ambitious objective here, and that is to build a bridge between two distinct models of social actor that underlie most organizational analyses, which we refer to as a rational actor model and an institutional model. The former is premised on the assumption that individuals are constantly engaged in calculations of the costs and benefits of different action choices, and that behavior reflects such utility-maximizing calculations. In the latter model, by contrast, 'oversocialized' individuals are assumed to accept and follow social norms unquestioningly, without any real reflection or behavioral resistance based on their own particular, personal interests. We suggest that these two general models should be treated not as oppositional but rather as representing two ends of a continuum of decisionmaking processes and behaviors. Thus, a key problem for theory and research is to specify the conditions under which behavior is more likely to resemble one end of this continuum or the other. In short, what is needed are theories of when rationality is likely to be more or less bounded. A developed conception of institutionalization processes provides a useful point of departure for exploring this issue.", "title": "" }, { "docid": "d2f19725a400829650ac6389373f3c0e", "text": "\"Is there a biology of intelligence which is characteristic of the normal human nervous system?\" Here we review 37 modern neuroimaging studies in an attempt to address this question posed by Halstead (1947) as he and other icons of the last century endeavored to understand how brain and behavior are linked through the expression of intelligence and reason. Reviewing studies from functional (i.e., functional magnetic resonance imaging, positron emission tomography) and structural (i.e., magnetic resonance spectroscopy, diffusion tensor imaging, voxel-based morphometry) neuroimaging paradigms, we report a striking consensus suggesting that variations in a distributed network predict individual differences found on intelligence and reasoning tasks. We describe this network as the Parieto-Frontal Integration Theory (P-FIT). The P-FIT model includes, by Brodmann areas (BAs): the dorsolateral prefrontal cortex (BAs 6, 9, 10, 45, 46, 47), the inferior (BAs 39, 40) and superior (BA 7) parietal lobule, the anterior cingulate (BA 32), and regions within the temporal (BAs 21, 37) and occipital (BAs 18, 19) lobes. White matter regions (i.e., arcuate fasciculus) are also implicated. The P-FIT is examined in light of findings from human lesion studies, including missile wounds, frontal lobotomy/leukotomy, temporal lobectomy, and lesions resulting in damage to the language network (e.g., aphasia), as well as findings from imaging research identifying brain regions under significant genetic control. Overall, we conclude that modern neuroimaging techniques are beginning to articulate a biology of intelligence. We propose that the P-FIT provides a parsimonious account for many of the empirical observations, to date, which relate individual differences in intelligence test scores to variations in brain structure and function. 
Moreover, the model provides a framework for testing new hypotheses in future experimental designs.", "title": "" }, { "docid": "42faf2c0053c9f6a0147fc66c8e4c122", "text": "IN 1921, Gottlieb's discovery of the epithelial attachment of the gingiva opened new horizons which served as the basis for a better understanding of the biology of the dental supporting tissues in health and disease. Three years later his pupils, Orban and Kohler (1924), undertook the task of measuring the epithelial attachment as well as the surrounding tissue relations during the four phases of passive eruption of the tooth. Gottlieb and Orban's descriptions of the epithelial attachment unveiled the exact morphology of this epithelial structure, and clarified the relation of this", "title": "" }, { "docid": "e1b39e972eff71eb44b39f37e7a7b2f3", "text": "The maximum mean discrepancy (MMD) is a recently proposed test statistic for the two-sample test. Its quadratic time complexity, however, greatly hampers its availability to large-scale applications. To accelerate the MMD calculation, in this study we propose an efficient method called FastMMD. The core idea of FastMMD is to equivalently transform the MMD with shift-invariant kernels into the amplitude expectation of a linear combination of sinusoid components based on Bochner’s theorem and Fourier transform (Rahimi & Recht, 2007). Taking advantage of sampling the Fourier transform, FastMMD decreases the time complexity for MMD calculation from O(N^2 d) to O(LNd), where N and d are the size and dimension of the sample set, respectively. Here, L is the number of basis functions for approximating kernels that determines the approximation accuracy. For kernels that are spherically invariant, the computation can be further accelerated to O(LN log d) by using the Fastfood technique (Le, Sarlós, & Smola, 2013). The uniform convergence of our method has also been theoretically proved in both unbiased and biased estimates. We also provide a geometric explanation for our method, ensemble of circular discrepancy, which helps us understand the insight of MMD and we hope will lead to more extensive metrics for assessing the two-sample test task. Experimental results substantiate that the accuracy of FastMMD is similar to that of MMD, with faster computation and lower variance than existing MMD approximation methods.", "title": "" }, { "docid": "f6592e6495527a8e8df9bede4e983e12", "text": "All Internet facing systems and applications carry security risks. Security professionals across the globe generally address these security risks by Vulnerability Assessment and Penetration Testing (VAPT). The VAPT is an offensive way of defending the cyber assets of an organization. It consists of two major parts, namely Vulnerability Assessment (VA) and Penetration Testing (PT). Vulnerability assessment includes the use of various automated tools and manual testing techniques to determine the security posture of the target system. In this step all the breach points and loopholes are found. These breach points/loopholes if found by an attacker can lead to heavy data loss and fraudulent intrusion activities. In Penetration testing the tester simulates the activities of a malicious attacker who tries to exploit the vulnerabilities of the target system. In this step the identified set of vulnerabilities in VA is used as input vector. This process of VAPT helps in assessing the effectiveness of the security measures that are present on the target system. 
In this paper we have described the entire process of VAPT, along with all the methodologies, models and standards. A shortlisted set of efficient and popular open source/free tools which are useful in conducting VAPT and the required list of precautions is given. A case study of a VAPT test conducted on a bank system using the shortlisted tools is also discussed.", "title": "" }, { "docid": "70ec2398526863c05b41866593214d0a", "text": "Matrix factorization (MF) is one of the most popular techniques for product recommendation, but is known to suffer from serious cold-start problems. Item cold-start problems are particularly acute in settings such as Tweet recommendation where new items arrive continuously. In this paper, we present a meta-learning strategy to address item cold-start when new items arrive continuously. We propose two deep neural network architectures that implement our meta-learning strategy. The first architecture learns a linear classifier whose weights are determined by the item history while the second architecture learns a neural network whose biases are instead adjusted. We evaluate our techniques on the real-world problem of Tweet recommendation. On production data at Twitter, we demonstrate that our proposed techniques significantly beat the MF baseline and also outperform production models for Tweet recommendation.", "title": "" }, { "docid": "ad327b34d34887ae6380cbb07b7748bb", "text": "IEEE 802.15.4 is the de facto standard for Wireless Sensor Networks (WSNs) that outlines the specifications of the PHY layer and MAC sub-layer in these networks. The MAC protocol is needed to orchestrate sensor nodes access to the wireless communication medium. Although distinguished by a set of strengths that contributed to its popularity in various WSNs, IEEE 802.15.4 MAC suffers from several limitations that play a role in deteriorating its performance. Also, from a practical perspective, 80.15.4-based networks are usually deployed in the vicinity of other wireless networks that operate in the same ISM band. This means that 802.15.4 MAC should be ready to cope with interference from other networks. These facts have motivated efforts to devise improved IEEE 802.15.4 MAC protocols for WSNs. In this paper we provide a survey for these protocols and highlight the methodologies they follow to enhance the performance of the IEEE 802.15.4 MAC protocol.", "title": "" }, { "docid": "99b5e24ed06352ab52d31165682248db", "text": "In recent years, the study of radiation pattern reconfigurable antennas has made great progress. Radiation pattern reconfigurable antennas have more advantages and better prospects compared with conventional antennas. They can be used to avoid noisy environments, maneuver away from electronic jamming, improve system gain and security, save energy by directing signals only towards the intended direction, and increase the number of subscribers by having a broad pattern in the wireless communication system. The latest researches of the radiation pattern reconfigurable antennas are analyzed and summarized in this paper to present the characteristics and classification. The trend of radiation pattern reconfigurable antennas' development is given at the end of the paper.", "title": "" }, { "docid": "13eaa316c8e41a9cc3807d60ba72db66", "text": "This is a short paper introducing pitfalls when implementing averaged scores. 
Although, it is common to compute averaged scores, it is good to specify in detail how the scores are computed.", "title": "" }, { "docid": "e99343a0ab1eb9007df4610ae35dec97", "text": "Who did what to whom is a major focus in natural language understanding, which is right the aim of semantic role labeling (SRL). Although SRL is naturally essential to text comprehension tasks, it is surprisingly ignored in previous work. This paper thus makes the first attempt to let SRL enhance text comprehension and inference through specifying verbal arguments and their corresponding semantic roles. In terms of deep learning models, our embeddings are enhanced by semantic role labels for more fine-grained semantics. We show that the salient labels can be conveniently added to existing models and significantly improve deep learning models in challenging text comprehension tasks. Extensive experiments on benchmark machine reading comprehension and inference datasets verify that the proposed semantic learning helps our system reach new state-of-the-art.", "title": "" }, { "docid": "ff1ca70f2ec75667d16479d7d09705de", "text": "This paper proposes the application of the asymmetrical duty cycle to the three-phase dc/dc pulse-width modulation isolated converter. Thus, soft commutation is achieved for a wide load range using the leakage inductance of the transformer and the intrinsic capacitance of the switches, as no additional semiconductor devices are needed. The resulting topology is characterized by an increase in the input current and output current frequency, by a factor of three compared to the full-bridge converter, which reduces the filters size. In addition, the rms current through the power components is lower, implying the improved thermal distribution of the losses. Besides, the three-phase transformer allows the reduction of the core size. In this paper, a mathematical analysis, the main waveforms, a design procedure, as well as simulation and experimental results obtained in a prototype of 6 kW are presented.", "title": "" }, { "docid": "d1a94ed95234d9ea660b6e4779a6a694", "text": "This study aims to analyse the scientific literature on sustainability and innovation in the automotive sector in the last 13 years. The research is classified as descriptive and exploratory. The process presented 31 articles in line with the research topic in the Scopus database. The bibliometric analysis identified the most relevant articles, authors, keywords, countries, research centers and journals for the subject from 2004 to 2016 in the Industrial Engineering domain. We concluded, through the systemic analysis, that the automotive sector is well structured on the issue of sustainability and process innovation. Innovations in the sector are of the incremental process type, due to the lower risk, lower costs and less complexity. However, the literature also points out that radical innovations are needed in order to fit the prevailing environmental standards. The selected studies show that environmental practices employed in the automotive sector are: the minimization of greenhouse gas emissions, life-cycle assessment, cleaner production, reverse logistics and eco-innovation. Thus, it displays the need for empirical studies in automotive companies on the environmental practices employed and how these practices impact innovation.", "title": "" }, { "docid": "711b3ed2cb9da33199dcc18f8b3fc98d", "text": "In this paper, we propose two ways of improving image classification based on bag-of-words representation [25]. 
Two shortcomings of this representation are the loss of the spatial information of visual words and the presence of noisy visual words due to the coarseness of the vocabulary building process. On the one hand, we propose a new representation of images that goes further in the analogy with textual data: visual sentences, that allows us to \"read\" visual words in a certain order, as in the case of text. We can therefore consider simple spatial relations between words. We also present a new image classification scheme that exploits these relations. It is based on the use of language models, a very popular tool from speech and text analysis communities. On the other hand, we propose new techniques to eliminate useless words, one based on geometric properties of the keypoints, the other on the use of probabilistic Latent Semantic Analysis (pLSA). Experiments show that our techniques can significantly improve image classification, compared to a classical Support Vector Machine-based classification.", "title": "" }, { "docid": "1e6310e8b16625e8f8319c7386723e55", "text": "Exploiting memory disclosure vulnerabilities like the HeartBleed bug may cause arbitrary reading of a victim's memory, leading to leakage of critical secrets such as crypto keys, personal identity and financial information. While isolating code that manipulates critical secrets into an isolated execution environment is a promising countermeasure, existing approaches are either too coarse-grained to prevent intra-domain attacks, or require excessive intervention from low-level software (e.g., hypervisor or OS), or both. Further, few of them are applicable to large-scale software with millions of lines of code. This paper describes a new approach, namely SeCage, which retrofits commodity hardware virtualization extensions to support efficient isolation of sensitive code manipulating critical secrets from the remaining code. SeCage is designed to work under a strong adversary model where a victim application or even the OS may be controlled by the adversary, while supporting large-scale software with small deployment cost. SeCage combines static and dynamic analysis to decompose monolithic software into several compart- ments, each of which may contain different secrets and their corresponding code. Following the idea of separating control and data plane, SeCage retrofits the VMFUNC mechanism and nested paging in Intel processors to transparently provide different memory views for different compartments, while allowing low-cost and transparent invocation across domains without hypervisor intervention.\n We have implemented SeCage in KVM on a commodity Intel machine. To demonstrate the effectiveness of SeCage, we deploy it to the Nginx and OpenSSH server with the OpenSSL library as well as CryptoLoop with small efforts. Security evaluation shows that SeCage can prevent the disclosure of private keys from HeartBleed attacks and memory scanning from rootkits. The evaluation shows that SeCage only incurs small performance and space overhead.", "title": "" }, { "docid": "4f7fdd852f520f6928eeb69b3d0d1632", "text": "Hadoop MapReduce is a popular framework for distributed storage and processing of large datasets and is used for big data analytics. It has various configuration parameters which play an important role in deciding the performance i.e., the execution time of a given big data processing job. Default values of these parameters do not result in good performance and therefore it is important to tune them. 
However, there is inherent difficulty in tuning the parameters due to two important reasons - first, the parameter search space is large and second, there are cross-parameter interactions. Hence, there is a need for a dimensionality-free method which can automatically tune the configuration parameters by taking into account the cross-parameter dependencies. In this paper, we propose a novel Hadoop parameter tuning methodology, based on a noisy gradient algorithm known as the simultaneous perturbation stochastic approximation (SPSA). The SPSA algorithm tunes the selected parameters by directly observing the performance of the Hadoop MapReduce system. The approach followed is independent of parameter dimensions and requires only 2 observations per iteration while tuning. We demonstrate the effectiveness of our methodology in achieving good performance on popular Hadoop benchmarks namely Grep, Bigram, Inverted Index, Word Co-occurrence and Terasort. Our method, when tested on a 25 node Hadoop cluster shows 45-66% decrease in execution time of Hadoop jobs on an average, when compared to prior methods. Further, our experiments also indicate that the parameters tuned by our method are resilient to changes in number of cluster nodes, which makes our method suitable to optimize Hadoop when it is provided as a service on the cloud.", "title": "" }, { "docid": "17d46377e67276ec3e416d6da4bb4965", "text": "There is an increasing trend of people leaving digital traces through social media. This reality opens new horizons for urban studies. With this kind of data, researchers and urban planners can detect many aspects of how people live in cities and can also suggest how to transform cities into more efficient and smarter places to live in. In particular, their digital trails can be used to investigate tastes of individuals, and what attracts them to live in a particular city or to spend their vacation there. In this paper we propose an unconventional way to study how people experience the city, using information from geotagged photographs that people take at different locations. We compare the spatial behavior of residents and tourists in 10 most photographed cities all around the world. The study was conducted on both a global and local level. On the global scale we analyze the 10 most photographed cities and measure how attractive each city is for people visiting it from other cities within the same country or from abroad. For the purpose of our analysis we construct the users’ mobility network and measure the strength of the links between each pair of cities as a level of attraction of people living in one city (i.e., origin) to the other city (i.e., destination). On the local level we study the spatial distribution of user activity and identify the photographed hotspots inside each city. The proposed methodology and the results of our study are a low cost mean to characterize touristic activity within a certain location and can help cities strengthening their touristic potential.", "title": "" }, { "docid": "30e0918ec670bdab298f4f5bb59c3612", "text": "Consider a single hard disk drive (HDD) composed of rotating platters and a single magnetic head. We propose a simple internal coding framework for HDDs that uses coding across drive blocks to reduce average block seek times. In particular, instead of the HDD controller seeking individual blocks, the drive performs coded-seeking: It seeks the closest subset of coded blocks, where a coded block contains partial information from multiple uncoded blocks. 
Coded-seeking is a tool that relaxes the scheduling of a full traveling salesman problem (TSP) on an HDD into a k-TSP. This may provide opportunities for new scheduling algorithms and to reduce average read times.", "title": "" }, { "docid": "b9879e6bdcc08250bde4a59c357062a8", "text": "Constructing datasets to analyse the progression of conflicts has been a longstanding objective of peace and conflict studies research. In essence, the problem is to reliably extract relevant text snippets and code (annotate) them using an ontology that is meaningful to social scientists. Such an ontology usually characterizes either types of violent events (killing, bombing, etc.), and/or the underlying drivers of conflict, themselves hierarchically structured, for example security, governance and economics, subdivided into conflict-specific indicators. Numerous coding approaches have been proposed in the social science literature, ranging from fully automated “machine” coding to human coding. Machine coding is highly error prone, especially for labelling complex drivers, and suffers from extraction of duplicated events, but human coding is expensive, and suffers from inconsistency between annotators; thus hybrid approaches are required. In this paper, we analyse experimentally how human input can most effectively be used in a hybrid system to complement machine coding. Using two newly created real-world datasets, we show that machine learning methods improve on rule-based automated coding for filtering large volumes of input, while human verification of relevant/irrelevant text leads to improved performance of machine learning for predicting multiple labels in the ontology.", "title": "" }, { "docid": "42d3f666325c3c9e2d61fcbad3c6659a", "text": "Supernumerary or accessory nostrils are a very rare type of congenital nasal anomaly, with only a few cases reported in the literature. They can be associated with such malformations as facial clefts and they can be unilateral or bilateral, with most cases reported being unilateral. The accessory nostril may or may not communicate with the ipsilateral nasal cavity, probably depending on the degree of embryological progression of the anomaly. A case of simple supernumerary left nostril with no nasal cavity communication and with a normally developed nose is presented. The surgical treatment is described and the different speculative theories related to the embryogenesis of supernumerary nostrils are also reviewed.", "title": "" }, { "docid": "64e573006e2fb142dba1b757b1e4f20d", "text": "Online learning algorithms often have to operate in the presence of concept drift (i.e., the concepts to be learned can change with time). This paper presents a new categorization for concept drift, separating drifts according to different criteria into mutually exclusive and nonheterogeneous categories. Moreover, although ensembles of learning machines have been used to learn in the presence of concept drift, there has been no deep study of why they can be helpful for that and which of their features can contribute or not for that. As diversity is one of these features, we present a diversity analysis in the presence of different types of drifts. We show that, before the drift, ensembles with less diversity obtain lower test errors. On the other hand, it is a good strategy to maintain highly diverse ensembles to obtain lower test errors shortly after the drift independent on the type of drift, even though high diversity is more important for more severe drifts. 
Longer after the drift, high diversity becomes less important. Diversity by itself can help to reduce the initial increase in error caused by a drift, but does not provide a faster recovery from drifts in the long term.", "title": "" } ]
scidocsrr
2c3358357f46f2a5b9906993ff64a414
An efficient Hadoop data replication method design for heterogeneous clusters
[ { "docid": "e9aac361f8ca1bb8f10409859aef718d", "text": "MapReduce has become an important distributed processing model for large-scale data-intensive applications like data mining and web indexing. Hadoop-an open-source implementation of MapReduce is widely used for short jobs requiring low response time. The current Hadoop implementation assumes that computing nodes in a cluster are homogeneous in nature. Data locality has not been taken into account for launching speculative map tasks, because it is assumed that most maps are data-local. Unfortunately, both the homogeneity and data locality assumptions are not satisfied in virtualized data centers. We show that ignoring the data-locality issue in heterogeneous environments can noticeably reduce the MapReduce performance. In this paper, we address the problem of how to place data across nodes in a way that each node has a balanced data processing load. Given a dataintensive application running on a Hadoop MapReduce cluster, our data placement scheme adaptively balances the amount of data stored in each node to achieve improved data-processing performance. Experimental results on two real data-intensive applications show that our data placement strategy can always improve the MapReduce performance by rebalancing data across nodes before performing a data-intensive application in a heterogeneous Hadoop cluster.", "title": "" } ]
[ { "docid": "f3cda42986875ff2192d663da9e8a8d0", "text": "Available bandwidth estimation is useful for route selection in overlay networks, QoS verification, and traffic engineering. Recent years have seen a surge in interest in available bandwidth estimation. A few tools have been proposed and evaluated in simulation and over a limited number of Internet paths, but there is still great uncertainty in the performance of these tools over the Internet at large.This paper introduces Spruce, a simple, light-weight tool for measuring available bandwidth, and compares it with two existing tools, IGI and Pathload, over 400 different Internet paths. The comparison focuses on accuracy, failure patterns, probe overhead, and implementation issues. The paper verifies the measured available bandwidth by comparing it to Multi-Router Traffic Grapher (MRTG) data and by measuring how each tool responds to induced changes in available bandwidth.The measurements show that Spruce is more accurate than Pathload and IGI. Pathload tends to overestimate the available bandwidth whereas IGI becomes insensitive when the bottleneck utilization is large.", "title": "" }, { "docid": "a6dd5d87c259c279e5c67b749dfe8643", "text": "Grasping is one of the most significant manipulation in everyday life, which can be influenced a lot by grasping order when there are several objects in the scene. Therefore, the manipulation relationships are needed to help robot better grasp and manipulate objects. This paper presents a new convolutional neural network architecture called Visual Manipulation Relationship Network (VMRN), which is used to help robot detect targets and predict the manipulation relationships in real time. To implement end-to-end training and meet real-time requirements in robot tasks, we propose the Object Pairing Pooling Layer (OPL), which can help to predict all manipulation relationships in one forward process. To train VMRN, we collect a dataset named Visual Manipulation Relationship Dataset (VMRD) consisting of 5185 images with more than 17000 object instances and the manipulation relationships between all possible pairs of objects in every image, which is labeled by the manipulation relationship tree. The experiment results show that the new network architecture can detect objects and predict manipulation relationships simultaneously and meet the real-time requirements in robot tasks.", "title": "" }, { "docid": "032a05f5842c0f0e25de538687c0b450", "text": "In this paper, the low-voltage ride-through (LVRT) capability of the doubly fed induction generator (DFIG)-based wind energy conversion system in the asymmetrical grid fault situation is analyzed, and the control scheme for the system is proposed to follow the requirements defined by the grid codes. As analyzed in the paper, the control efforts of the negative-sequence current are much higher than that of the positive-sequence current for the DFIG. As a result, the control capability of the DFIG restrained by the dc-link voltage will degenerate for the fault type with higher negative-sequence voltage component and 2φ fault turns out to be the most serious scenario for the LVRT problem. When the fault location is close to the grid connection point, the DFIG may be out of control resulting in non-ride-through zones. In the worst circumstance when LVRT can succeed, the maximal positive-sequence reactive current supplied by the DFIG is around 0.4 pu, which coordinates with the present grid code. 
Increasing the power rating of the rotor-side converter can improve the LVRT capability of the DFIG but induce additional costs. Based on the analysis, an LVRT scheme for the DFIG is also proposed by taking account of the code requirements and the control capability of the converters. As verified by the simulation and experimental results, the scheme can promise the DFIG to supply the defined positive-sequence reactive current to support the power grid and mitigate the oscillations in the generator torque and dc-link voltage, which improves the reliability of the wind farm and the power system.", "title": "" }, { "docid": "f577f970f841d8dee34e524ba661e727", "text": "The rapid growth in the amount of user-generated content (UGCs) online necessitates for social media companies to automatically extract knowledge structures (concepts) from user-generated images (UGIs) and user-generated videos (UGVs) to provide diverse multimedia-related services. For instance, recommending preference-aware multimedia content, the understanding of semantics and sentics from UGCs, and automatically computing tag relevance for UGIs are benefited from knowledge structures extracted from multiple modalities. Since contextual information captured by modern devices in conjunction with a media item greatly helps in its understanding, we leverage both multimedia content and contextual information (eg., spatial and temporal metadata) to address above-mentioned social media problems in our doctoral research. We present our approaches, results, and works in progress on these problems.", "title": "" }, { "docid": "18e1a3bbb95237862f9d48cf18ce24f1", "text": "To improve real-time control performance and reduce possible negative impacts of photovoltaic (PV) systems, an accurate forecasting of PV output is required, which is an important function in the operation of an energy management system (EMS) for distributed energy resources. In this paper, a weather-based hybrid method for 1-day ahead hourly forecasting of PV power output is presented. The proposed approach comprises classification, training, and forecasting stages. In the classification stage, the self-organizing map (SOM) and learning vector quantization (LVQ) networks are used to classify the collected historical data of PV power output. The training stage employs the support vector regression (SVR) to train the input/output data sets for temperature, probability of precipitation, and solar irradiance of defined similar hours. In the forecasting stage, the fuzzy inference method is used to select an adequate trained model for accurate forecast, according to the weather information collected from Taiwan Central Weather Bureau (TCWB). The proposed approach is applied to a practical PV power generation system. Numerical results show that the proposed approach achieves better prediction accuracy than the simple SVR and traditional ANN methods.", "title": "" }, { "docid": "2b34bd00f114ddd7758bf4878edcab45", "text": "This paper considers an UWB balun optimized for a frequency band from 6 to 8.5 GHz. The balun provides a transition from unbalanced coplanar waveguide (CPW) to balanced coplanar stripline (CPS), which is suitable for feeding broadband coplanar antennas such as Vivaldi or bow-tie antennas. It is shown, that applying a solid ground plane under the CPS-to-CPS transition enables decreasing its area by a factor of 4.7. Such compact balun can be used for feeding uniplanar antennas, while significantly saving substrate area. 
Several transition configurations have been fabricated for single and double-layer configurations. They have been verified by comparison with results both from a full-wave electromagnetic (EM) simulation and experimental measurements.", "title": "" }, { "docid": "f8f1e4f03c6416e9d9500472f5e00dbe", "text": "Template attack is the most common and powerful profiled side channel attack. It relies on a realistic assumption regarding the noise of the device under attack: the probability density function of the data is a multivariate Gaussian distribution. To relax this assumption, a recent line of research has investigated new profiling approaches mainly by applying machine learning techniques. The obtained results are commensurate, and in some particular cases better, compared to template attack. In this work, we propose to continue this recent line of research by applying more sophisticated profiling techniques based on deep learning. Our experimental results confirm the overwhelming advantages of the resulting new attacks when targeting both unprotected and protected cryptographic implementations.", "title": "" }, { "docid": "0bef4c6547ac1266686bf53fe93f05fc", "text": "According to some estimates, more than half of the world's population is multilingual to some extent. Because of the centrality of language use to human experience and the deep connections between linguistic and nonlinguistic processing, it would not be surprising to find that there are interactions between bilingualism and cognitive and brain processes. The present review uses the framework of experience-dependent plasticity to evaluate the evidence for systematic modifications of brain and cognitive systems that can be attributed to bilingualism. The review describes studies investigating the relation between bilingualism and cognition in infants and children, younger and older adults, and patients, using both behavioral and neuroimaging methods. Excluded are studies whose outcomes focus primarily on linguistic abilities because of their more peripheral contribution to the central question regarding experience-dependent changes to cognition. Although most of the research discussed in the review reports some relation between bilingualism and cognitive or brain outcomes, several areas of research, notably behavioral studies with young adults, largely fail to show these effects. These discrepancies are discussed and considered in terms of methodological and conceptual issues. The final section proposes an account based on \"executive attention\" to explain the range of research findings and to set out an agenda for the next steps in this field. (PsycINFO Database Record", "title": "" }, { "docid": "1de0fb2c19bf7a61ac2c89af49e3b386", "text": "Many situations in human life present choices between (a) narrowly preferred particular alternatives and (b) narrowly less preferred (or aversive) particular alternatives that nevertheless form part of highly preferred abstract behavioral patterns. Such alternatives characterize problems of self-control. For example, at any given moment, a person may accept alcoholic drinks yet also prefer being sober to being drunk over the next few days. Other situations present choices between (a) alternatives beneficial to an individual and (b) alternatives that are less beneficial (or harmful) to the individual that would nevertheless be beneficial if chosen by many individuals. 
Such alternatives characterize problems of social cooperation; choices of the latter alternative are generally considered to be altruistic. Altruism, like self-control, is a valuable temporally-extended pattern of behavior. Like self-control, altruism may be learned and maintained over an individual's lifetime. It needs no special inherited mechanism. Individual acts of altruism, each of which may be of no benefit (or of possible harm) to the actor, may nevertheless be beneficial when repeated over time. However, because each selfish decision is individually preferred to each altruistic decision, people can benefit from altruistic behavior only when they are committed to an altruistic pattern of acts and refuse to make decisions on a case-by-case basis.", "title": "" }, { "docid": "013ca7d513b658f2dac68644a915b43a", "text": "Money laundering a suspicious fund transfer between accounts without names which affects and threatens the stability of countries economy. The growth of internet technology and loosely coupled nature of fund transfer gateways helps the malicious user’s to perform money laundering. There are many approaches has been discussed earlier for the detection of money laundering and most of them suffers with identifying the root of money laundering. We propose a time variant approach using behavioral patterns to identify money laundering. In this approach, the transaction logs are split into various time window and for each account specific to the fund transfer the time value is split into different time windows and we generate the behavioral pattern of the user. The behavioral patterns specifies the method of transfer between accounts and the range of amounts and the frequency of destination accounts and etc.. Based on generated behavioral pattern , the malicious transfers and accounts are identified to detect the malicious root account. The proposed approach helps to identify more suspicious accounts and their group accounts to perform money laundering identification. The proposed approach has produced efficient results with less time complexity.", "title": "" }, { "docid": "a8f5f7c147c1ac8cabf86d4809aa3f65", "text": "Structural gene rearrangements resulting in gene fusions are frequent events in solid tumours. The identification of certain activating fusions can aid in the diagnosis and effective treatment of patients with tumours harbouring these alterations. Advances in the techniques used to identify fusions have enabled physicians to detect these alterations in the clinic. Targeted therapies directed at constitutively activated oncogenic tyrosine kinases have proven remarkably effective against cancers with fusions involving ALK, ROS1, or PDGFB, and the efficacy of this approach continues to be explored in malignancies with RET, NTRK1/2/3, FGFR1/2/3, and BRAF/CRAF fusions. Nevertheless, prolonged treatment with such tyrosine-kinase inhibitors (TKIs) leads to the development of acquired resistance to therapy. This resistance can be mediated by mutations that alter drug binding, or by the activation of bypass pathways. Second-generation and third-generation TKIs have been developed to overcome resistance, and have variable levels of activity against tumours harbouring individual mutations that confer resistance to first-generation TKIs. 
The rational sequential administration of different inhibitors is emerging as a new treatment paradigm for patients with tumours that retain continued dependency on the downstream kinase of interest.", "title": "" }, { "docid": "296025d4851569031f0ebe36d792fadc", "text": "In this paper we present the first, to the best of our knowledge, discourse parser that is able to predict non-tree DAG structures. We use Integer Linear Programming (ILP) to encode both the objective function and the constraints as global decoding over local scores. Our underlying data come from multi-party chat dialogues, which require the prediction of DAGs. We use the dependency parsing paradigm, as has been done in the past (Muller et al., 2012; Li et al., 2014; Afantenos et al., 2015), but we use the underlying formal framework of SDRT and exploit SDRT’s notions of left and right distributive relations. We achieve an Fmeasure of 0.531 for fully labeled structures which beats the previous state of the art.", "title": "" }, { "docid": "33eebe279e80452aec3e2e5bd28a708d", "text": "Context aware recommender systems go beyond the traditional personalized recommendation models by incorporating a form of situational awareness. They provide recommendations that not only correspond to a user's preference profile, but that are also tailored to a given situation or context. We consider the setting in which contextual information is represented as a subset of an item feature space describing short-term interests or needs of a user in a given situation. This contextual information can be provided by the user in the form of an explicit query, or derived implicitly.\n We propose a unified probabilistic model that integrates user profiles, item representations, and contextual information. The resulting recommendation framework computes the conditional probability of each item given the user profile and the additional context. These probabilities are used as recommendation scores for ranking items. Our model is an extension of the Latent Dirichlet Allocation (LDA) model that provides the capability for joint modeling of users, items, and the meta-data associated with contexts. Each user profile is modeled as a mixture of the latent topics. The discovered latent topics enable our system to handle missing data in item features. We demonstrate the application of our framework for article and music recommendation. In the latter case, the set of popular tags from social tagging Web sites are used for context descriptions. 
Our evaluation results show that considering context can help improve the quality of recommendations.", "title": "" }, { "docid": "c7188c78b818b9d487b76b9d2c731992", "text": "Overview 7 Purpose of the study 7 Background to the study 7 The place of CIL in relation to traditional disciplines 10 Research questions, participants, and instruments 12 Computer and information literacy framework 15 Overview 15 Defining computer and information literacy 16 Structure of the computer and information literacy construct 18 Strands and aspects 19 Contextual framework 25 Overview 25 Classification of contextual factors 25 Contextual levels and variables 27 Assessment design 35 The ICILS test design 35 The ICILS test instrument 36 Types of assessment task 36 Mapping test items to the CIL framework 43 The ICILS student questionnaire and context instruments 44 Foreword As an international, nonprofit cooperative of national research institutions and governmental research agencies, the International Association for the Evaluation of Educational Achievement (IEA) has conducted more than 30 large-scale comparative studies in countries around the world. These studies have reported on educational policies, practices, and learning outcomes on a wide range of topics and subject matters. These investigations have proven to be a key resource for monitoring educational quality and progress within individual countries and across a broad international context. The International Computer and Information Literacy Study (ICILS) follows a series of earlier IEA studies that had, as their particular focus, information and communication technologies (ICT) in education. The first of these, the Computers in Education Study (COMPED), was carried out in 1989 and again in 1992 for the purpose of reporting on the educational use of computers in the context of emerging governmental initiatives to implement ICT in schools. The next series of projects in this area was the Second These projects provided an update on the implementation of computer technology resources in schools and their utilization in the teaching process. The continuing rapid development of computer and other information technologies has transformed the environment in which young people access, create, and share information. Many countries, having recognized the imperative of digital technology in all its forms, acknowledge the need to educate their citizens in the use of these technologies so that they and their society can secure the future economic and social benefits of proficiency in the use of digital technologies. Within this context, many questions relating to the efficacy of instructional programs and how instruction is progressed in the area of digital literacy arise. ICILS represents the first international comparative study to investigate how students are developing the set of knowledge, understanding, …", "title": "" }, { "docid": "4b3d890a8891cd8c84713b1167383f6f", "text": "The present research tested the hypothesis that concepts of gratitude are prototypically organized and explored whether lay concepts of gratitude are broader than researchers' concepts of gratitude. In five studies, evidence was found that concepts of gratitude are indeed prototypically organized. In Study 1, participants listed features of gratitude. In Study 2, participants reliably rated the centrality of these features. 
In Studies 3a and 3b, participants perceived that a hypothetical other was experiencing more gratitude when they read a narrative containing central as opposed to peripheral features. In Study 4, participants remembered more central than peripheral features in gratitude narratives. In Study 5a, participants generated more central than peripheral features when they wrote narratives about a gratitude incident, and in Studies 5a and 5b, participants generated both more specific and more generalized types of gratitude in similar narratives. Throughout, evidence showed that lay conceptions of gratitude are broader than current research definitions.", "title": "" },

{ "docid": "f32213171b0509e23770333ba4874cb5", "text": "Regulatory, safety, and environmental issues have prompted the development of aqueous enzymatic extraction (AEE) for extracting components from oil-bearing materials. The emulsion resulting from AEE requires de-emulsification to separate the oil; when enzymes are used for this purpose, the method is known as aqueous enzymatic emulsion de-emulsification (AEED). In general, enzyme assisted oil extraction is known to yield oil having highly favourable characteristics. This review covers technological aspects of enzyme assisted oil extraction, and explores the quality characteristics of the oils obtained, focusing particularly on recent efforts undertaken to improve process economics by recovering and reusing enzymes.", "title": "" },

{ "docid": "f76ccb78acfe64aaaac88ca42fa5b6ff", "text": "In Civil Infrastructure System (CIS) applications, the requirement of blending synthetic and physical objects distinguishes Augmented Reality (AR) from other visualization technologies in three aspects: (1) it reinforces the connections between people and objects, and promotes engineers’ appreciation about their working context; (2) it allows engineers to perform field tasks with the awareness of both the physical and synthetic environment; and (3) it offsets the significant cost of 3D Model Engineering by including the real world background. This paper reviews critical problems in AR and investigates technical approaches to address the fundamental challenges that prevent the technology from being usefully deployed in CIS applications, such as the alignment of virtual objects with the real environment continuously across time and space; blending of virtual entities with their real background faithfully to create a sustained illusion of co-existence; and the integration of these methods to a scalable and extensible computing AR framework that is openly accessible to the teaching and research community. The research findings have been evaluated in several challenging CIS applications where the potential of having a significant economic and social impact is high. Examples of validation test beds implemented include an AR visual excavator-utility collision avoidance system that enables workers to “see” buried utilities hidden under the ground surface, thus helping prevent accidental utility strikes; an AR post-disaster reconnaissance framework that enables building inspectors to rapidly evaluate and quantify structural damage sustained by buildings in seismic events such as earthquakes or blasts; and a tabletop collaborative AR visualization framework that allows multiple users to observe and interact with visual simulations of engineering processes. 2015 Elsevier Ltd.
All rights reserved.", "title": "" }, { "docid": "6885684efb07ed9502eb0dafdfb54e95", "text": "Both lesion and functional imaging studies in humans, as well as neurophysiological studies in nonhuman primates, demonstrate the importance of the prefrontal cortex in representing the emotional value of sensory stimuli. Here we investigated single-neuron responses to emotional stimuli in an awake person with normal intellect. Recording from neurons within healthy tissue in ventral sites of the right prefrontal cortex, we found short-latency (120–160 ms) responses selective for aversive visual stimuli.", "title": "" }, { "docid": "5031c9b3dfbe2bf2a07a4f1414f594e0", "text": "BACKGROUND\nWe assessed the effects of a three-year national-level, ministry-led health information system (HIS) data quality intervention and identified associated health facility factors.\n\n\nMETHODS\nMonthly summary HIS data concordance between a gold standard data quality audit and routine HIS data was assessed in 26 health facilities in Sofala Province, Mozambique across four indicators (outpatient consults, institutional births, first antenatal care visits, and third dose of diphtheria, pertussis, and tetanus vaccination) and five levels of health system data aggregation (daily facility paper registers, monthly paper facility reports, monthly paper district reports, monthly electronic district reports, and monthly electronic provincial reports) through retrospective yearly audits conducted July-August 2010-2013. We used mixed-effects linear models to quantify changes in data quality over time and associated health system determinants.\n\n\nRESULTS\nMedian concordance increased from 56.3% during the baseline period (2009-2010) to 87.5% during 2012-2013. Concordance improved by 1.0% (confidence interval [CI]: 0.60, 1.5) per month during the intervention period of 2010-2011 and 1.6% (CI: 0.89, 2.2) per month from 2011-2012. No significant improvements were observed from 2009-2010 (during baseline period) or 2012-2013. Facilities with more technical staff (aβ: 0.71; CI: 0.14, 1.3), more first antenatal care visits (aβ: 3.3; CI: 0.43, 6.2), and fewer clinic beds (aβ: -0.94; CI: -1.7, -0.20) showed more improvements. Compared to facilities with no stock-outs, facilities with five essential drugs stocked out had 51.7% (CI: -64.8 -38.6) lower data concordance.\n\n\nCONCLUSIONS\nA data quality intervention was associated with significant improvements in health information system data concordance across public-sector health facilities in rural and urban Mozambique. Concordance was higher at those facilities with more human resources for health and was associated with fewer clinic-level stock-outs of essential medicines. Increased investments should be made in data audit and feedback activities alongside targeted efforts to improve HIS data in low- and middle-income countries.", "title": "" } ]
scidocsrr
85407563d625faeaa04a811303326ce2
Collaborative filtering with temporal dynamics
[ { "docid": "7ce79a08969af50c1712f0e291dd026c", "text": "Collaborative filtering (CF) is valuable in e-commerce, and for direct recommendations for music, movies, news etc. But today's systems have several disadvantages, including privacy risks. As we move toward ubiquitous computing, there is a great potential for individuals to share all kinds of information about places and things to do, see and buy, but the privacy risks are severe. In this paper we describe a new method for collaborative filtering which protects the privacy of individual data. The method is based on a probabilistic factor analysis model. Privacy protection is provided by a peer-to-peer protocol which is described elsewhere, but outlined in this paper. The factor analysis approach handles missing data without requiring default values for them. We give several experiments that suggest that this is most accurate method for CF to date. The new algorithm has other advantages in speed and storage over previous algorithms. Finally, we suggest applications of the approach to other kinds of statistical analyses of survey or questionaire data.", "title": "" }, { "docid": "f7d535f9a5eeae77defe41318d642403", "text": "On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and re-using them when a previous context re-appears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift.", "title": "" } ]
[ { "docid": "da86c72fff98d51d4d78ece7516664fe", "text": "OBJECTIVE\nThe purpose of this study was to establish an Indian reference for normal fetal nasal bone length at 16-26 weeks of gestation.\n\n\nMETHODS\nThe fetal nasal bone was measured by ultrasound in 2,962 pregnant women at 16-26 weeks of gestation from 2004 to 2009 by a single operator, who performed three measurements for each woman when the fetus was in the midsagittal plane and the nasal bone was between a 45 and 135° angle to the ultrasound beam. All neonates were examined after delivery to confirm the absence of congenital abnormalities.\n\n\nRESULTS\nThe median nasal bone length increased with gestational age from 3.3 mm at 16 weeks to 6.65 mm at 26 weeks in a linear relationship. The fifth percentile nasal bone lengths were 2.37, 2.4, 2.8, 3.5, 3.6, 3.9, 4.3, 4.6, 4.68, 4.54, and 4.91 mm at 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, and 26 weeks, respectively.\n\n\nCONCLUSIONS\nWe have established the nasal bone length in South Indian fetuses at 16-26 weeks of gestation and there is progressive increase in the fifth percentile of nasal bone length with advancing gestational age. Hence, gestational age should be considered while defining hypoplasia of the nasal bone.", "title": "" }, { "docid": "8ea08d331deff938cddbe10f16a25b9d", "text": "High-throughput RNA sequencing is an increasingly accessible method for studying gene structure and activity on a genome-wide scale. A critical step in RNA-seq data analysis is the alignment of partial transcript reads to a reference genome sequence. To assess the performance of current mapping software, we invited developers of RNA-seq aligners to process four large human and mouse RNA-seq data sets. In total, we compared 26 mapping protocols based on 11 programs and pipelines and found major performance differences between methods on numerous benchmarks, including alignment yield, basewise accuracy, mismatch and gap placement, exon junction discovery and suitability of alignments for transcript reconstruction. We observed concordant results on real and simulated RNA-seq data, confirming the relevance of the metrics employed. Future developments in RNA-seq alignment methods would benefit from improved placement of multimapped reads, balanced utilization of existing gene annotation and a reduced false discovery rate for splice junctions.", "title": "" }, { "docid": "91a0528996ab5ea3ced5f88bbfff6d35", "text": "In this paper, automatic motion control is investigated for wheeled inverted pendulum (WIP) models, which have been widely applied for modeling of a large range of two wheeled modern vehicles. First, the underactuated WIP model is decomposed into a fully actuated second-order subsystem Σa consisting of planar movement of vehicle forward motion and yaw angular motions, and a passive (nonactuated) first-order subsystem Σb of pendulum tilt motion. Due to the unknown dynamics of subsystem Σa and universal approximation ability of neural network (NN), an adaptive NN scheme has been employed for motion control of subsystem Σa. Model reference approach has been used, whereas the reference model is optimized by finite time linear quadratic regulation technique. Inspired by human control strategy of inverted pendulum, the tilt angular motion in the passive subsystem Σb has been indirectly controlled using the dynamic coupling with planar forward motion of subsystem Σa, such that the satisfactory tracking of set tilt angle can be guaranteed. 
Rigorous theoretic analysis has been established, and simulation studies have been performed to demonstrate the developed method.", "title": "" }, { "docid": "c45447fd682f730f350bae77c835b63a", "text": "In this paper, we demonstrate a high heat resistant bonding method by Cu/Sn transient liquid phase sintering (TLPS) method can be applied to die-attachment of silicon carbide (SiC)-MOSFET in high temperature operation power module. The die-attachment is made of nano-composite Cu/Sn TLPS paste. The die shear strength was 40 MPa for 3 × 3 mm2 SiC chip after 1,000 cycles of thermal cycle testing between −40 °C and 250 °C. This indicated a high reliability of Cu/Sn die-attachment. The thermal resistance of the Cu/Sn die-attachment was evaluated by transient thermal analysis using a sample in which the SiC-MOSFET (die size: 4.04 × 6.44 mm2) was bonded with Cu/Sn die-attachment. The thermal resistance of Cu/Sn die-attachment was 0.13 K/W, which was comparable to the one of Au/Ge die-attachment (0.12 K/W). The validity of nano-composite Cu/Sn TLPS paste as a die-attachment for high-temperature operation SiC power module is confirmed.", "title": "" }, { "docid": "b3450073ad3d6f2271d6a56fccdc110a", "text": "OBJECTIVE\nMindfulness-based therapies (MBTs) have been shown to be efficacious in treating internally focused psychological disorders (e.g., depression); however, it is still unclear whether MBTs provide improved functioning and symptom relief for individuals with externalizing disorders, including ADHD. To clarify the literature on the effectiveness of MBTs in treating ADHD and to guide future research, an effect-size analysis was conducted.\n\n\nMETHOD\nA systematic review of studies published in PsycINFO, PubMed, and Google Scholar was completed from the earliest available date until December 2014.\n\n\nRESULTS\nA total of 10 studies were included in the analysis of inattention and the overall effect size was d = -.66. A total of nine studies were included in the analysis of hyperactivity/impulsivity and the overall effect was calculated at d = -.53.\n\n\nCONCLUSION\nResults of this study highlight the possible benefits of MBTs in reducing symptoms of ADHD.", "title": "" }, { "docid": "0ccfe04a4426e07dcbd0260d9af3a578", "text": "We present an efficient algorithm to perform approximate offsetting operations on geometric models using GPUs. Our approach approximates the boundary of an object with point samples and computes the offset by merging the balls centered at these points. The underlying approach uses Layered Depth Images (LDI) to organize the samples into structured points and performs parallel computations using multiple cores. We use spatial hashing to accelerate intersection queries and balance the workload among various cores. Furthermore, the problem of offsetting with a large distance is decomposed into successive offsetting using smaller distances. We derive bounds on the accuracy of offset computation as a function of the sampling rate of LDI and offset distance. In practice, our GPU-based algorithm can accurately compute offsets of models represented using hundreds of thousands of points in a few seconds on GeForce GTX 580 GPU. 
We observe more than 100 times speedup over prior serial CPU-based approximate offset computation algorithms.", "title": "" }, { "docid": "e32b1cf52237312436918c2666283c38", "text": "Early patterns of Digg diggs and YouTube views reflect long-term user interest.", "title": "" }, { "docid": "c075c26fcfad81865c58a284013c0d33", "text": "A novel pulse compression technique is developed that improves the axial resolution of an ultrasonic imaging system and provides a boost in the echo signal-to-noise ratio (eSNR). The new technique, called the resolution enhancement compression (REC) technique, was validated with simulations and experimental measurements. Image quality was examined in terms of three metrics: the cSNR, the bandwidth, and the axial resolution through the modulation transfer function (MTF). Simulations were conducted with a weakly-focused, single-element ultrasound source with a center frequency of 2.25 MHz. Experimental measurements were carried out with a single-element transducer (f/3) with a center frequency of 2.25 MHz from a planar reflector and wire targets. In simulations, axial resolution of the ultrasonic imaging system was almost doubled using the REC technique (0.29 mm) versus conventional pulsing techniques (0.60 mm). The -3 dB pulse/echo bandwidth was more than doubled from 48% to 97%, and maximum range sidelobes were -40 dB. Experimental measurements revealed an improvement in axial resolution using the REC technique (0.31 mm) versus conventional pulsing (0.44 mm). The -3 dB pulse/echo bandwidth was doubled from 56% to 113%, and maximum range sidelobes were observed at -45 dB. In addition, a significant gain in eSNR (9 to 16.2 dB) was achieved", "title": "" }, { "docid": "b0e25c5e16c5e9f9dc26ac5fa8c3b74c", "text": "Information security awareness and behavior: a theory-based literature review Benedikt Lebek, Jörg Uffen, Markus Neumann, Bernd Hohler, Michael H. Breitner, Article information: To cite this document: Benedikt Lebek, Jörg Uffen, Markus Neumann, Bernd Hohler, Michael H. Breitner, (2014) \"Information security awareness and behavior: a theory-based literature review\", Management Research Review, Vol. 37 Issue: 12, pp.1049-1092, https://doi.org/10.1108/MRR-04-2013-0085 Permanent link to this document: https://doi.org/10.1108/MRR-04-2013-0085", "title": "" }, { "docid": "4f8a52941e24de8ce82ba31cd3250deb", "text": "BACKGROUND\nThere is an increasing use of technology for teaching and learning in medical education but often the use of educational theory to inform the design is not made explicit. 
The educational theories, both normative and descriptive, used by medical educators determine how the technology is intended to facilitate learning and may explain why some interventions with technology may be less effective compared with others.\n\n\nAIMS\nThe aim of this study is to highlight the importance of medical educators making explicit the educational theories that inform their design of interventions using technology.\n\n\nMETHOD\nThe use of illustrative examples of the main educational theories to demonstrate the importance of theories informing the design of interventions using technology.\n\n\nRESULTS\nHighlights the use of educational theories for theory-based and realistic evaluations of the use of technology in medical education.\n\n\nCONCLUSION\nAn explicit description of the educational theories used to inform the design of an intervention with technology can provide potentially useful insights into why some interventions with technology are more effective than others. An explicit description is also an important aspect of the scholarship of using technology in medical education.", "title": "" }, { "docid": "45fb31643f4fd53b08c51818f284f2df", "text": "This paper introduces a new type of fuzzy inference systems, denoted as dynamic evolving neural-fuzzy inference system (DENFIS), for adaptive online and offline learning, and their application for dynamic time series prediction. DENFIS evolve through incremental, hybrid (supervised/unsupervised), learning, and accommodate new input data, including new features, new classes, etc., through local element tuning. New fuzzy rules are created and updated during the operation of the system. At each time moment, the output of DENFIS is calculated through a fuzzy inference system based on -most activated fuzzy rules which are dynamically chosen from a fuzzy rule set. Two approaches are proposed: 1) dynamic creation of a first-order Takagi–Sugeno-type fuzzy rule set for a DENFIS online model; and 2) creation of a first-order Takagi–Sugeno-type fuzzy rule set, or an expanded high-order one, for a DENFIS offline model. A set of fuzzy rules can be inserted into DENFIS before or during its learning process. Fuzzy rules can also be extracted during or after the learning process. An evolving clustering method (ECM), which is employed in both online and offline DENFIS models, is also introduced. It is demonstrated that DENFIS can effectively learn complex temporal sequences in an adaptive way and outperform some well-known, existing models.", "title": "" }, { "docid": "8e52cdff14dddd82a4ad8fc5b967c1b2", "text": "Learning-based binary hashing has become a powerful paradigm for fast search and retrieval in massive databases. However, due to the requirement of discrete outputs for the hash functions, learning such functions is known to be very challenging. In addition, the objective functions adopted by existing hashing techniques are mostly chosen heuristically. In this paper, we propose a novel generative approach to learn hash functions through Minimum Description Length principle such that the learned hash codes maximally compress the dataset and can also be used to regenerate the inputs. We also develop an efficient learning algorithm based on the stochastic distributional gradient, which avoids the notorious difficulty caused by binary output constraints, to jointly optimize the parameters of the hash function and the associated generative model. 
Extensive experiments on a variety of large-scale datasets show that the proposed method achieves better retrieval results than the existing state-of-the-art methods.", "title": "" },

{ "docid": "3725224178318d33b4c8ceecb6f03cfd", "text": "The 'chain of survival' has been a useful tool for improving the understanding of, and the quality of the response to, cardiac arrest for many years. In the 2005 European Resuscitation Council Guidelines the importance of recognising critical illness and preventing cardiac arrest was highlighted by their inclusion as the first link in a new four-ring 'chain of survival'. However, recognising critical illness and preventing cardiac arrest are complex tasks, each requiring the presence of several essential steps to ensure clinical success. This article proposes the adoption of an additional chain for in-hospital settings--a 'chain of prevention'--to assist hospitals in structuring their care processes to prevent and detect patient deterioration and cardiac arrest. The five rings of the chain represent 'staff education', 'monitoring', 'recognition', the 'call for help' and the 'response'. It is believed that a 'chain of prevention' has the potential to be understood well by hospital clinical staff of all grades, disciplines and specialties, patients, and their families and friends. The chain provides a structure for research to identify the importance of each of the various components of rapid response systems.", "title": "" },

{ "docid": "ceb42399b7cd30b15d27c30d7c4b57b6", "text": "In this letter, the performance of non-orthogonal multiple access (NOMA) is investigated from an information-theoretic perspective. The relationships among the capacity region of broadcast channels and two rate regions achieved by NOMA and time-division multiple access (TDMA) are illustrated first. Then, the performance of NOMA is evaluated by considering TDMA as the benchmark, where both the sum rate and the individual user rates are used as the criteria. In a wireless downlink scenario with user pairing, the developed analytical results show that NOMA can outperform TDMA not only for the sum rate but also for each user’s individual rate, particularly when the difference between the users’ channels is large. I. INTRODUCTION Because of its superior spectral efficiency, non-orthogonal multiple access (NOMA) has been recognized as a promising technique to be used in the fifth generation (5G) networks [1] – [4]. NOMA utilizes the power domain for achieving multiple access, i.e., different users are served at different power levels. Unlike conventional orthogonal MA, such as time-division multiple access (TDMA), NOMA faces strong co-channel interference between different users, and successive interference cancellation (SIC) is used by the NOMA users with better channel conditions for interference management. The concept of NOMA is essentially a special case of superposition coding developed for broadcast channels (BC). Cover first found the capacity region of a degraded discrete memoryless BC by using superposition coding [5]. Then, the capacity region of the Gaussian BC with single-antenna terminals was established in [6]. Moreover, the capacity region of the multiple-input multiple-output (MIMO) Gaussian BC was found in [7], by applying dirty paper coding (DPC) instead of superposition coding. This paper mainly focuses on the single-antenna scenario.
Specifically, consider a Gaussian BC with a single-antenna transmitter and two single-antenna receivers, where each receiver is corrupted by additive Gaussian noise with unit variance. Denote the ordered channel gains from the transmitter to the two receivers by hw and hb, i.e., |hw| < |hb|. For a given channel pair (hw, hb), the capacity region is given by [6] as C ≜ ⋃_{a1+a2=1, a1, a2 ≥ 0} { (R1, R2) : R1, R2 ≥ 0, R1 ≤ log2(1 + a1x/(1 + a2x)), R2 ≤ log2(1 + a2y) }", "title": "" },

{ "docid": "b6e5051e3f7ed76da2e648954be52b1e", "text": "Botnets become widespread in wired and wireless networks, whereas the relevant research is still in the initial stage. In this paper, a survey of botnets is provided. We first discuss fundamental concepts of botnets, including formation and exploitation, lifecycle, and two major kinds of topologies. Several related attacks, detection, tracing, and countermeasures, are then introduced, followed by recent research work and possible future challenges.", "title": "" },

{ "docid": "06fe4547495c597a0f7052efd78d5a04", "text": "The American cockroach, Periplaneta americana, provides a successful model for the study of legged locomotion. Sensory regulation and the relative importance of sensory feedback vs. central control in animal locomotion are key aspects in our understanding of locomotive behavior. Here we introduce the cockroach model and describe the basic characteristics of the neural generation and control of walking and running in this insect. We further provide a brief overview of some recent studies, including mathematical modeling, which have contributed to our knowledge of sensory control in cockroach locomotion. We focus on two sensory mechanisms and sense organs, those providing information related to loading and unloading of the body and the legs, and leg-movement-related sensory receptors, and present evidence for the instrumental role of these sensory signals in inter-leg locomotion control. We conclude by identifying important open questions and indicate future perspectives.", "title": "" },

{ "docid": "a4b6b6a8ea8fc48d90576f641febf5fb", "text": "The recent introduction of the diagnostic category developmental coordination disorder (DCD) (American Psychiatric Association [APA], 1987, 1994), has generated confusion among researchers and clinicians in many fields, including occupational therapy. Although the diagnostic criteria appear to be similar to those used to define clumsy children, children with developmental dyspraxia, or children with sensory integrative dysfunction, we are left with the question: Are children who receive the diagnosis of DCD the same as those who receive the other diagnoses, a subgroup, or an entirely distinct group of children? This article will examine the theoretical and empirical literature and use the results to support the thesis that these terms are not interchangeable and yet are not being used in the literature in a way that clearly defines each subgroup of children. Clear definitions and characteristic features need to be identified and associated with each term to guide occupational therapy assessment and intervention and clinical research.", "title": "" },

{ "docid": "a25fa0c0889b62b70bf95c16f9966cc4", "text": "We deal with the problem of document representation for the task of measuring semantic relatedness between documents. A document is represented as a compact concept graph where nodes represent concepts extracted from the document through references to entities in a knowledge base such as DBpedia.
Edges represent the semantic and structural relationships among the concepts. Several methods are presented to measure the strength of those relationships. Concepts are weighted through the concept graph using closeness centrality measure which reflects their relevance to the aspects of the document. A novel similarity measure between two concept graphs is presented. The similarity measure first represents concepts as continuous vectors by means of neural networks. Second, the continuous vectors are used to accumulate pairwise similarity between pairs of concepts while considering their assigned weights. We evaluate our method on a standard benchmark for document similarity. Our method outperforms state-of-the-art methods including ESA (Explicit Semantic Annotation) while our concept graphs are much smaller than the concept vectors generated by ESA. Moreover, we show that by combining our concept graph with ESA, we obtain an even further improvement.", "title": "" }, { "docid": "049c1597f063f9c5fcc098cab8885289", "text": "When one captures images in low-light conditions, the images often suffer from low visibility. This poor quality may significantly degrade the performance of many computer vision and multimedia algorithms that are primarily designed for high-quality inputs. In this paper, we propose a very simple and effective method, named as LIME, to enhance low-light images. More concretely, the illumination of each pixel is first estimated individually by finding the maximum value in R, G and B channels. Further, we refine the initial illumination map by imposing a structure prior on it, as the final illumination map. Having the well-constructed illumination map, the enhancement can be achieved accordingly. Experiments on a number of challenging real-world low-light images are present to reveal the efficacy of our LIME and show its superiority over several state-of-the-arts.", "title": "" }, { "docid": "e110425b3d464ac63b3d6db6417c0c82", "text": "Artificial intelligence has seen a number of breakthroughs in recent years, with games often serving as significant milestones. A common feature of games with these successes is that they involve information symmetry among the players, where all players have identical information. This property of perfect information, though, is far more common in games than in real-world problems. Poker is the quintessential game of imperfect information, and it has been a longstanding challenge problem in artificial intelligence. In this paper we introduce DeepStack, a new algorithm for imperfect information settings such as poker. It combines recursive reasoning to handle information asymmetry, decomposition to focus computation on the relevant decision, and a form of intuition about arbitrary poker situations that is automatically learned from selfplay games using deep learning. In a study involving dozens of participants and 44,000 hands of poker, DeepStack becomes the first computer program to beat professional poker players in heads-up no-limit Texas hold’em. Furthermore, we show this approach dramatically reduces worst-case exploitability compared to the abstraction paradigm that has been favored for over a decade.", "title": "" } ]
scidocsrr
439f0badabed4ba2dad0504225a44cff
Privacy-preserving Prediction
[ { "docid": "1d9004c4115c314f49fb7d2f44aaa598", "text": "We show by means of several examples that robust statistical estimators present an excellent starting point for differentially private estimators. Our algorithms use a new paradigm for differentially private mechanisms, which we call Propose-Test-Release (PTR), and for which we give a formal definition and general composition theorems.", "title": "" } ]
[ { "docid": "72b8d077bb96fcb4d14b09f9bb85132f", "text": "Locomotion at low Reynolds number is not possible with cycles of reciprocal motion, an example being the oscillation of a single pair of rigid paddles or legs. Here, I demonstrate the possibility of swimming with two or more pairs of legs. They are assumed to oscillate collectively in a metachronal wave pattern in a minimal model based on slender-body theory for Stokes flow. The model predicts locomotion in the direction of the traveling wave, as commonly observed along the body of free-swimming crustaceans. The displacement of the body and the swimming efficiency depend on the number of legs, the amplitude, and the phase of oscillations. This study shows that paddling legs with distinct orientations and phases offers a simple mechanism for driving flow.", "title": "" }, { "docid": "5a8d4bfb89468d432b7482062a0cbf2d", "text": "While “no one size fits all” is a sound philosophy for system designers to follow, it poses multiple challenges for application developers and system administrators. It can be hard for an application developer to pick one system when the needs of her application match the features of multiple “one size” systems. The choice becomes considerably harder when different components of an application fit the features of different “one size” systems. Considerable manual effort goes into creating and tuning such multi-system applications. An application’s data and workload properties may change over time, often in unpredictable and bursty ways. Consequently, the “one size” system that is best for an application can change over time. Adapting to change can be hard when application development is coupled tightly with any individual “one size” system. In this paper, we make the case for developing a new breed of Database Management Systems that we term DBMS. A DBMS contains multiple “one size” systems internally. An application specifies its execution requirements on aspects like performance, availability, consistency, change, and cost to the DBMS declaratively. For all requests (e.g., queries) made by the application, the DBMS will select the execution plan that meets the application’s requirements best. A unique aspect of the execution plan in a DBMS is that the plan includes the selection of one or more “one size” systems. The plan is then deployed and managed automatically on the selected system(s). If application requirements change beyond what was planned for originally by the DBMS, then the application can be reoptimized and redeployed; usually with no additional effort required from the application developer. The DBMS approach has the potential to address the challenges that application developers and system administrators face from the vast and growing number of “one size” systems today. However, this approach poses many research challenges that we discuss in this paper. We are taking the DBMS approach in a platform, called Cyclops, that we are building for continuous query execution. We will use Cyclops throughout the paper to give concrete illustrations of the benefits and challenges of the DBMS approach. This article is published under a Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits distribution and reproduction in any medium as well allowing derivative works, provided that you attribute the original work to the author(s) and CIDR 2013. 
6 Biennial Conference on Innovative Data Systems Research (CIDR ’13) January 6-9, 2013, Asilomar, California, USA.", "title": "" }, { "docid": "e05fc780d1f3fd4061918e50f5dd26a0", "text": "The need for organizations to operate in changing environments is addressed by proposing an approach that integrates organizational development with information system (IS) development taking into account changes in the application context of the solution. This is referred to as Capability Driven Development (CDD). A meta-model representing business and IS designs consisting of goals, key performance indicators, capabilities, context and capability delivery patterns, is being proposed. The use of the meta-model is validated in three industrial case studies as part of an ongoing collaboration project, whereas one case is presented in the paper. Issues related to the use of the CDD approach, namely, CDD methodology and tool support are also discussed.", "title": "" }, { "docid": "bcd7af5c474d931c0a76b654775396c2", "text": "Task and motion planning subject to Linear Temporal Logic (LTL) specifications in complex, dynamic environments requires efficient exploration of many possible future worlds. Model-free reinforcement learning has proven successful in a number of challenging tasks, but shows poor performance on tasks that require long-term planning. In this work, we integrate Monte Carlo Tree Search with hierarchical neural net policies trained on expressive LTL specifications. We use reinforcement learning to find deep neural networks representing both low-level control policies and task-level “option policies” that achieve high-level goals. Our combined architecture generates safe and responsive motion plans that respect the LTL constraints. We demonstrate our approach in a simulated autonomous driving setting, where a vehicle must drive down a road in traffic, avoid collisions, and navigate an intersection, all while obeying rules of the road.", "title": "" }, { "docid": "0e4334595aeec579e8eb35b0e805282d", "text": "In this paper, we present madmom, an open-source audio processing and music information retrieval (MIR) library written in Python. madmom features a concise, NumPy-compatible, object oriented design with simple calling conventions and sensible default values for all parameters, which facilitates fast prototyping of MIR applications. Prototypes can be seamlessly converted into callable processing pipelines through madmom's concept of Processors, callable objects that run transparently on multiple cores. Processors can also be serialised, saved, and re-run to allow results to be easily reproduced anywhere. Apart from low-level audio processing, madmom puts emphasis on musically meaningful high-level features. Many of these incorporate machine learning techniques and madmom provides a module that implements some methods commonly used in MIR such as hidden Markov models and neural networks. Additionally, madmom comes with several state-of-the-art MIR algorithms for onset detection, beat, downbeat and meter tracking, tempo estimation, and chord recognition. These can easily be incorporated into bigger MIR systems or run as stand-alone programs.", "title": "" }, { "docid": "9377e5de9d7a440aa5e73db10aa630f4", "text": ". Micro-finance programmes targeting women became a major plank of donor poverty alleviation and gender strategies in the 1990s. 
Increasing evidence of the centrality of gender equality to poverty reduction and women’s higher credit repayment rates led to a general consensus on the desirability of targeting women. Not only ‘reaching’ but also ‘empowering’ women became the second official goal of the Micro-credit Summit Campaign.", "title": "" }, { "docid": "ca277d75e1e3631f64006ce962d8131d", "text": "This paper investigates the preand post-release impacts of incarceration on criminal behavior, economic wellbeing and family formation using new data from Harris County, Texas. The research design identifies exogenous variation in the extensive and intensive margins of incarceration by leveraging the random assignment of defendants to courtrooms. I develop a new data-driven estimation procedure to address multidimensional and non-monotonic sentencing patterns observed in the courtrooms in my data. My findings indicate that incarceration generates modest incapacitation effects, which are offset in the long-run by an increased likelihood of defendants reoffending after being released. Additional evidence finds that incarceration reduces post-release employment and wages, increases take-up of food stamps, decreases likelihood of marriage and increases the likelihood of divorce. Based on changes in defendant behavior alone, I estimate that a one-year prison term for marginal defendants conservatively generates $56,200 to $66,800 in social costs, which would require substantial general deterrence in the population to at least be welfare neutral.", "title": "" }, { "docid": "9a86609ecefc5780a49ca638be4de64c", "text": "In this paper, we propose an end-to-end capsule network for pixel level localization of actors and actions present in a video. The localization is performed based on a natural language query through which an actor and action are specified. We propose to encode both the video as well as textual input in the form of capsules, which provide more effective representation in comparison with standard convolution based features. We introduce a novel capsule based attention mechanism for fusion of video and text capsules for text selected video segmentation. The attention mechanism is performed via joint EM routing over video and text capsules for text selected actor and action localization. The existing works on actor-action localization are mainly focused on localization in a single frame instead of the full video. Different from existing works, we propose to perform the localization on all frames of the video. To validate the potential of the proposed network for actor and action localization on all the frames of a video, we extend an existing actor-action dataset (A2D) with annotations for all the frames. The experimental evaluation demonstrates the effectiveness of the proposed capsule network for text selective actor and action localization in videos, and it also improves upon the performance of the existing state-of-the art works on single frame-based localization. Figure 1: Overview of the proposed approach. For a given video, we want to localize the actor and action which are described by an input textual query. 
Capsules are extracted from both the video and the textual query, and a joint EM routing algorithm creates high level capsules, which are further used for localization of selected actors and actions.", "title": "" }, { "docid": "92b8206a1a5db0be7df28ed2e645aafc", "text": "Depthwise separable convolutions reduce the number of parameters and computation used in convolutional operations while increasing representational efficiency. They have been shown to be successful in image classification models, both in obtaining better models than previously possible for a given parameter count (the Xception architecture) and considerably reducing the number of parameters required to perform at a given level (the MobileNets family of architectures). Recently, convolutional sequence-to-sequence networks have been applied to machine translation tasks with good results. In this work, we study how depthwise separable convolutions can be applied to neural machine translation. We introduce a new architecture inspired by Xception and ByteNet, called SliceNet, which enables a significant reduction of the parameter count and amount of computation needed to obtain results like ByteNet, and, with a similar parameter count, achieves better results. In addition to showing that depthwise separable convolutions perform well for machine translation, we investigate the architectural changes that they enable: we observe that thanks to depthwise separability, we can increase the length of convolution windows, removing the need for filter dilation. We also introduce a new \"super-separable\" convolution operation that further reduces the number of parameters and computational cost of the models.", "title": "" }, { "docid": "dff09c99347f18c6a810b6334fe1f25c", "text": "We address the problem of automatically acquiring knowledge of event sequences from text, with the aim of providing a predictive model for use in narrative generation systems. We present a neural network model that simultaneously learns embeddings for words describing events, a function to compose the embeddings into a representation of the event, and a coherence function to predict the strength of association between two events. We introduce a new development of the narrative cloze evaluation task, better suited to a setting where rich information about events is available. We compare models that learn vector-space representations of the events denoted by verbs in chains centering on a single protagonist. We find that recent work on learning vector-space embeddings to capture word meaning can be effectively applied to this task, including simple incorporation of a verb’s arguments in the representation by vector addition. These representations provide a good initialization for learning the richer, compositional model of events with a neural network, vastly outperforming a number of baselines and competitive alternatives.", "title": "" }, { "docid": "1469d8fb125bff2f7e9179b4002d7bf6", "text": "The internet of things (IoT) is the latest web evolution that incorporates billions of devices (such as cameras, sensors, RFIDs, smart phones, and wearables), that are owned by different organizations and people who are deploying and using them for their own purposes. Federations of such IoT devices (we refer to as IoT things) can deliver the information needed to solve internet-scale problems that have been too difficult to obtain and harness before. 
To realize this unprecedented IoT potential, we need to develop IoT solutions for discovering the IoT devices each application needs, collecting and integrating their data, and distilling the high value information each application needs. We also need to provide solutions that permit doing these tasks in real-time, on the move, in the cloud, and securely. In this paper we present an overview of a collection of IoT solutions (which we have developed in partnerships with other prominent IoT innovators and refer to them collectively as IoT platform) for addressing these technical challenges and help springboard IoT to its potential. We also describe a variety of IoT applications that have utilized the proposed IoT platform to provide smart IoT services in the areas of smart farming, smart grids, and smart manufacturing. Finally, we discuss future research and a vision of the next generation IoT infrastructure.", "title": "" },

{ "docid": "57a333a88a5c1f076fd096ec4cde4cba", "text": "2.1 HISTORY OF BIOTECHNOLOGY; 2.2 MODERN BIOTECHNOLOGY; 2.3 THE GM DEBATE; 2.4 APPLYING THE PRECAUTIONARY APPROACH TO GMOS; 2.5 RISK ASSESSMENT ISSUES; 2.6 LEGAL CONTEXT", "title": "" },

{ "docid": "0ff702ed9fed0393e16e120f8a704530", "text": "Location estimation is significant in mobile and ubiquitous computing systems. The complexity and smaller scale of the indoor environment impose a great impact on location estimation. The key of location estimation lies in the representation and fusion of uncertain information from multiple sources. The improvement of location estimation is a complicated and comprehensive issue. A lot of research has been done to address this issue. However, existing research typically focuses on certain aspects of the problem and specific methods. This paper reviews mainstream schemes on improving indoor location estimation from multiple levels and perspectives by combining existing works and our own working experiences. Initially, we analyze the error sources of common indoor localization techniques and provide a multilayered conceptual framework of improvement schemes for location estimation. This is followed by a discussion of probabilistic methods for location estimation, including Bayes filters, Kalman filters, extended Kalman filters, sigma-point Kalman filters, particle filters, and hidden Markov models. Then, we investigate the hybrid localization methods, including multimodal fingerprinting, triangulation fusing multiple measurements, combination of wireless positioning with pedestrian dead reckoning (PDR), and cooperative localization. Next, we focus on the location determination approaches that fuse spatial contexts, namely, map matching, landmark fusion, and spatial model-aided methods.
Finally, we present the directions for future research.", "title": "" }, { "docid": "a74aef75f5b1d5bc44da2f6d2c9284cf", "text": "In this paper, we define irregular bipolar fuzzy graphs and its various classifications. Size of regular bipolar fuzzy graphs is derived. The relation between highly and neighbourly irregular bipolar fuzzy graphs are established. Some basic theorems related to the stated graphs have also been presented.", "title": "" }, { "docid": "2a1d77e0c5fe71c3c5eab995828ef113", "text": "Local modular control (LMC) is an approach to the supervisory control theory (SCT) of discrete-event systems that exploits the modularity of plant and specifications. Recently, distinguishers and approximations have been associated with SCT to simplify modeling and reduce synthesis effort. This paper shows how advantages from LMC, distinguishers, and approximations can be combined. Sufficient conditions are presented to guarantee that local supervisors computed by our approach lead to the same global closed-loop behavior as the solution obtained with the original LMC, in which the modeling is entirely handled without distinguishers. A further contribution presents a modular way to design distinguishers and a straightforward way to construct approximations to be used in local synthesis. An example of manufacturing system illustrates our approach. Note to Practitioners—Distinguishers and approximations are alternatives to simplify modeling and reduce synthesis cost in SCT, grounded on the idea of event-refinements. However, this approach may entangle the modular structure of a plant, so that LMC does not keep the same efficiency. This paper shows how distinguishers and approximations can be locally combined such that synthesis cost is reduced and LMC advantages are preserved.", "title": "" }, { "docid": "a41928a11259124e6e37b026450cb287", "text": "A compact planar monopole UWB antenna with quadruple band-notched characteristics is analyzed and presented. By introducing a C-shaped slot, nested C-shaped slot in the radiating patch and U-shaped slot in the feed line, quadruple band-notched characteristics are achieved at frequencies of 2.5, 3.7, 5.8 and 8.2 GHz. The proposed antenna has been fabricated and tested. Measured impedance bandwidth of the antenna is 2.35–12 GHz, which covers Bluetooth and UWB band, for VSWR < 2 and also has four stop bands of 2.44–2.77, 3.42–3.97, 5.45–5.98 and 8–8.68GHz, for VSWR > 2, for rejecting 2.5/3.5GHz WiMAX, WLAN and ITU 8 GHz band signals, respectively. The average gain of this antenna is 4.30 dBi with a variation of ±1.8 dBi over the whole impedance bandwidth. Significant gain reduction over the rejected band is also observed. The antenna shows good omnidirectional radiation patterns in the passband with a compact size of 40 mm× 34mm.", "title": "" }, { "docid": "0f6aaec52e4f7f299a711296992b5dba", "text": "This paper presents a simple and efficient method for online signature verification. The technique is based on a feature set comprising of several histograms that can be computed efficiently given a raw data sequence of an online signature. The features which are represented by a fixed-length vector can not be used to reconstruct the original signature, thereby providing privacy to the user's biometric trait in case the stored template is compromised. To test the verification performance of the proposed technique, several experiments were conducted on the well known MCYT-100 and SUSIG datasets including both skilled forgeries and random forgeries. 
Experimental results demonstrate that the performance of the proposed technique is comparable to state-of-art algorithms despite its simplicity and efficiency.", "title": "" }, { "docid": "504a2509c66a9a69239a8055ea64a1d4", "text": "Business intelligence (BI) has become the top priority for many organizations who have implemented BI solutions to improve their decision-making process. Yet, not all BI initiatives have fulfilled the expectations. We suggest that one of the reasons for failure is the lack of an understanding of the critical factors that define the success of BI applications, and that BI capabilities are among those critical factors. We present findings from a survey of 116 BI professionals that provides a snapshot of user satisfaction with various BI capabilities and the relationship between these capabilities and user satisfaction with BI. Our findings suggest that users are generally satisfied with BI overall and with BI capabilities. However, the BI capabilities with which they are most satisfied are not necessarily the ones that are the most strongly related to BI success. Of the five capabilities that were the most highly correlated with overall satisfaction with BI, only one was specifically related to data. Another interesting finding implies that, although users are not highly satisfied with the level of interaction of BI with other systems, this capability is highly correlated with BI success. Implications of these findings for the successful use and management of BI are discussed. Copyright © 2012 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "e992ffd4ebbf9d096de092caf476e37d", "text": "If self-regulation conforms to an energy or strength model, then self-control should be impaired by prior exertion. In Study 1, trying to regulate one's emotional response to an upsetting movie was followed by a decrease in physical stamina. In Study 2, suppressing forbidden thoughts led to a subsequent tendency to give up quickly on unsolvable anagrams. In Study 3, suppressing thoughts impaired subsequent efforts to control the expression of amusement and enjoyment. In Study 4, autobiographical accounts of successful versus failed emotional control linked prior regulatory demands and fatigue to self-regulatory failure. A strength model of self-regulation fits the data better than activation, priming, skill, or constant capacity models of self-regulation.", "title": "" }, { "docid": "eebeb59c737839e82ecc20a748b12c6b", "text": "We present SWARM, a wearable affective technology designed to help a user to reflect on their own emotional state, modify their affect, and interpret the emotional states of others. SWARM aims for a universal design (inclusive of people with various disabilities), with a focus on modular actuation components to accommodate users' sensory capabilities and preferences, and a scarf form-factor meant to reduce the stigma of accessible technologies through a fashionable embodiment. Using an iterative, user-centered approach, we present SWARM's design. Additionally, we contribute findings for communicating emotions through technology actuations, wearable design techniques (including a modular soft circuit design technique that fuses conductive fabric with actuation components), and universal design considerations for wearable technology.", "title": "" } ]
scidocsrr
10342e59bdf53b830eaaf252d7642fff
Shoulder actuation mechanisms for arm rehabilitation exoskeletons
[ { "docid": "9fa46e75dc28961fe3ce6fadd179cff7", "text": "Task-oriented repetitive movements can improve motor recovery in patients with neurological or orthopaedic lesions. The application of robotics can serve to assist, enhance, evaluate, and document neurological and orthopaedic rehabilitation. ARMin II is the second prototype of a robot for arm therapy applicable to the training of activities of daily living. ARMin II has a semi-exoskeletal structure with seven active degrees of freedom (two of them coupled), five adjustable segments to fit in with different patient sizes, and is equipped with position and force sensors. The mechanical structure, the actuators and the sensors of the robot are optimized for patient-cooperative control strategies based on impedance and admittance architectures. This paper describes the mechanical structure and kinematics of ARMin II.", "title": "" }, { "docid": "674822f977d8cb4f0ad899307594fa19", "text": "This paper introduces a novel kinematic design paradigm for ergonomic human machine interaction. Goals for optimal design are formulated generically and applied to the mechanical design of an upper-arm exoskeleton. A nine degree-of-freedom (DOF) model of the human arm kinematics is presented and used to develop, test, and optimize the kinematic structure of an human arm interfacing exoskeleton. The resulting device can interact with an unprecedented portion of the natural limb workspace, including motions in the shoulder-girdle, shoulder, elbow, and the wrist. The exoskeleton does not require alignment to the human joint axes, yet is able to actuate each DOF of our redundant limb unambiguously and without reaching into singularities. The device is comfortable to wear and does not create residual forces if misalignments exist. Implemented in a rehabilitation robot, the design features of the exoskeleton could enable longer lasting training sessions, training of fully natural tasks such as activities of daily living and shorter dress-on and dress-off times. Results from inter-subject experiments with a prototype are presented, that verify usability over the entire workspace of the human arm, including shoulder and shoulder girdle", "title": "" } ]
[ { "docid": "6eabed5f23024c52ce7d7f5b7ca99f15", "text": "As the size of Deep Neural Networks (DNNs) continues to grow to increase accuracy and solve more complex problems, their energy footprint also scales. Weight pruning reduces DNN model size and the computation by removing redundant weights. However, we implemented weight pruning for several popular networks on a variety of hardware platforms and observed surprising results. For many networks, the network sparsity caused by weight pruning will actually hurt the overall performance despite large reductions in the model size and required multiply-accumulate operations. Also, encoding the sparse format of pruned networks incurs additional storage space overhead. To overcome these challenges, we propose Scalpel that customizes DNN pruning to the underlying hardware by matching the pruned network structure to the data-parallel hardware organization. Scalpel consists of two techniques: SIMD-aware weight pruning and node pruning. For low-parallelism hardware (e.g., microcontroller), SIMD-aware weight pruning maintains weights in aligned fixed-size groups to fully utilize the SIMD units. For high-parallelism hardware (e.g., GPU), node pruning removes redundant nodes, not redundant weights, thereby reducing computation without sacrificing the dense matrix format. For hardware with moderate parallelism (e.g., desktop CPU), SIMD-aware weight pruning and node pruning are synergistically applied together. Across the microcontroller, CPU and GPU, Scalpel achieves mean speedups of 3.54x, 2.61x, and 1.25x while reducing the model sizes by 88%, 82%, and 53%. In comparison, traditional weight pruning achieves mean speedups of 1.90x, 1.06x, 0.41x across the three platforms.", "title": "" }, { "docid": "cb908215944ce8dbd6934aaa8b024abc", "text": "Advances in computing power, natural language processing, and digitization of text now make it possible to study a culture’s evolution through its texts using a ‘big data’ lens. Our ability to communicate relies in part upon a shared emotional experience, with stories often following distinct emotional trajectories and forming patterns that are meaningful to us. Here, by classifying the emotional arcs for a filtered subset of 1,327 stories from Project Gutenberg’s fiction collection, we find a set of six core emotional arcs which form the essential building blocks of complex emotional trajectories. We strengthen our findings by separately applying matrix decomposition, supervised learning, and unsupervised learning. For each of these six core emotional arcs, we examine the closest characteristic stories in publication today and find that particular emotional arcs enjoy greater success, as measured by downloads.", "title": "" }, { "docid": "efa566cdd4f5fa3cb12a775126377cb5", "text": "This paper deals with the electromagnetic emissions of integrated circuits. In particular, four measurement techniques to evaluate integrated circuit conducted emissions are described in detail and they are employed for the measurement of the power supply conducted emission delivered by a simple integrated circuit composed of six synchronous switching drivers. Experimental results obtained by employing such measurement methods are presented and the influence of each test setup on the measured quantities is discussed.", "title": "" }, { "docid": "8016e80e506dcbae5c85fdabf1304719", "text": "We introduce globally normalized convolutional neural networks for joint entity classification and relation extraction. 
In particular, we propose a way to utilize a linear-chain conditional random field output layer for predicting entity types and relations between entities at the same time. Our experiments show that global normalization outperforms a locally normalized softmax layer on a benchmark dataset.", "title": "" }, { "docid": "e7d955c48e5bdd86ae21a61fcd130ae2", "text": "We present Deep Graph Infomax (DGI), a general approach for learning node representations within graph-structured data in an unsupervised manner. DGI relies on maximizing mutual information between patch representations and corresponding high-level summaries of graphs—both derived using established graph convolutional network architectures. The learnt patch representations summarize subgraphs centered around nodes of interest, and can thus be reused for downstream node-wise learning tasks. In contrast to most prior approaches to unsupervised learning with GCNs, DGI does not rely on random walk objectives, and is readily applicable to both transductive and inductive learning setups. We demonstrate competitive performance on a variety of node classification benchmarks, which at times even exceeds the performance of supervised learning.", "title": "" }, { "docid": "7100fea85ba7c88f0281f11e7ddc04a9", "text": "This paper reports the spoof surface plasmons polaritons (SSPPs) based multi-band bandpass filter. An efficient back to back transition from Quasi TEM mode of microstrip line to SSPP mode has been designed by etching a gradient corrugated structure on the metal strip; while keeping ground plane unaltered. SSPP wave is found to be highly confined within the teeth part of corrugation. Complementary split ring resonator has been etched in the ground plane to obtain multiband bandpass filter response. Excellent conversion from QTEM mode to SSPP mode has been observed.", "title": "" }, { "docid": "113373d6a9936e192e5c3ad016146777", "text": "This paper examines published data to develop a model for detecting factors associated with false financial statements (FFS). Most false financial statements in Greece can be identified on the basis of the quantity and content of the qualifications in the reports filed by the auditors on the accounts. A sample of a total of 76 firms includes 38 with FFS and 38 non-FFS. Ten financial variables are selected for examination as potential predictors of FFS. Univariate and multivariate statistical techniques such as logistic regression are used to develop a model to identify factors associated with FFS. The model is accurate in classifying the total sample correctly with accuracy rates exceeding 84 per cent. The results therefore demonstrate that the models function effectively in detecting FFS and could be of assistance to auditors, both internal and external, to taxation and other state authorities and to the banking system. the empirical results and discussion obtained using univariate tests and multivariate logistic regression analysis. Finally, in the fifth section come the concluding remarks.", "title": "" }, { "docid": "8b701007a5c7ffd70ced2f244a2b6ee9", "text": "In-depth interviews and focus group discussions were conducted to inform the development of an instrument to measure the health-related quality of life of children living with HIV.
The QOL-CHAI instrument consists of four generic core scales of the \"Pediatric Quality of Life Inventory\" and two HIV-targeted scales-\"symptoms\" and \"discrimination.\" A piloting exercise involving groups of children living with HIV and HIV-negative children born to HIV-infected parents provided evidence for the acceptable psychometric properties and usability of the instrument. It is expected that the QOL-CHAI can serve well as a brief, standardized, and culturally appropriate instrument for assessing health-related quality of life of Indian children living with HIV.", "title": "" }, { "docid": "41b16e29baef6f27a03c774657811d5e", "text": "Pharmacokinetics is a fundamental scientific discipline that underpins applied therapeutics. Patients need to be prescribed appropriate medicines for a clinical condition. The medicine is chosen on the basis of an evidencebased approach to clinical practice and assured to be compatible with any other medicines or alternative therapies the patient may be taking. The design of a dosage regimen is dependent on a basic understanding of the drug use process (DUP). When faced with a patient who shows specific clinical signs and symptoms, pharmacists must always ask a fundamental question: ‘Is this patient suffering from a drug-related problem?’ Once this issue is evaluated and a clinical diagnosis is available, the pharmacist can apply the DUP to ensure that the patient is prescribed an appropriate medication regimen, that the patient understands the therapy prescribed, and that an agreed concordance plan is achieved. Pharmacists using the DUP consider:", "title": "" }, { "docid": "04b14e2795afc0faaa376bc17ead0aaf", "text": "In this paper, an integrated MEMS gyroscope array method composed of two levels of optimal filtering was designed to improve the accuracy of gyroscopes. In the firstlevel filtering, several identical gyroscopes were combined through Kalman filtering into a single effective device, whose performance could surpass that of any individual sensor. The key of the performance improving lies in the optimal estimation of the random noise sources such as rate random walk and angular random walk for compensating the measurement values. Especially, the cross correlation between the noises from different gyroscopes of the same type was used to establish the system noise covariance matrix and the measurement noise covariance matrix for Kalman filtering to improve the performance further. Secondly, an integrated Kalman filter with six states was designed to further improve the accuracy with the aid of external sensors such as magnetometers and accelerometers in attitude determination. Experiments showed that three gyroscopes with a bias drift of 35 degree per hour could be combined into a virtual gyroscope with a drift of 1.07 degree per hour through the first-level filter, and the bias drift was reduced to 0.53 degree per hour after the second-level filtering. It proved that the proposed integrated MEMS gyroscope array is capable of improving the accuracy of the MEMS gyroscopes, which provides the possibility of using these low cost MEMS sensors in high-accuracy application areas.", "title": "" }, { "docid": "babac76166921edd1f29a2818380cc5c", "text": "Content-Centric Networking (CCN) is an emerging (inter-)networking architecture with the goal of becoming an alternative to the IP-based Internet. 
To be considered a viable candidate, CCN must at least have parity with existing solutions for confidential and anonymous communication, e.g., TLS, tcpcrypt, and Tor. ANDa̅NA (Anonymous Named Data Networking Application) was the first proposed solution that addressed the lack of anonymous communication in Named Data Networking (NDN)-a variant of CCN. However, its design and implementation led to performance issues that hinder practical use. In this paper we introduce AC3N: Anonymous Communication for Content-Centric Networking. AC3N is an evolution of the ANDa̅NA system that supports high-throughput and low-latency anonymous content retrieval. We discuss the design and initial performance results of this new system.", "title": "" }, { "docid": "519ca18e1450581eb3a7387568dce7cf", "text": "This paper illustrates the design of a process compensated bias for asynchronous CML dividers for a low power, high performance LO divide chain operating at 4Ghz of input RF frequency. The divider chain provides division by 4,8,12,16,20, and 24. It provides a differential CML level signal for the in-loop modulated transmitter, and 25% duty cycle non-overlapping rail to rail waveforms for I/Q receiver for driving passive mixer. Asynchronous dividers have been used to realize divide by 3 and 5 with 50% duty cycle, quadrature outputs. All the CML dividers use a process compensated bias to compensate for load resistor variation and tail current variation using dual analog feedback loops. Fabricated in 180nm CMOS technology, the divider chain operates over industrial temperature range (−40 to 90°C), and provides outputs in 138–960Mhz range, consuming 2.2mA from 1.8V regulated supply at the highest output frequency.", "title": "" }, { "docid": "3e0f74c880165b5147864dfaa6a75c11", "text": "Traditional hollow metallic waveguide manufacturing techniques are readily capable of producing components with high-precision geometric tolerances, yet generally lack the ability to customize individual parts on demand or to deliver finished components with low lead times. This paper proposes a Rapid-Prototyping (RP) method for relatively low-loss millimeter-wave hollow waveguides produced using consumer-grade stereolithographic (SLA) Additive Manufacturing (AM) technology, in conjunction with an electroless metallization process optimized for acrylate-based photopolymer substrates. To demonstrate the capabilities of this particular AM process, waveguide prototypes are fabricated for the W- and D-bands. The measured insertion loss at W-band is between 0.12 dB/in to 0.25 dB/in, corresponding to a mean value of 0.16 dB/in. To our knowledge, this is the lowest insertion loss figure presented to date, when compared to other W-Band AM waveguide designs reported in the literature. Printed D-band waveguide prototypes exhibit a transducer loss of 0.26 dB/in to 1.01 dB/in, with a corresponding mean value of 0.65 dB/in, which is similar performance to a commercial metal waveguide.", "title": "" }, { "docid": "456349752a4098bd450dcd3d6c1a7e3b", "text": "In October 2003, a group of multidisciplinary researchers convened at the Symposium on Next Generation Automatic Speech Recognition (ASR) to consider new directions in building ASR systems (Lee 2003).
Although the workshop's goal of \" integrating multidisciplinary sources of knowledge, from acoustics, speech, linguistics, cognitive science, signal processing, human computer interaction, and computer science, into every stage of ASR component and system design \" is an important goal, there remains a divide among these communities that can only be addressed through the educational process. The book Introducing Speech and Language Processing by John Coleman represents a bold effort to educate students in speech science about some of the important methods used in speech and natural language processing (NLP). This book represents an important first step for forging effective collaborations with the speech and language processing communities. Coleman states in chapter 1 of his book that \" This is a first, basic, elementary and short textbook in speech and natural language processing for beginners with little or no previous experience of computer programming \" (page 2). Coleman targets the book at students in a variety of disciplines, including arts, humanities, linguistics, psychology, and speech science, as well as early science and engineering students who want a glimpse into natural language and speech processing. However, since it assumes prior knowledge of basic linguistics, the text is likely to be less accessible to traditional science and engineering students. Coleman's motivation for writing this book is that the currently available textbooks in NLP and speech require knowledge that students from more of a humanities background would not have (e.g., programming, signal processing). The author also astutely points out that there tends to be a divide between the areas of signal processing and computational linguistics, although in recent years with ubiquity of statistical modeling and machine learning techniques in both areas, this divide is becoming much smaller. The author's motivation for this book is excellent: \" a refusal to let the old sociological divide between arts and sciences stand in the way of a new wave of spoken language researchers with a foot in both camps \" (page 4). The textbook covers a variety of techniques in speech and natural language processing , along with computer programs implementing many of them in either C or Prolog, and it capitalizes on Coleman's insights from courses offered to graduate linguistics students. It comes with a companion CD …", "title": "" }, { "docid": "8c662416784ddaf8dae387926ba0b17c", "text": "Autoimmune reactions to vaccinations may rarely be induced in predisposed individuals by molecular mimicry or bystander activation mechanisms. Autoimmune reactions reliably considered vaccine-associated, include Guillain-Barré syndrome after 1976 swine influenza vaccine, immune thrombocytopenic purpura after measles/mumps/rubella vaccine, and myopericarditis after smallpox vaccination, whereas the suspected association between hepatitis B vaccine and multiple sclerosis has not been further confirmed, even though it has been recently reconsidered, and the one between childhood immunization and type 1 diabetes seems by now to be definitively gone down. Larger epidemiological studies are needed to obtain more reliable data in most suggested associations.", "title": "" }, { "docid": "a747dabd262bfa2442c6152844c7e6a1", "text": "We first give a comprehensive taxonomy of rouge access points (APs), which includes a new class of rouge APs never addressed in the literature before. 
Then, we propose an efficient rogue AP protection system termed as RAP for commodity Wi-Fi networks. In RAP, novel techniques are introduced to detect rouge APs and to improve network resilience. Our system has the following nice properties: i) it requires neither specialized hardware nor modification to existing standards; ii) the proposed mechanism can be integrated with an AP in a plugin manner; iii) it provides a cost-effective security enhancement to Wi-Fi networks by incorporating free but mature software tools; iv) it can protect the network from adversaries capable of using customized equipment and violating the IEEE 802.11 standard.", "title": "" }, { "docid": "f92e4ca37d29c1f564f155a783b1606c", "text": "If we are to believe the technology hype cycle, we are entering a new era of Cognitive Computing, enabled by advances in natural language processing, machine learning, and more broadly artificial intelligence. These advances, combined with evolutionary progress in areas such as knowledge representation, automated planning, user experience technologies, software-as-a-service and crowdsourcing, have the potential to transform many industries. In this paper, we discuss transformations of BPM that advances in the Cognitive Computing will bring. We focus on three of the most signficant aspects of this transformation, namely: (a) Cognitive Computing will enable ”knowledge acquisition at scale”, which will lead to a transformation in Knowledge-intensive Processes (KiP’s); (b) We envision a new process meta-model will emerge that is centered around a “Plan-Act-Learn” cycle; and (c) Cognitive Computing can enable learning about processes from implicit descriptions (at both designand run-time), opening opportunities for new levels of automation and business process support, for both traditional business processes and KiP’s. We use the term cognitive BPM to refer to a new BPM paradigm encompassing all aspects of BPM that are impacted and enabled by Cognitive Computing. We argue that a fundamental understanding of cognitive BPM requires a new research framing of the business process ecosystem. The paper presents a conceptual framework for cognitive BPM, a brief survey of state of the art in emerging areas of Cognitive BPM, and discussion of key directions for further research.", "title": "" }, { "docid": "8d7467bf868d3a75821aa8f4f7513312", "text": "Search on PCs has become less efficient than searching the Web due to the increasing amount of stored data. In this paper we present an innovative Desktop search solution, which relies on extracted metadata, context information as well as additional background information for improving Desktop search results. We also present a practical application of this approach — the extensible Beagle toolbox. To prove the validity of our approach, we conducted a series of experiments. By comparing our results against the ones of a regular Desktop search solution — Beagle — we show an improved quality in search and overall performance.", "title": "" }, { "docid": "3eb50289c3b28d2ce88052199d40bf8d", "text": "Transportation Problem is an important aspect which has been widely studied in Operations Research domain. It has been studied to simulate different real life problems. In particular, application of this Problem in NPHard Problems has a remarkable significance. In this Paper, we present a comparative study of Transportation Problem through Probabilistic and Fuzzy Uncertainties. 
Fuzzy Logic is a computational paradigm that generalizes classical two-valued logic for reasoning under uncertainty. In order to achieve this, the notation of membership in a set needs to become a matter of degree. By doing this we accomplish two things viz., (i) ease of describing human knowledge involving vague concepts and (ii) enhanced ability to develop cost-effective solution to real-world problem. The multi-valued nature of Fuzzy Sets allows handling uncertain and vague information. It is a model-less approach and a clever disguise of Probability Theory. We give comparative simulation results of both approaches and discuss the Computational Complexity. To the best of our knowledge, this is the first work on comparative study of Transportation Problem using Probabilistic and Fuzzy Uncertainties.", "title": "" } ]
scidocsrr
dbf1f495e74ac0d43e0ea7492f21240d
Knowledge-based identification of sleep stages based on two forehead electroencephalogram channels
[ { "docid": "eee5ffff364575afad1dcebbf169777b", "text": "In this paper, we proposed the multiclass support vector machine (SVM) with the error-correcting output codes for the multiclass electroencephalogram (EEG) signals classification problem. The probabilistic neural network (PNN) and multilayer perceptron neural network were also tested and benchmarked for their performance on the classification of the EEG signals. Decision making was performed in two stages: feature extraction by computing the wavelet coefficients and the Lyapunov exponents and classification using the classifiers trained on the extracted features. The purpose was to determine an optimum classification scheme for this problem and also to infer clues about the extracted features. Our research demonstrated that the wavelet coefficients and the Lyapunov exponents are the features which well represent the EEG signals and the multiclass SVM and PNN trained on these features achieved high classification accuracies", "title": "" } ]
[ { "docid": "20dbfb8e026c21c37b39780dce4e9a49", "text": "Homophily — our tendency to surround ourselves with others who share our perspectives and opinions about the world — is both a part of human nature and an organizing principle underpinning many of our digital social networks. However, when it comes to politics or culture, homophily can amplify tribal mindsets and produce “echo chambers” that degrade the quality, safety, and diversity of discourse online. While several studies have empirically proven this point, few have explored how making users aware of the extent and nature of their political echo chambers influences their subsequent beliefs and actions. In this paper, we introduce Social Mirror, a social network visualization tool that enables a sample of Twitter users to explore the politically-active parts of their social network. We use Social Mirror to recruit Twitter users with a prior history of political discourse to a randomized experiment where we evaluate the effects of different treatments on participants’ i) beliefs about their network connections, ii) the political diversity of who they choose to follow, and iii) the political alignment of the URLs they choose to share. While we see no effects on average political alignment of shared URLs, we find that recommending accounts of the opposite political ideology to follow reduces participants’ beliefs in the political homogeneity of their network connections but still enhances their connection diversity one week after treatment. Conversely, participants who enhance their belief in the political homogeneity of their Twitter connections have less diverse network connections 2-3 weeks after treatment. We explore the implications of these disconnects between beliefs and actions on future efforts to promote healthier exchanges in our digital public spheres.", "title": "" }, { "docid": "d5e573802d6519a8da402f2e66064372", "text": "Targeted cyberattacks play an increasingly significant role in disrupting the online social and economic model, not to mention the threat they pose to nation-states. A variety of components and techniques come together to bring about such attacks.", "title": "" }, { "docid": "3171587b5b4554d151694f41206bcb4e", "text": "Embedded systems are ubiquitous in society and can contain information that could be used in criminal cases for example in a serious road traffic accident where the car management systems could provide vital forensic information concerning the engine speed etc. A critical review of a number of methods and procedures for the analysis of embedded systems were compared against a ‘standard’ methodology for use in a Forensic Computing Investigation. A Unified Forensic Methodology (UFM) has been developed that is forensically sound and capable of dealing with the analysis of a wide variety of Embedded Systems.", "title": "" }, { "docid": "c918f662a60b0ccb36159cf2f0bd051e", "text": "Graph embedding is an effective method to represent graph data in a low dimensional space for graph analytics. Most existing embedding algorithms typically focus on preserving the topological structure or minimizing the reconstruction errors of graph data, but they have mostly ignored the data distribution of the latent codes from the graphs, which often results in inferior embedding in real-world graph data. In this paper, we propose a novel adversarial graph embedding framework for graph data.
The framework encodes the topological structure and node content in a graph to a compact representation, on which a decoder is trained to reconstruct the graph structure. Furthermore, the latent representation is enforced to match a prior distribution via an adversarial training scheme. To learn a robust embedding, two variants of adversarial approaches, adversarially regularized graph autoencoder (ARGA) and adversarially regularized variational graph autoencoder (ARVGA), are developed. Experimental studies on real-world graphs validate our design and demonstrate that our algorithms outperform baselines by a wide margin in link prediction, graph clustering, and graph visualization tasks.", "title": "" }, { "docid": "2e32606df9b1750b9abb03d450051d16", "text": "This research investigates two major aspects of homeschooling. Factors determining parental motivations to homeschool and the determinants of the student achievement of home-educated children are identified. Original survey data from an organized group of homeschoolers is analyzed. Regression models are employed to predict parents’ motivations and their students’ standardized test achievement. Four sets of homeschooling motivations are identified. Academic and pedagogical concerns are most important, and it appears that the religious base of the movement is subsiding. Several major demographic variables have no impact upon parental motivations, indicating that this is a diverse group. Parents’ educational attainment and political identification are consistent predictors of their students’ achievement. Race and class—the two major divides in public education—are not significant determinants of standardized test achievement, suggesting that homeschooling is efficacious. It is concluded that homeschoolers are a heterogeneous population with varying and overlapping motivations.", "title": "" }, { "docid": "75368ca96b1b22d49b0601237031368d", "text": "We propose to move from Open Information Extraction (OIE) ahead to Open Knowledge Representation (OKR), aiming to represent information conveyed jointly in a set of texts in an open text-based manner. We do so by consolidating OIE extractions using entity and predicate coreference, while modeling information containment between coreferring elements via lexical entailment. We suggest that generating OKR structures can be a useful step in the NLP pipeline, to give semantic applications an easy handle on consolidated information across multiple texts.", "title": "" }, { "docid": "c7d3e5201926bc6c3932d5e0555e2f57", "text": "The application of theory to practice is multifaceted. It requires a nursing theory that is compatible with an institution's values and mission and that is easily understood and simple enough to guide practice. Comfort Theory was chosen because of its universality. The authors describe how Kolcaba's Comfort Theory was used by a not-for-profit New England hospital to provide a coherent and consistent pattern for enhancing care and promoting professional practice, as well as to serve as a unifying framework for applying for Magnet Recognition Status.", "title": "" }, { "docid": "e771cc8c82e3b67fb5adac7bc3ca2932", "text": "The authors propose that the costs and benefits of directed forgetting in the list method result from an internal context change that occurs between the presentations of 2 lists in response to a \"forget\" instruction.
In Experiment 1 of this study, costs and benefits akin to those found in directed forgetting were obtained in the absence of a forget instruction by a direct manipulation of cognitive context change. Experiment 2 of this study replicated those findings using a different cognitive context manipulation and investigated the effects of context reinstatement at the time of recall. Context reinstatement reduced the memorial costs and benefits of context change in the condition where context had been manipulated and in the standard forget condition. The results are consistent with a context change account of directed forgetting.", "title": "" }, { "docid": "9c82588d5e82df20e2156ca1bda91f09", "text": "Lean and simulation analysis are driven by the same objective, how to better design and improve processes making the companies more competitive. The adoption of lean has been widely spread in companies from public to private sectors and simulation is nowadays becoming more and more popular. Several authors have pointed out the benefits of combining simulation and lean, however, they are still rarely used together in practice. Optimization as an additional technique to this combination is even a more powerful approach especially when designing and improving complex processes with multiple conflicting objectives. This paper presents the mutual benefits that are gained when combining lean, simulation and optimization and how they overcome each other's limitations. A framework including the three concepts, some of the barriers for its implementation and a real-world industrial example are also described.", "title": "" }, { "docid": "327d071f71bf39bcd171f85746047a02", "text": "Advances in information and communication technologies have led to the emergence of Internet of Things (IoT). In the healthcare environment, the use of IoT technologies brings convenience to physicians and patients as they can be applied to various medical areas (such as constant real-time monitoring, patient information management, medical emergency management, blood information management, and health management). The radio-frequency identification (RFID) technology is one of the core technologies of IoT deployments in the healthcare environment. To satisfy the various security requirements of RFID technology in IoT, many RFID authentication schemes have been proposed in the past decade. Recently, elliptic curve cryptography (ECC)-based RFID authentication schemes have attracted a lot of attention and have been used in the healthcare environment. In this paper, we discuss the security requirements of RFID authentication schemes, and in particular, we present a review of ECC-based RFID authentication schemes in terms of performance and security. Although most of them cannot satisfy all security requirements and have satisfactory performance, we found that there are three recently proposed ECC-based authentication schemes suitable for the healthcare environment in terms of their performance and security.", "title": "" }, { "docid": "a180735616ded05900cda77be19fc787", "text": "Economically sustainable software systems must be able to cost-effectively evolve in response to changes in their environment, their usage profile, and business demands. However, in many software development projects, sustainability is treated as an afterthought, as developers are driven by time-to-market pressure and are often not educated to apply sustainability-improving techniques. 
While software engineering research and practice has suggested a large amount of such techniques, a holistic overview is missing and the effectiveness of individual techniques is often not sufficiently validated. On this behalf we created a catalog of “software sustainability guidelines” to support project managers, software architects, and developers during system design, development, operation, and maintenance. This paper describes how we derived these guidelines and how we applied selected techniques from them in two industrial case studies. We report several lessons learned about sustainable software development.", "title": "" }, { "docid": "932b189b21703a4c50399f27395f37a6", "text": "An ultra-low power wake-up receiver for body channel communication (BCC) is implemented in 0.13 μm CMOS process. The proposed wake-up receiver uses the injection-locking ring-oscillator (ILRO) to replace the RF amplifier with low power consumption. Through the ILRO, the frequency modulated input signal is converted to the full swing rectangular signal which is directly demodulated by the following low power PLL based FSK demodulator. In addition, the relaxed sensitivity and selectivity requirement by the good channel quality of the BCC reduces the power consumption of the receiver. As a result, the proposed wake-up receiver achieves a sensitivity of -55.2 dbm at a data rate of 200 kbps while consuming only 39 μW from the 0.7 V supply.", "title": "" }, { "docid": "e18a8e3622ae85763c729bd2844ce14c", "text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.05.028 ⇑ Corresponding author. E-mail address: dgil@dtic.ua.es (D. Gil). 1 These authors equally contributed to this work. Fertility rates have dramatically decreased in the last two decades, especially in men. It has been described that environmental factors, as well as life habits, may affect semen quality. Artificial intelligence techniques are now an emerging methodology as decision support systems in medicine. In this paper we compare three artificial intelligence techniques, decision trees, Multilayer Perceptron and Support Vector Machines, in order to evaluate their performance in the prediction of the seminal quality from the data of the environmental factors and lifestyle. To do that we collect data by a normalized questionnaire from young healthy volunteers and then, we use the results of a semen analysis to asses the accuracy in the prediction of the three classification methods mentioned above. The results show that Multilayer Perceptron and Support Vector Machines show the highest accuracy, with prediction accuracy values of 86% for some of the seminal parameters. In contrast decision trees provide a visual and illustrative approach that can compensate the slightly lower accuracy obtained. In conclusion artificial intelligence methods are a useful tool in order to predict the seminal profile of an individual from the environmental factors and life habits. From the studied methods, Multilayer Perceptron and Support Vector Machines are the most accurate in the prediction. Therefore these tools, together with the visual help that decision trees offer, are the suggested methods to be included in the evaluation of the infertile patient. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "107960c3c2e714804133f5918ac03b74", "text": "This paper reports on a data-driven motion planning approach for interaction-aware, socially-compliant robot navigation among human agents. 
Autonomous mobile robots navigating in workspaces shared with human agents require motion planning techniques providing seamless integration and smooth navigation in such. Smooth integration in mixed scenarios calls for two abilities of the robot: predicting actions of others and acting predictably for them. The former requirement requests trainable models of agent behaviors in order to accurately forecast their actions in the future, taking into account their reaction on the robot's decisions. A human-like navigation style of the robot facilitates other agents-most likely not aware of the underlying planning technique applied-to predict the robot motion vice versa, resulting in smoother joint navigation. The approach presented in this paper is based on a feature-based maximum entropy model and is able to guide a robot in an unstructured, real-world environment. The model is trained to predict joint behavior of heterogeneous groups of agents from onboard data of a mobile platform. We evaluate the benefit of interaction-aware motion planning in a realistic public setting with a total distance traveled of over 4 km. Interestingly the motion models learned from human-human interaction did not hold for robot-human interaction, due to the high attention and interest of pedestrians in testing basic braking functionality of the robot.", "title": "" }, { "docid": "5781bae1fdda2d2acc87102960dab3ed", "text": "Several static analysis tools, such as Splint or FindBugs, have been proposed to the software development community to help detect security vulnerabilities or bad programming practices. However, the adoption of these tools is hindered by their high false positive rates. If the false positive rate is too high, developers may get acclimated to violation reports from these tools, causing concrete and severe bugs being overlooked. Fortunately, some violations are actually addressed and resolved by developers. We claim that those violations that are recurrently fixed are likely to be true positives, and an automated approach can learn to repair similar unseen violations. However, there is lack of a systematic way to investigate the distributions on existing violations and fixed ones in the wild, that can provide insights into prioritizing violations for developers, and an effective way to mine code and fix patterns which can help developers easily understand the reasons of leading violations and how to fix them. In this paper, we first collect and track a large number of fixed and unfixed violations across revisions of software. The empirical analyses reveal that there are discrepancies in the distributions of violations that are detected and those that are fixed, in terms of occurrences, spread and categories, which can provide insights into prioritizing violations. To automatically identify patterns in violations and their fixes, we propose an approach that utilizes convolutional neural networks to learn features and clustering to regroup similar instances. We then evaluate the usefulness of the identified fix patterns by applying them to unfixed violations. The results show that developers will accept and merge a majority (69/116) of fixes generated from the inferred fix patterns. 
It is also noteworthy that the yielded patterns are applicable to four real bugs in the Defects4J major benchmark for software testing and automated repair.", "title": "" }, { "docid": "50d63f05e453468f8e5234910e3d86d1", "text": "0167-8655/$ see front matter 2011 Published by doi:10.1016/j.patrec.2011.08.019 ⇑ Corresponding author. Tel.: +44 (0) 2075940990; E-mail addresses: gordon.ross03@ic.ac.uk, gr203@i ic.ac.uk (N.M. Adams), d.tasoulis@ic.ac.uk (D.K. Tas Hand). Classifying streaming data requires the development of methods which are computationally efficient and able to cope with changes in the underlying distribution of the stream, a phenomenon known in the literature as concept drift. We propose a new method for detecting concept drift which uses an exponentially weighted moving average (EWMA) chart to monitor the misclassification rate of an streaming classifier. Our approach is modular and can hence be run in parallel with any underlying classifier to provide an additional layer of concept drift detection. Moreover our method is computationally efficient with overhead O(1) and works in a fully online manner with no need to store data points in memory. Unlike many existing approaches to concept drift detection, our method allows the rate of false positive detections to be controlled and kept constant over time. 2011 Published by Elsevier B.V.", "title": "" }, { "docid": "718cdb5e4d0bdd396a0385c10664c22b", "text": "In recent years, convex optimization has become a computational tool of central importance in engineering, thanks to it's ability to solve very large, practical engineering problems reliably and efficiently. The goal of this tutorial is to give an overview of the basic concepts of convex sets, functions and convex optimization problems, so that the reader can more readily recognize and formulate engineering problems using modern convex optimization. This tutorial coincides with the publication of the new book on convex optimization, by Boyd and Vandenberghe, who have made available a large amount of free course material and links to freely available code. These can be downloaded and used immediately by the audience both for self-study and to solve real problems.", "title": "" }, { "docid": "8bff866cf5c585401f410a30dcf998cd", "text": "We describe a ”log-bilinear” model that computes class probabilities by combining an input vector multiplicatively with a vector of binary latent variables. Even though the latent variables can take on exponentially many possible combinations of values, we can efficiently compute the exact probability of each class by marginalizing over the latent variables. This makes it possible to get the exact gradient of the log likelihood. The bilinear score-functions are defined using a three-dimensional weight tensor, and we show that factorizing this tensor allows the model to encode invariances inherent in a task by learning a dictionary of invariant basis functions. Experiments on a set of benchmark problems show that this fully probabilistic model can achieve classification performance that is competitive with (kernel) SVMs, backpropagation, and deep belief nets.", "title": "" }, { "docid": "f35e22d5ee51d8e83836337b3ab51754", "text": "SaaS companies generate revenues by charging recurring subscription fees for using their software services. The fast growth of SaaS companies is usually accompanied with huge upfront costs in marketing expenses targeted at their potential customers. 
Customer retention is a critical issue for SaaS companies because it takes twelve months on average to break-even with the expenses for a single customer. This study describes a methodology for helping SaaS companies manage their customer relationships. We investigated the time-dependent software feature usage data, for example, login numbers and comment numbers, to predict whether a customer would churn within the next three months. Our study compared model performance across four classification algorithms. The XGBoost model yielded the best results for identifying the most important software usage features and for classifying customers as either churn type or non-risky type. Our model achieved a 10-fold cross-validated mean AUC score of 0.7941. Companies can choose to move along the ROC curve to accommodate to their marketing capability. The feature importance output from the XGBoost model can facilitate SaaS companies in identifying the most significant software features to launch more effective marketing campaigns when facing prospective customers.", "title": "" }, { "docid": "5dde27787ee92c2e56729b25b9ca4311", "text": "The prefrontal cortex (PFC) subserves cognitive control: the ability to coordinate thoughts or actions in relation with internal goals. Its functional architecture, however, remains poorly understood. Using brain imaging in humans, we showed that the lateral PFC is organized as a cascade of executive processes from premotor to anterior PFC regions that control behavior according to stimuli, the present perceptual context, and the temporal episode in which stimuli occur, respectively. The results support an unified modular model of cognitive control that describes the overall functional organization of the human lateral PFC and has basic methodological and theoretical implications.", "title": "" } ]
scidocsrr
16ea2e00dc098cc1c71b4f810a20e172
Cyber-bullying taxonomies: Definition, forms, consequences and mitigation strategies
[ { "docid": "64bdb5647b7b05c96de8c0d8f6f00eed", "text": "Cyberbullying is a reality of the digital age. To address this phenomenon, it becomes imperative to understand exactly what cyberbullying is. Thus, establishing a workable and theoretically sound definition is essential. This article contributes to the existing literature in relation to the definition of cyberbullying. The specific elements of repetition, power imbalance, intention, and aggression, regarded as essential criteria of traditional face-to-face bullying, are considered in the cyber context. It is posited that the core bullying elements retain their importance and applicability in relation to cyberbullying. The element of repetition is in need of redefining, given the public nature of material in the online environment. In this article, a clear distinction between direct and indirect cyberbullying is made and a model definition of cyberbullying is offered. Overall, the analysis provided lends insight into how the essential bullying elements have evolved and should apply in our parallel cyber universe.", "title": "" }, { "docid": "117f529b96afc67e1a9ba3058f83049f", "text": "Data from 53 focus groups, which involved students from 10 to 18 years old, show that youngsters often interpret \"cyberbullying\" as \"Internet bullying\" and associate the phenomenon with a wide range of practices. In order to be considered \"true\" cyberbullying, these practices must meet several criteria. They should be intended to hurt (by the perpetrator) and perceived as hurtful (by the victim); be part of a repetitive pattern of negative offline or online actions; and be performed in a relationship characterized by a power imbalance (based on \"real-life\" power criteria, such as physical strength or age, and/or on ICT-related criteria such as technological know-how and anonymity).", "title": "" }, { "docid": "06b0708250515510b8a3fc302045fe4b", "text": "While the subject of cyberbullying of children and adolescents has begun to be addressed, less attention and research have focused on cyberbullying in the workplace. Male-dominated workplaces such as manufacturing settings are found to have an increased risk of workplace bullying, but the prevalence of cyberbullying in this sector is not known. This exploratory study investigated the prevalence and methods of face-to-face bullying and cyberbullying of males at work. One hundred three surveys (a modified version of the revised Negative Acts Questionnaire [NAQ-R]) were returned from randomly selected members of the Australian Manufacturing Workers' Union (AMWU). The results showed that 34% of respondents were bullied face-to-face, and 10.7% were cyberbullied. All victims of cyberbullying also experienced face-to-face bullying. The implications for organizations' \"duty of care\" in regard to this new form of bullying are indicated.", "title": "" } ]
[ { "docid": "9fac5ac1de2ae70964bdb05643d41a68", "text": "A long-standing goal in the field of artificial intelligence is to develop agents that can perceive and understand the rich visual world around us and who can communicate with us about it in natural language. Significant strides have been made towards this goal over the last few years due to simultaneous advances in computing infrastructure, data gathering and algorithms. The progress has been especially rapid in visual recognition, where computers can now classify images into categories with a performance that rivals that of humans, or even surpasses it in some cases such as classifying breeds of dogs. However, despite much encouraging progress, most of the advances in visual recognition still take place in the context of assigning one or a few discrete labels to an image (e.g. person, boat, keyboard, etc.). In this dissertation we develop models and techniques that allow us to connect the domain of visual data and the domain of natural language utterances, enabling translation between elements of the two domains. In particular, first we introduce a model that embeds both images and sentences into a common multi-modal embedding space. This space then allows us to identify images that depict an arbitrary sentence description and conversely, we can identify sentences that describe any image. Second, we develop an image captioning model that takes an image and directly generates a sentence description without being constrained to a finite collection of human-written sentences to choose from. Lastly, we describe a model that can take an image and both localize and describe all of its salient parts. We demonstrate that this model can also be used backwards to take any arbitrary description (e.g. white tennis shoes) and efficiently localize the described concept in a large collection of images. We argue that these models, the techniques they take advantage of internally and the interactions they enable are a stepping stone towards artificial intelligence and that connecting images and natural language offers many practical benefits and immediate valuable applications. From the modeling perspective, instead of designing and staging explicit algorithms to process images and sentences in complex processing pipelines, our contribution lies in the design of hybrid convolutional and recurrent neural network architectures that connect visual data and natural language utterances with a single network. Therefore, the computational processing of images,", "title": "" }, { "docid": "a14ac26274448e0a7ecafdecae4830f9", "text": "Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to long-term memory consolidation and retrieval. Consequently, lifelong learning capabilities are crucial for computational systems and autonomous agents interacting in the real world and processing continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference.
This limitation represents a major drawback for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic forgetting. Although significant advances have been made in domain-specific learning with neural networks, extensive research efforts are required for the development of robust lifelong learning on autonomous agents and robots. We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration.", "title": "" }, { "docid": "6fed39aba9c72f21c553a82d97a2cb23", "text": "This paper presents a position sensorless closed loop control of a switched reluctance linear motor. The aim of the proposed control is to damp the position of the studied motor. Indeed, the position oscillations can harm some applications requiring high position precision. Moreover, they can induce the linear switched reluctance motor to an erratic working. The proposed control solution is based on back Electromotive Forces which give information about the oscillatory behaviour of the studied motor and avoid the use of a cumbersome and expensive position linear sensor. The determination of the designed control law parameters was based on the singular perturbation theory. The efficiency of the proposed control solution was proven by simulations and experimental tests.", "title": "" }, { "docid": "273153d0cf32162acb48ed989fa6d713", "text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.", "title": "" }, { "docid": "327042fae16e69b15a4e8ea857ccdb18", "text": "Do countries with lower policy-induced barriers to international trade grow faster, once other relevant country characteristics are controlled for? There exists a large empirical literature providing an affirmative answer to this question. We argue that methodological problems with the empirical strategies employed in this literature leave the results open to diverse interpretations. In many cases, the indicators of \"openness\" used by researchers are poor measures of trade barriers or are highly correlated with other sources of bad economic performance. In other cases, the methods used to ascertain the link between trade policy and growth have serious shortcomings. Papers that we review include Dollar (1992), Ben-David (1993), Sachs and Warner (1995), and Edwards (1998). 
We find little evidence that open trade policies--in the sense of lower tariff and non-tariff barriers to trade--are significantly associated with economic growth. Francisco Rodríguez Dani R odrik Department of Economics John F. Kennedy School of Government University of Maryland Harvard University College Park, MD 20742 79 Kennedy Street Cambridge, MA 02138 Phone: (301) 405-3480 Phone: (617) 495-9454 Fax: (301) 405-3542 Fax: (617) 496-5747 TRADE POLICY AND ECONOMIC GROWTH: A SKEPTIC'S GUIDE TO THE CROSS-NATIONAL EVIDENCE \"It isn't what we don't know that kills us. It's what we know that ain't so.\" -Mark Twain", "title": "" }, { "docid": "5b545c14a8784383b8d921eb27991749", "text": "In this chapter, neural networks are used to predict the future stock prices and develop a suitable trading system. Wavelet analysis is used to de-noise the time series and the results are compared with the raw time series prediction without wavelet de-noising. Standard and Poor 500 (S&P 500) is used in experiments. We use a gradual data sub-sampling technique, i.e., training the network mostly with recent data, but without neglecting past data. In addition, effects of NASDAQ 100 are studied on prediction of S&P 500. A daily trading strategy is employed to buy/sell according to the predicted prices and to calculate the directional efficiency and the rate of returns for different periods. There are numerous exchange traded funds (ETF’s), which attempt to replicate the performance of S&P 500 by holding the same stocks in the same proportions as the index, and therefore, giving the same percentage returns as S&P 500. Therefore, this study can be used to help invest in any of the various ETFs, which replicates the performance of S&P 500. The experimental results show that neural networks, with appropriate training and input data, can be used to achieve high profits by investing in ETFs based on S&P 500.", "title": "" }, { "docid": "ed012eec144e6f2f0257141404563928", "text": "This paper presents a new direct active and reactive power control (DPC) of grid-connected doubly fed induction generator (DFIG)-based wind turbine systems. The proposed DPC strategy employs a nonlinear sliding-mode control scheme to directly calculate the required rotor control voltage so as to eliminate the instantaneous errors of active and reactive powers without involving any synchronous coordinate transformations. Thus, no extra current control loops are required, thereby simplifying the system design and enhancing the transient performance. Constant converter switching frequency is achieved by using space vector modulation, which eases the designs of the power converter and the ac harmonic filter. Simulation results on a 2-MW grid-connected DFIG system are provided and compared with those of classic voltage-oriented vector control (VC) and conventional lookup table (LUT) DPC. The proposed DPC provides enhanced transient performance similar to the LUT DPC and keeps the steady-state harmonic spectra at the same level as the VC strategy.", "title": "" }, { "docid": "900190a904f64de86745048eabc630b8", "text": "A new methodology for designing and implementing high-efficiency broadband Class-E power amplifiers (PAs) using high-order low-pass filter-prototype is proposed in this paper. A GaN transistor is used in this work, which is carefully modeled and characterized to prescribe the optimal output impedance for the broadband Class-E operation. 
A sixth-order low-pass filter-matching network is designed and implemented for the output matching, which provides optimized fundamental and harmonic impedances within an octave bandwidth (L-band). Simulation and experimental results show that an optimal Class-E PA is realized from 1.2 to 2 GHz (50%) with a measured efficiency of 80%-89%, which is the highest reported today for such a bandwidth. An overall PA bandwidth of 0.9-2.2 GHz (84%) is measured with 10-20-W output power, 10-13-dB gain, and 63%-89% efficiency throughout the band. Furthermore, the Class-E PA is characterized through measurements using constant-envelop global system for mobile communications signals, indicating a favorable adjacent channel power ratio from -40 to -50 dBc within the entire bandwidth.", "title": "" }, { "docid": "f622860032b9a4dd054082be0741f18d", "text": "Full Metal Jacket is a general-purpose visual dataflow language currently being developed on top of Emblem, a Lisp dialect strongly influenced by Common Lisp but smaller and more type-aware, and with support for CLOS-style object orientation, graphics, event handling and multi-threading. Methods in Full Metal Jacket Jacket are directed acyclic graphs. Data arriving at ingates from the calling method flows along edges through vertices, at which it gets transformed by applying Emblem functions or methods, or methods defined in Full Metal Jacket, before it finally arrives at outgates where it is propagated back upwards to the calling method. The principal difference between Full Metal Jacket and existing visual dataflow languages such as Prograph is that Full Metal Jacket is a pure dataflow language, with no special syntax being required for control constructs such as loops or conditionals, which resemble ordinary methods except in the number of times they generate outputs. This uniform syntax means that, like Lisp and Prolog, methods in Full Metal Jacket are themselves data structures and can be manipulated as such.", "title": "" }, { "docid": "00bcab0936aa36b94b67ce38fc89cd2e", "text": "Introduction Forth interpreters can utilize several techniques for implementing threaded code. We will classify these techniques to better understand the mechanisms underlying \"threaded interpretive languages\", or TILs. One basic assumption we will make is that the TIL is implemented on a typical microprocessor (which is usually the case). The following are the elements of any threaded interpretive language. These must be designed together to make the interpreter work.", "title": "" }, { "docid": "92207faaa63e33f51a5c924dbbd4855a", "text": "A significant body of research, spanning approximately the last 25 years, has focused upon the task of developing a better understanding of tumor growth through the use of in vitro mathematical models. Although such models are useful for simulation, in vivo growth differs in significant ways due to the variety of competing biological, biochemical, and mechanical factors present in a living biological system. An in vivo, macroscopic, primary brain tumor growth model is developed, incorporating previous in vitro growth pattern research as well as scientific investigations into the biological and biochemical factors that affect in vivo neoplastic growth. The tumor growth potential model presents an integrated, universal framework that can be employed to predict the direction and extent of spread of a primary brain tumor with respect to time for a specific patient. 
This framework may be extended as necessary to include the results of current and future research into parameters affecting neoplastic proliferation. The patient-specific primary brain tumor growth model is expected to have multiple clinical uses, including: predictive modeling, tumor boundary delineation, growth pattern research, improved radiation surgery planning, and expert diagnostic assistance.", "title": "" }, { "docid": "7631efaa3ee171a320bd6173a3cfc3fd", "text": "In his classic article D’Amico states: “When canines are in normal interlocking position, the lateral and forward movement is limited so that when an attempt is made to move the mandible laterally or forward, there is an involuntary reaction when the canines come in contact. The reaction is an immediate break in the tension of the temporal and masseter muscles, thus reducing the magnitude of the applied force. Regardless of how hard the individual tries to tense these muscles, as long as the canines are in contact, it is impossible for these muscles to assume full tension.” He continues: “The length of the roots of the canines and the anatomical structure of the supporting alveolar process gives testimony to nature’s intention as to the function intended. What may appear as trauma as they come in contact is not trauma at all, because when contact is made, muscular tension is involuntarily reduced, thus reducing the magnitude of applied force.”", "title": "" }, { "docid": "6c8b83e0e02e5c0230d57e4885d27e02", "text": "Contemporary conceptions of physical education pedagogy stress the importance of considering students’ physical, affective, and cognitive developmental states in developing curricula (Aschebrock, 1999; Crum, 1994; Grineski, 1996; Humel, 2000; Hummel & Balz, 1995; Jones & Ward, 1998; Kurz, 1995; Siedentop, 1996; Virgilio, 2000). Sport and physical activity preference is one variable that is likely to change with development. Including activities preferred by girls and boys in physical education curricula could produce several benefits, including greater involvement in lessons and increased enjoyment of physical education (Derner, 1994; Greenwood, Stillwell, & Byars, 2000; Knitt et al., 2000; Lee, Fredenburg, Belcher, & Cleveland, 1999; Sass H. & Sass I., 1986; Strand & Scatling, 1994; Volke, Poszony, & Stumpf, 1985). These are significant goals, because preference for physical activity and enjoyment of physical education are important predictors for overall physical activity participation (Sallis et al., 1999a, b). Although physical education curricula should be based on more than simply students’ preferences, student preferences can inform the design of physical education, other schoolbased physical activity programs, and programs sponsored by other agencies. Young people’s physical activity and sport preferences are likely to vary by age, sex, socio-economic status and nationality. Although several studies have been conducted over many years (Greller & Cochran, 1995; Hoffman & Harris, 2000; Kotonski-Immig, 1994; Lamprecht, Ruschetti, & Stamm, 1991; Strand & Scatling, 1994; Taks, Renson, & Vanreusel, 1991; Telama, 1978; Walton et al., 1999), current understanding of children’s preferences in specific sports and movement activities is limited. One of the main limitations is the cross-sectional nature of the data, so the stability of sport and physical activity preferences over time is not known. 
The main aim of the present research is to describe the levels and trends in the development of sport and physical activity preferences in girls and boys over a period of five years, from the age of 10 to 14. Further, the study aims to establish the stability of preferences over time.", "title": "" }, { "docid": "9b2066a48425cee0d2e31a48e13e5456", "text": "© 2013 Emerenciano et al., licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Biofloc Technology (BFT): A Review for Aquaculture Application and Animal Food Industry", "title": "" }, { "docid": "5585cc22a0af9cf00656ac04b14ade5a", "text": "Side-channel attacks pose a critical threat to the deployment of secure embedded systems. Differential-power analysis is a technique relying on measuring the power consumption of device while it computes a cryptographic primitive, and extracting the secret information from it exploiting the knowledge of the operations involving the key. There is no open literature describing how to properly employ Digital Signal Processing (DSP) techniques in order to improve the effectiveness of the attacks. This paper presents a pre-processing technique based on DSP, reducing the number of traces needed to perform an attack by an order of magnitude with respect to the results obtained with raw datasets, and puts it into practical use attacking a commercial 32-bit software implementation of AES running on a Cortex-M3 CPU. The main contribution of this paper is proposing a leakage model for software implemented cryptographic primitives and an effective framework to extract it.", "title": "" }, { "docid": "b45608b866edf56dbafe633824719dd6", "text": "classroom use is granted without fee provided that copies are not made or distributed for commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.", "title": "" }, { "docid": "165aa4bad30a95866be4aff878fbd2cf", "text": "This paper reviews some recent developments in digital currency, focusing on platform-sponsored currencies such as Facebook Credits. In a model of platform management, we find that it will not likely be profitable for such currencies to expand to become fully convertible competitors to state-sponsored currencies. JEL Classification: D42, E4, L51 Bank Classification: bank notes, economic models, payment clearing and settlement systems * Rotman School of Management, University of Toronto and NBER (Gans) and Bank of Canada (Halaburda). The views here are those of the authors and no responsibility for them should be attributed to the Bank of Canada. We thank participants at the NBER Economics of Digitization Conference, Warren Weber and Glen Weyl for helpful comments on an earlier draft of this paper. Please send any comments to joshua.gans@gmail.com.", "title": "" }, { "docid": "ce098e1e022235a2c322a231bff8da6c", "text": "In recent years, due to the development of three-dimensional scanning technology, the opportunities for real objects to be three-dimensionally measured, taken into the PC as point cloud data, and used for various contents are increasing. 
However, the point cloud data obtained by three-dimensional scanning has many problems, such as data loss due to occlusion or the material of the measured object, and the occurrence of noise. Therefore, it is necessary to edit the point cloud data obtained by scanning. In particular, since the scanned point cloud data contains many missing regions, filling the holes takes much time. Therefore, we propose a method for automatically filling holes in point clouds obtained by three-dimensional scanning. In our method, a surface is generated from points in the vicinity of a hole, and the hole region is filled by generating a point sequence on the surface. This method is suitable for filling a large number of holes because point sequence interpolation can be performed automatically for hole regions without requiring user input.", "title": "" }, { "docid": "5b73883a0bec8434fef8583143dac645", "text": "RC4 is the most widely deployed stream cipher in software applications. In this paper we describe a major statistical weakness in RC4, which makes it trivial to distinguish between short outputs of RC4 and random strings by analyzing their second bytes. This weakness can be used to mount a practical ciphertext-only attack on RC4 in some broadcast applications, in which the same plaintext is sent to multiple recipients under different keys.", "title": "" }, { "docid": "af359933fad5d689718e2464d9c4966c", "text": "Distant supervision can effectively label data for relation extraction, but suffers from the noisy labeling problem. Recent works mainly perform soft bag-level noise reduction strategies to find the relatively better samples in a sentence bag, which is suboptimal compared with making a hard decision on false positive samples at the sentence level. In this paper, we introduce an adversarial learning framework, which we named DSGAN, to learn a sentence-level true-positive generator. Inspired by Generative Adversarial Networks, we regard the positive samples generated by the generator as the negative samples to train the discriminator. The optimal generator is obtained when the discrimination ability of the discriminator shows the greatest decline. We adopt the generator to filter the distant supervision training dataset and redistribute the false positive instances into the negative set, thereby providing a cleaned dataset for relation classification. The experimental results show that the proposed strategy significantly improves the performance of distant supervision relation extraction compared to state-of-the-art systems.", "title": "" } ]
scidocsrr
2bbe76bc2462e995c872d9d135f49afc
Activity recognition based on inertial sensors for Ambient Assisted Living
[ { "docid": "d690cfa0fbb63e53e3d3f7a1c7a6a442", "text": "Ambient intelligence has acquired great importance in recent years and requires the development of new innovative solutions. This paper presents a distributed telemonitoring system, aimed at improving healthcare and assistance to dependent people at their homes. The system implements a service-oriented architecture based platform, which allows heterogeneous wireless sensor networks to communicate in a distributed way independent of time and location restrictions. This approach provides the system with a higher ability to recover from errors and a better flexibility to change their behavior at execution time. Preliminary results are presented in this paper.", "title": "" }, { "docid": "9433908587d6cd375cc1927db6414271", "text": "Ambient Assisted Living (AAL) is an emerging multi-disciplinary field aiming at exploiting information and communication technologies in personal healthcare and telehealth systems for countering the effects of growing elderly population. AAL systems are developed for personalized, adaptive, and anticipatory requirements, necessitating high quality-of-service to achieve interoperability, usability, security, and accuracy. The aim of this paper is to provide a comprehensive review of the AAL field with a focus on healthcare frameworks, platforms, standards, and quality attributes. To achieve this, we conducted a literature survey of state-of-the-art AAL frameworks, systems and platforms to identify the essential aspects of AAL systems and investigate the critical issues from the design, technology, quality-of-service, and user experience perspectives. In addition, we conducted an email-based survey for collecting usage data and current status of contemporary AAL systems. We found that most AAL systems are confined to a limited set of features ignoring many of the essential AAL system aspects. Standards and technologies are used in a limited and isolated manner, while quality attributes are often addressed insufficiently. In conclusion, we found that more inter-organizational collaboration, user-centered studies, increased standardization efforts, and a focus on open systems is needed to achieve more interoperable and synergetic AAL solutions.", "title": "" } ]
[ { "docid": "e8ebec3b64e05cad3ab4c9b3d2bfa191", "text": "Multidimensional databases have recently gained widespread acceptance in the commercial world for supporting on-line analytical processing (OLAP) applications. We propose a hypercube-based data model and a few algebraic operations that provide semantic foundation to multidimensional databases and extend their current functionality. The distinguishing feature of the proposed model is the symmetric treatment not only of all dimensions but also measures. The model also is very exible in that it provides support for multiple hierarchies along each dimension and support for adhoc aggregates. The proposed operators are composable, reorderable, and closed in application. These operators are also minimal in the sense that none can be expressed in terms of others nor can any one be dropped without sacri cing functionality. They make possible the declarative speci cation and optimization of multidimensional database queries that are currently speci ed operationally. The operators have been designed to be translated to SQL and can be implemented either on top of a relational database system or within a special purpose multidimensional database engine. In e ect, they provide an algebraic application programming interface (API) that allows the separation of the frontend from the backend. Finally, the proposed model provides a framework in which to study multidimensional databases and opens several new research problems. Current Address: Oracle Corporation, Redwood City, California. Current Address: University of California, Berkeley, California.", "title": "" }, { "docid": "7731c0fa3dcc993532d785a2156f33ea", "text": "Understanding the generalization properties of deep learning models is critical for successful applications, especially in the regimes where the number of training samples is limited. We study the generalization properties of deep neural networks via the empirical Rademacher complexity and show that it is easier to control the complexity of convolutional networks compared to general fully connected networks. In particular, we justify the usage of small convolutional kernels in deep networks as they lead to a better generalization error. Moreover, we propose a representation based regularization method that allows to decrease the generalization error by controlling the coherence of the representation. Experiments on the MNIST dataset support these foundations.", "title": "" }, { "docid": "c65f050e911abb4b58b4e4f9b9aec63b", "text": "The abundant spatial and contextual information provided by the advanced remote sensing technology has facilitated subsequent automatic interpretation of the optical remote sensing images (RSIs). In this paper, a novel and effective geospatial object detection framework is proposed by combining the weakly supervised learning (WSL) and high-level feature learning. First, deep Boltzmann machine is adopted to infer the spatial and structural information encoded in the low-level and middle-level features to effectively describe objects in optical RSIs. Then, a novel WSL approach is presented to object detection where the training sets require only binary labels indicating whether an image contains the target object or not. Based on the learnt high-level features, it jointly integrates saliency, intraclass compactness, and interclass separability in a Bayesian framework to initialize a set of training examples from weakly labeled images and start iterative learning of the object detector. 
A novel evaluation criterion is also developed to detect model drift and cease the iterative learning. Comprehensive experiments on three optical RSI data sets have demonstrated the efficacy of the proposed approach in benchmarking with several state-of-the-art supervised-learning-based object detection approaches.", "title": "" }, { "docid": "942da03bcd01ecdcb7e1334940c7c549", "text": "This paper introduces three classic statistical topic models: Latent Semantic Indexing (LSI), Probabilistic Latent Semantic Indexing (PLSI) and Latent Dirichlet Allocation (LDA). Then a method of text classification based on the LDA model is briefly described, which uses the LDA model as a text representation method. Each document is represented as a probability distribution over a fixed set of latent topics. Next, a Support Vector Machine (SVM) is chosen as the classification algorithm. Finally, the evaluation metrics of the LDA with SVM classification system are higher than those of the other two methods, LSI with SVM and VSM with SVM, showing better classification performance.", "title": "" }, { "docid": "0a6a170d3ebec3ded7c596d768f9ce85", "text": "This paper presents the method of our submission for the THUMOS15 action recognition challenge. We propose a new action recognition system by exploiting very deep two-stream ConvNets and Fisher vector representation of iDT features. Specifically, we utilize those successful very deep architectures in images such as GoogLeNet and VGGNet to design the two-stream ConvNets. From our experiments, we see that deeper architectures obtain higher performance for spatial nets. However, for the temporal net, deeper architectures could not yield better recognition accuracy. We analyze that the UCF101 dataset is relatively very small and it is very hard to train such deep networks on the current action datasets. Compared with traditional iDT features, our implemented two-stream ConvNets significantly outperform them. We further combine the recognition scores of both two-stream ConvNets and iDT features, and achieve 68% mAP value on the validation dataset of THUMOS15.", "title": "" }, { "docid": "668773ee06fbb728980245cd5a671c0c", "text": "Detection of moving objects in the presence of complex scenes such as dynamic background (e.g., swaying vegetation, ripples in water, spouting fountain), illumination variation, and camouflage is a very challenging task. In this context, we propose a robust background subtraction technique with three contributions. First, we present the use of a color difference histogram (CDH) in the background subtraction algorithm. This is done by measuring the color difference between a pixel and its neighbors in a small local neighborhood. The use of the CDH reduces the number of false errors due to the non-stationary background, illumination variation and camouflage. Secondly, the color difference is fuzzified with a Gaussian membership function. Finally, a novel fuzzy color difference histogram (FCDH) is proposed by using fuzzy c-means (FCM) clustering and exploiting the CDH. The use of the FCM clustering algorithm in the CDH reduces the large dimensionality of the histogram bins in the computation and also lessens the effect of intensity variation generated due to fake motion or change in illumination of the background. The proposed algorithm is tested with various complex scenes of some benchmark publicly available video sequences.
It exhibits better performance over the state-of-the-art background subtraction techniques available in the literature in terms of classification accuracy metrics like MCC and PCC.", "title": "" }, { "docid": "422564b9cd5b6766213baaca1ff110ef", "text": "We take the category system in Wikipedia as a conceptual network. We label the semantic relations between categories using methods based on connectivity in the network and lexicosyntactic matching. As a result we are able to derive a large scale taxonomy containing a large amount of subsumption, i.e. isa, relations. We evaluate the quality of the created resource by comparing it with ResearchCyc, one of the largest manually annotated ontologies, as well as computing semantic similarity between words in benchmarking datasets.", "title": "" }, { "docid": "24116898bef26e6327d79d85e8d290fd", "text": "This paper presents an inclusive set of EMTP models used to simulate the cause of voltage sags such as short circuits, transformer energizing, induction motor starting. Voltage sag is usually described as characteristics of both magnitude and duration, but it is also necessary to detect phase angle jump in order to identify sags phenomena and finding the solutions, especially for sags due to short circuits. In case of the simulation of voltage sags due to short circuit, their effect on the magnitude, duration and phase-jump are studied.", "title": "" }, { "docid": "f260bb2ddc4b0b6c855727c2b8c389fb", "text": "At present, medical experts and researchers turn their attention towards using robotic devices to facilitate human limb rehabilitation. An exoskeleton is such a robotic device, which is used to perform rehabilitation, motion assistance and power augmentation tasks. For effective operation, it is supposed to follow the structure and the motion of the natural human limb. This paper propose a robotic rehabilitation exoskeleton with novel shoulder joint actuation mechanism with a moving center of glenohumeral (CGH) joint. The proposed exoskeleton has four active degrees of freedom (DOFs), namely; shoulder flexion/extension, abduction/adduction, pronation/supination (external/internal rotation), and elbow flexion/extension. In addition to those motions mentioned above, three passive DOFs had been introduced to the shoulder joint mechanism in order to provide allowance for the scapular motion of the shoulder. The novel mechanism allows the movement of CGH — joint in two planes; namely frontal plane during shoulder abduction/adduction and transverse plane during flexion/extension. The displacement of the CGH — joint axis was measured experimentally. These results are then incorporated into the novel mechanism, which takes into account the natural movement characteristics of the human shoulder joint. It is intended to reduce excessive stress on patient's upper limb while carrying out rehabilitation exercises.", "title": "" }, { "docid": "c2b3329a849a5554ab6636bf42218519", "text": "Autism spectrum disorders are not rare; many primary care pediatricians care for several children with autism spectrum disorders. Pediatricians play an important role in early recognition of autism spectrum disorders, because they usually are the first point of contact for parents. Parents are now much more aware of the early signs of autism spectrum disorders because of frequent coverage in the media; if their child demonstrates any of the published signs, they will most likely raise their concerns to their child's pediatrician. 
It is important that pediatricians be able to recognize the signs and symptoms of autism spectrum disorders and have a strategy for assessing them systematically. Pediatricians also must be aware of local resources that can assist in making a definitive diagnosis of, and in managing, autism spectrum disorders. The pediatrician must be familiar with developmental, educational, and community resources as well as medical subspecialty clinics. This clinical report is 1 of 2 documents that replace the original American Academy of Pediatrics policy statement and technical report published in 2001. This report addresses background information, including definition, history, epidemiology, diagnostic criteria, early signs, neuropathologic aspects, and etiologic possibilities in autism spectrum disorders. In addition, this report provides an algorithm to help the pediatrician develop a strategy for early identification of children with autism spectrum disorders. The accompanying clinical report addresses the management of children with autism spectrum disorders and follows this report on page 1162 [available at www.pediatrics.org/cgi/content/full/120/5/1162]. Both clinical reports are complemented by the toolkit titled \"Autism: Caring for Children With Autism Spectrum Disorders: A Resource Toolkit for Clinicians,\" which contains screening and surveillance tools, practical forms, tables, and parent handouts to assist the pediatrician in the identification, evaluation, and management of autism spectrum disorders in children.", "title": "" }, { "docid": "2fbe9db6c676dd64c95e72e8990c63f0", "text": "Community detection is one of the most important problems in the field of complex networks in recent years. Themajority of present algorithms only find disjoint communities, however, community often overlap to some extent in many real-world networks. In this paper, an improvedmulti-objective quantum-behaved particle swarm optimization (IMOQPSO) based on spectral-clustering is proposed to detect the overlapping community structure in complex networks. Firstly, the line graph of the graph modeling the network is formed, and a spectral method is employed to extract the spectral information of the line graph. Secondly, IMOQPSO is employed to solve the multi-objective optimization problem so as to resolve the separated community structure in the line graph which corresponding to the overlapping community structure in the graph presenting the network. Finally, a fine-tuning strategy is adopted to improve the accuracy of community detection. The experiments on both synthetic and real-world networks demonstrate our method achieves cover results which fit the real situation in an even better fashion.", "title": "" }, { "docid": "37f55e03f4d1ff3b9311e537dc7122b5", "text": "Extracting governing equations from data is a central challenge in many diverse areas of science and engineering. Data are abundant whereas models often remain elusive, as in climate science, neuroscience, ecology, finance, and epidemiology, to name only a few examples. In this work, we combine sparsity-promoting techniques and machine learning with nonlinear dynamical systems to discover governing equations from noisy measurement data. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions; this assumption holds for many physical systems in an appropriate basis. 
In particular, we use sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data. This results in parsimonious models that balance accuracy with model complexity to avoid overfitting. We demonstrate the algorithm on a wide range of problems, from simple canonical systems, including linear and nonlinear oscillators and the chaotic Lorenz system, to the fluid vortex shedding behind an obstacle. The fluid example illustrates the ability of this method to discover the underlying dynamics of a system that took experts in the community nearly 30 years to resolve. We also show that this method generalizes to parameterized systems and systems that are time-varying or have external forcing.", "title": "" }, { "docid": "b1da294b1d8f270cb2bfe0074231209e", "text": "The use of depth cameras in precision agriculture is increasing day by day. This type of sensor has been used for the plant structure characterization of several crops. However, the discrimination of small plants, such as weeds, is still a challenge within agricultural fields. Improvements in the new Microsoft Kinect v2 sensor can capture the details of plants. The use of a dual methodology using height selection and RGB (Red, Green, Blue) segmentation can separate crops, weeds, and soil. This paper explores the possibilities of this sensor by using Kinect Fusion algorithms to reconstruct 3D point clouds of weed-infested maize crops under real field conditions. The processed models showed good consistency among the 3D depth images and soil measurements obtained from the actual structural parameters. Maize plants were identified in the samples by height selection of the connected faces and showed a correlation of 0.77 with maize biomass. The lower height of the weeds made RGB recognition necessary to separate them from the soil microrelief of the samples, achieving a good correlation of 0.83 with weed biomass. In addition, weed density showed good correlation with volumetric measurements. The canonical discriminant analysis showed promising results for classification into monocots and dictos. These results suggest that estimating volume using the Kinect methodology can be a highly accurate method for crop status determination and weed detection. It offers several possibilities for the automation of agricultural processes by the construction of a new system integrating these sensors and the development of algorithms to properly process the information provided by them.", "title": "" }, { "docid": "a530b9b997f6e471f74beca325038067", "text": "Do you remember insult sword fi ghting in Monkey Island? The moment when you got off the elevator in the fourth mission of Call of Duty: Modern Warfare 2? Your romantic love affair with Leliana or Alistair in Dragon Age? Dancing as Madison for Paco in his nightclub in Heavy Rain? Climbing and fi ghting Cronos in God of War 3? Some of the most memorable moments from successful video games, have a strong emotional impact on us. It is only natural that game designers and user researchers are seeking methods to better understand the positive and negative emotions that we feel when we are playing games. While game metrics provide excellent methods and techniques to infer behavior from the interaction of the player in the virtual game world, they cannot infer or see emotional signals of a player. 
Emotional signals are observable changes in the state of the human player, such as facial expressions, body posture, or physiological changes in the player’s body. The human eye can observe facial expression, gestures or human sounds that could tell us how a player is feeling, but covert physiological changes are only revealed to us when using sensor equipment, such as", "title": "" }, { "docid": "d655222bf22e35471b18135b67326ac5", "text": "In this paper we approach the robust motion planning problem through the lens of perception-aware planning, whereby we seek a low-cost motion plan subject to a separate constraint on perception localization quality. To solve this problem we introduce the Multiobjective Perception-Aware Planning (MPAP) algorithm which explores the state space via a multiobjective search, considering both cost and a perception heuristic. This perception-heuristic formulation allows us to both capture the history dependence of localization drift and represent complex modern perception methods. The solution trajectory from this heuristic-based search is then certified via Monte Carlo methods to be robust. The additional computational burden of perception-aware planning is offset through massive parallelization on a GPU. Through numerical experiments the algorithm is shown to find robust solutions in about a second. Finally, we demonstrate MPAP on a quadrotor flying perceptionaware and perception-agnostic plans using Google Tango for localization, finding the quadrotor safely executes the perception-aware plan every time, while crashing over 20% of the time on the perception-agnostic due to loss of localization.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "cf21fd00999dff7d974f39b99e71bb13", "text": "Taking r > 0, let π2r(x) denote the number of prime pairs (p, p+ 2r) with p ≤ x. The prime-pair conjecture of Hardy and Littlewood (1923) asserts that π2r(x) ∼ 2C2r li2(x) with an explicit constant C2r > 0. There seems to be no good conjecture for the remainders ω2r(x) = π2r(x)−2C2r li2(x) that corresponds to Riemann’s formula for π(x)−li(x). However, there is a heuristic approximate formula for averages of the remainders ω2r(x) which is supported by numerical results.", "title": "" }, { "docid": "5fc3cbcca7aba6f48da7df299de4abe2", "text": "1. We studied the responses of 103 neurons in visual area V4 of anesthetized macaque monkeys to two novel classes of visual stimuli, polar and hyperbolic sinusoidal gratings. We suspected on both theoretical and experimental grounds that these stimuli would be useful for characterizing cells involved in intermediate stages of form analysis. Responses were compared with those obtained with conventional Cartesian sinusoidal gratings. Five independent, quantitative analyses of neural responses were carried out on the entire population of cells. 2. For each cell, responses to the most effective Cartesian, polar, and hyperbolic grating were compared directly. 
In 18 of 103 cells, the peak response evoked by one stimulus class was significantly different from the peak response evoked by the remaining two classes. Of the remaining 85 cells, 74 had response peaks for the three stimulus classes that were all within a factor of 2 of one another. 3. An information-theoretic analysis of the trial-by-trial responses to each stimulus showed that all but two cells transmitted significant information about the stimulus set as a whole. Comparison of the information transmitted about each stimulus class showed that 23 of 103 cells transmitted a significantly different amount of information about one class than about the remaining two classes. Of the remaining 80 cells, 55 had information transmission rates for the three stimulus classes that were all within a factor of 2 of one another. 4. To identify cells that had orderly tuning profiles in the various stimulus spaces, responses to each stimulus class were fit with a simple Gaussian model. Tuning curves were successfully fit to the data from at least one stimulus class in 98 of 103 cells, and such fits were obtained for at least two classes in 87 cells. Individual neurons showed a wide range of tuning profiles, with response peaks scattered throughout the various stimulus spaces; there were no major differences in the distributions of the widths or positions of tuning curves obtained for the different stimulus classes. 5. Neurons were classified according to their response profiles across the stimulus set with two objective methods, hierarchical cluster analysis and multidimensional scaling. These two analyses produced qualitatively similar results. The most distinct group of cells was highly selective for hyperbolic gratings. The majority of cells fell into one of two groups that were selective for polar gratings: one selective for radial gratings and one selective for concentric or spiral gratings. There was no group whose primary selectivity was for Cartesian gratings. 6. To determine whether cells belonging to identified classes were anatomically clustered, we compared the distribution of classified cells across electrode penetrations with the distribution that would be expected if the cells were distributed randomly. Cells with similar response profiles were often anatomically clustered. 7. A position test was used to determine whether response profiles were sensitive to precise stimulus placement. A subset of Cartesian and non-Cartesian gratings was presented at several positions in and near the receptive field. The test was run on 13 cells from the present study and 28 cells from an earlier study. All cells showed a significant degree of invariance in their selectivity across changes in stimulus position of up to 0.5 classical receptive field diameters. 8. A length and width test was used to determine whether cells preferring non-Cartesian gratings were selective for Cartesian grating length or width. Responses to Cartesian gratings shorter or narrower than the classical receptive field were compared with those obtained with full-field Cartesian and non-Cartesian gratings in 29 cells. 
Of the four cells that had shown significant preferences for non-Cartesian gratings in the main test, none showed tuning for Cartesian grating length or width that would account for their non-Cartesian res", "title": "" }, { "docid": "de6c311c5148ca716aa46ae0f8eeb7fe", "text": "Adversarial machine learning research has recently demonstrated the feasibility to confuse automatic speech recognition (ASR) models by introducing acoustically imperceptible perturbations to audio samples. To help researchers and practitioners gain better understanding of the impact of such attacks, and to provide them with tools to help them more easily evaluate and craft strong defenses for their models, we present Adagio, the first tool designed to allow interactive experimentation with adversarial attacks and defenses on an ASR model in real time, both visually and aurally. Adagio incorporates AMR and MP3 audio compression techniques as defenses, which users can interactively apply to attacked audio samples. We show that these techniques, which are based on psychoacoustic principles, effectively eliminate targeted attacks, reducing the attack success rate from 92.5% to 0%. We will demonstrate Adagio and invite the audience to try it on the Mozilla Common Voice dataset.", "title": "" } ]
scidocsrr
c21dbc365f1389c48f46aefc1c982337
Clustering high-dimensional data: A survey on subspace clustering, pattern-based clustering, and correlation clustering
[ { "docid": "44c0237251d54d6ccccd883bf14c6ff6", "text": "In this paper, we propose a new method for indexing large amounts of point and spatial data in highdimensional space. An analysis shows that index structures such as the R*-tree are not adequate for indexing high-dimensional data sets. The major problem of R-tree-based index structures is the overlap of the bounding boxes in the directory, which increases with growing dimension. To avoid this problem, we introduce a new organization of the directory which uses a split algorithm minimizing overlap and additionally utilizes the concept of supernodes. The basic idea of overlap-minimizing split and supernodes is to keep the directory as hierarchical as possible, and at the same time to avoid splits in the directory that would result in high overlap. Our experiments show that for high-dimensional data, the X-tree outperforms the well-known R*-tree and the TV-tree by up to two orders of magnitude.", "title": "" }, { "docid": "bc49930fa967b93ed1e39b3a45237652", "text": "In gene expression data, a bicluster is a subset of the genes exhibiting consistent patterns over a subset of the conditions. We propose a new method to detect significant biclusters in large expression datasets. Our approach is graph theoretic coupled with statistical modelling of the data. Under plausible assumptions, our algorithm is polynomial and is guaranteed to find the most significant biclusters. We tested our method on a collection of yeast expression profiles and on a human cancer dataset. Cross validation results show high specificity in assigning function to genes based on their biclusters, and we are able to annotate in this way 196 uncharacterized yeast genes. We also demonstrate how the biclusters lead to detecting new concrete biological associations. In cancer data we are able to detect and relate finer tissue types than was previously possible. We also show that the method outperforms the biclustering algorithm of Cheng and Church (2000).", "title": "" }, { "docid": "0e644fc1c567356a2e099221a774232c", "text": "We present a coupled two-way clustering approach to gene microarray data analysis. The main idea is to identify subsets of the genes and samples, such that when one of these is used to cluster the other, stable and significant partitions emerge. The search for such subsets is a computationally complex task. We present an algorithm, based on iterative clustering, that performs such a search. This analysis is especially suitable for gene microarray data, where the contributions of a variety of biological mechanisms to the gene expression levels are entangled in a large body of experimental data. The method was applied to two gene microarray data sets, on colon cancer and leukemia. By identifying relevant subsets of the data and focusing on them we were able to discover partitions and correlations that were masked and hidden when the full dataset was used in the analysis. Some of these partitions have clear biological interpretation; others can serve to identify possible directions for future research.", "title": "" } ]
[ { "docid": "c15093ead030ba1aa020a99c312109fa", "text": "Analysts report spending upwards of 80% of their time on problems in data cleaning. The data cleaning process is inherently iterative, with evolving cleaning workflows that start with basic exploratory data analysis on small samples of dirty data, then refine analysis with more sophisticated/expensive cleaning operators (i.e., crowdsourcing), and finally apply the insights to a full dataset. While an analyst often knows at a logical level what operations need to be done, they often have to manage a large search space of physical operators and parameters. We present Wisteria, a system designed to support the iterative development and optimization of data cleaning workflows, especially ones that utilize the crowd. Wisteria separates logical operations from physical implementations, and driven by analyst feedback, suggests optimizations and/or replacements to the analyst’s choice of physical implementation. We highlight research challenges in sampling, in-flight operator replacement, and crowdsourcing. We overview the system architecture and these techniques, then propose a demonstration designed to showcase how Wisteria can improve iterative data analysis and cleaning. The code is available at: http://www.sampleclean.org.", "title": "" }, { "docid": "1e865bd59571b6c1b1012f229efde437", "text": "Do we really need 3D labels in order to learn how to predict 3D? In this paper, we show that one can learn a mapping from appearance to 3D properties without ever seeing a single explicit 3D label. Rather than use explicit supervision, we use the regularity of indoor scenes to learn the mapping in a completely unsupervised manner. We demonstrate this on both a standard 3D scene understanding dataset as well as Internet images for which 3D is unavailable, precluding supervised learning. Despite never seeing a 3D label, our method produces competitive results.", "title": "" }, { "docid": "a4d7596cfcd4a9133c5677a481c88cf0", "text": "The understanding of where humans look in a scene is a problem of great interest in visual perception and computer vision. When eye-tracking devices are not a viable option, models of human attention can be used to predict fixations. In this paper we give two contribution. First, we show a model of visual attention that is simply based on deep convolutional neural networks trained for object classification tasks. A method for visualizing saliency maps is defined which is evaluated in a saliency prediction task. Second, we integrate the information of these maps with a bottom-up differential model of eye-movements to simulate visual attention scanpaths. Results on saliency prediction and scores of similarity with human scanpaths demonstrate the effectiveness of this model.", "title": "" }, { "docid": "2ecd815af00b9961259fa9b2a9185483", "text": "This paper describes the current development status of a mobile robot designed to inspect the outer surface of large oil ship hulls and floating production storage and offloading platforms. These vessels require a detailed inspection program, using several nondestructive testing techniques. A robotic crawler designed to perform such inspections is presented here. Locomotion over the hull is provided through magnetic tracks, and the system is controlled by two networked PCs and a set of custom hardware devices to drive motors, video cameras, ultrasound, inertial platform, and other devices. 
The navigation algorithm uses an extended-Kalman-filter (EKF) sensor-fusion formulation, integrating odometry and inertial sensors. It was shown that the inertial navigation errors can be decreased by selecting appropriate Q and R matrices in the EKF formulation.", "title": "" }, { "docid": "da4bac81f8544eb729c7e0aafe814927", "text": "This work focuses on representing very high-dimensional global image descriptors using very compact 64-1024 bit binary hashes for instance retrieval. We propose DeepHash: a hashing scheme based on deep networks. Key to making DeepHash work at extremely low bitrates are three important considerations – regularization, depth and fine-tuning – each requiring solutions specific to the hashing problem. In-depth evaluation shows that our scheme consistently outperforms state-of-the-art methods across all data sets for both Fisher Vectors and Deep Convolutional Neural Network features, by up to 20% over other schemes. The retrieval performance with 256-bit hashes is close to that of the uncompressed floating point features – a remarkable 512× compression.", "title": "" }, { "docid": "c7048e00cdb56e2f1085d23b9317c147", "text": "\"Design-for-Assembly (DFA)\" is an engineering concept concerned with improving product designs for easier and less costly assembly operations. Much of the academic and industrial effort in this area has been devoted to the development of analysis tools for measuring the \"assemblability\" of a design. On the other hand, little attention has been paid to the actual redesign process. The goal of this paper is to develop a computer-aided tool for assisting designers in redesigning a product for DFA. One method of redesign, known as the \"replay and modify\" paradigm, is to replay a previous design plan, and modify the plan wherever necessary and possible, in accordance with the original design intention, for newly specified design goals [24]. The \"replay and modify\" paradigm is an effective redesign method because it offers a more global solution than simple local patch-ups. For such a paradigm, design information, such as the design plan and design rationale, must be recorded during design. Unfortunately, such design information is not usually available in practice. To handle the potential absence of the required design information and support the \"replay and modify\" paradigm, the redesign process is modeled as a reverse engineering activity. Reverse engineering roughly refers to an activity of inferring the process, e.g., the design plan, used in creating a given design, and using the inferred knowledge for design recreation or redesign. In this paper, the development of an interactive computer-aided redesign tool for Design-for-Assembly, called REVENGE (REVerse ENGineering), is presented. The architecture of REVENGE is composed mainly of four activities: design analysis, knowledge acquisition, design plan reconstruction, and case-based design modification. First a DFA analysis is performed to uncover any undesirable aspects of the design with respect to its assemblability. REVENGE, then, interactively solicits designers for useful design information that might not be available from standard design documents such as design rationale. Then, a heuristic algorithm reconstructs a default design plan. A default design plan is a sequence of probable design actions that might have led to the original design. DFA problems identified during the analysis stage are mapped to the portion of the design plan from which they might have originated.
Problems that originate from the earlier portion of the design plan are attacked first. A case-based approach is used to solve each problem by retrieving a similar redesign case and adapting it to the current situation. REVENGE has been implemented, and has been tested …", "title": "" }, { "docid": "109cf07cb1c8fcfbd6979922d3eee381", "text": "Presently, information retrieval can be accomplished simply and rapidly with the use of search engines. This allows users to specify the search criteria as well as specific keywords to obtain the required results. Additionally, an index of search engines has to be updated with the most recent information, as it constantly changes over time. In particular, information retrieval results as documents are typically too extensive, which affects the accessibility of the required results for searchers. Consequently, a similarity measurement between keywords and index terms is essential to help searchers access the required results promptly. Thus, this paper proposes a similarity measurement method between words based on the Jaccard coefficient. Technically, we implemented the Jaccard similarity measure in the Prolog programming language to compare the similarity between sets of data. Furthermore, the performance of this proposed similarity measurement method was evaluated by employing precision, recall, and F-measure. The test results demonstrate the advantages and disadvantages of the measure when adapted and applied to meaning-based search using the Jaccard similarity coefficient.", "title": "" }, { "docid": "9d3c3a3fa17f47da408be1e24d2121cc", "text": "In this letter, compact substrate integrated waveguide (SIW) power dividers are presented. Both equal and unequal power divisions are considered. A quarter-wavelength long wedge-shaped SIW structure is used for the power division. A direct coaxial feed is used for the input port and SIW-to-microstrip transitions are used for the output ports. Four-way equal, unequal and an eight-way equal division power dividers are presented. The four-way and the eight-way power dividers provide -10 dB input matching bandwidth of 39.3% and 13%, respectively, at the design frequency f0 = 2.4 GHz. The main advantage of the power dividers is their compact size. Including the microstrip-to-SIW transitions, the size is reduced by at least 46% compared to other reported miniaturized SIW power dividers.", "title": "" }, { "docid": "fd63f9b9454358810a68fc003452509b", "text": "The years that students spend in college are perhaps the most influential years on the rest of their lives. College students face many different decisions day in and day out that may determine how successful they will be in the future. They will choose majors, whether or not to play a sport, which clubs to join, whether they should join a fraternity or sorority, which classes to take, and how much time to spend studying. It is unclear what aspects of college will benefit a person the most down the road. Are some majors better than others? Is earning a high GPA important? Or will simply getting a degree be enough to make a good living? These are a few of the many questions that college students have.", "title": "" }, { "docid": "1a5d9971b674a8d54a0aae7091b02aff", "text": "Controlling the electric appliances is the essential technique in the home automation, and wireless communication between the residence gateway and electric appliances is one of the most important parts in the home network system.
In these days, most of the electric appliances are controlled by infrared remote controllers. However, it is very difficult to connect most of the electric appliances to a home network, since the communication protocols are different. In this paper, we propose an integrated remote controller to control electric appliances in the home network with no extra attachment of communication device to the appliances using ZigBee protocol and infrared remote controller technology. The integrated remote controller system for home automation is composed of integrated remote controller, ZigBee to infrared converter, and ZigBee power adapter. ZigBee power adapter is introduced for some appliances which do not have even infrared remote device to be connected in home network. This paper presents a prototype of the proposed system and shows a scheme for the implementation. It provides high flexibility for the users to configure and manage a home network in order to control electric appliances.", "title": "" }, { "docid": "ce1e222bae70cdc4ac22189e4fd9c69f", "text": "In the era of big data, the amount of data that individuals and enterprises hold is increasing, and the efficiency and effectiveness of data analysis are increasingly demanding. Collaborative deep learning, as a machine learning framework that can share users' data and improve learning efficiency, has drawn more and more attention and started to be applied in practical problems. In collaborative deep learning, data sharing and interaction among multi users may lead data leakage especially when data are very sensitive to the user. Therefore, how to protect the data privacy when processing collaborative deep learning becomes an important problem. In this paper, we review the current state of art researches in this field and summarize the application of privacy-preserving technologies in two phases of collaborative deep learning. Finally we discuss the future direction and trend on this problem.", "title": "" }, { "docid": "1aede573b82b9776ac4e4db11cef4157", "text": "In this work, we have designed and implemented a microcontroller-based embedded system for blood pressure monitoring through a PhotoPlethysmoGraphic (PPG) technique. In our system, it is possible to perform PPG measurements via reflectance mode. Hardware novelty of our system consists in the adoption of Silicon PhotoMultiplier detectors. The signal received from the photodetector is used to calculate the instantaneous heart rate and therefore the heart rate variability. The obtained results show that, by using our system, it is possible to easily extract both the PPG and the breath signal. These signals can be used to monitor the patients during the convalescence both in hospital and at home.", "title": "" }, { "docid": "c19658ecdae085902d936f615092fbe5", "text": "Predicting student attrition is an intriguing yet challenging problem for any academic institution. Classimbalanced data is a common in the field of student retention, mainly because a lot of students register but fewer students drop out. Classification techniques for imbalanced dataset can yield deceivingly high prediction accuracy where the overall predictive accuracy is usually driven by the majority class at the expense of having very poor performance on the crucial minority class. In this study, we compared different data balancing techniques to improve the predictive accuracy in minority class while maintaining satisfactory overall classification performance. 
Specifically, we tested three balancing techniques—oversampling, under-sampling and synthetic minority over-sampling (SMOTE)—along with four popular classification methods—logistic regression, decision trees, neural networks and support vector machines. We used a large and feature-rich institutional student dataset (between the years 2005 and 2011) to assess the efficacy of both the balancing techniques and the prediction methods. The results indicated that the support vector machine combined with the SMOTE data-balancing technique achieved the best classification performance, with a 90.24% overall accuracy on the 10-fold holdout sample. All three data-balancing techniques improved the prediction accuracy for the minority class. Applying sensitivity analyses on the developed models, we also identified the most important variables for accurate prediction of student attrition. Application of these models has the potential to accurately predict at-risk students and help reduce student dropout rates. © 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "39a44520b1df1529ea7f89335fc6a19c", "text": "An area-efficient cross feedforward cascode compensation (CFCC) technique is presented for a three-stage amplifier. The proposed amplifier is capable of driving a heavy capacitive load at low power consumption, but is not dedicated to heavy load currents or heavy resistive loading. The CFCC technique enables the nondominant complex poles of the amplifier to be located at high frequencies, resulting in bandwidth extension. The amplifier can be stabilized with a cascode compensation capacitor of only 1.15 pF when driving a 500-pF capacitive load, greatly reducing the overall area of the amplifier. In addition, the presence of two left-half-plane (LHP) zeros in the proposed scheme improves the phase margin and relaxes the stability criteria. The proposed technique has been implemented and fabricated in a UMC 65-nm CMOS process and it achieves a 2-MHz gain-bandwidth product (GBW) when driving a 500-pF capacitive load while consuming only 20.4 μW at a 1.2-V supply. The proposed compensation technique compares favorably in terms of figures-of-merit (FOM) to previously reported works. Most significantly, the CFCC amplifier achieves the highest load capacitance to total compensation capacitance ratio (CL/CT) of all its counterparts.", "title": "" }, { "docid": "814c69ae155f69ee481255434039b00c", "text": "The introduction of semantics on the web will lead to a new generation of services based on content rather than on syntax. Search engines will provide topic-based searches, retrieving resources conceptually related to the user's informational need. Queries will be expressed in several ways, and will be mapped onto the semantic level to define the topics that must be retrieved from the web. Moving towards this new Web era, effective semantic search engines will provide means for successful searches, avoiding the heavy burden experienced by users in a classical query-string-based search task. In this paper we propose a search engine based on web resource semantics. Resources to be retrieved are semantically annotated using an existing open semantic elaboration platform, and an ontology is used to describe the knowledge domain in which queries are performed. 
Ontology navigation provides semantic-level reasoning in order to retrieve meaningful resources with respect to a given information request.", "title": "" }, { "docid": "fa7177c3e65ea78911a953ef75c7cdac", "text": "Schizophrenia for many patients is a lifelong mental disorder with significant consequences on most functional domains. One fifth to one third of patients with schizophrenia experience persistent psychotic symptoms despite adequate trials of antipsychotic treatment, and are considered to have treatment-resistant schizophrenia (TRS). Clozapine is the only medication to demonstrate efficacy for psychotic symptoms in such patients. However, clozapine is not effective in 40%-70% of patients with TRS and it has significant limitations in terms of potentially life-threatening side effects and the associated monitoring. Accordingly, a number of pharmacological and non-pharmacological biological approaches for clozapine-resistant TRS have emerged. This article provides a brief updated critical review of recent therapeutic strategies for TRS, particularly for clozapine-resistant TRS, which include pharmacotherapy, electroconvulsive therapy, repetitive transcranial magnetic stimulation, and transcranial direct current stimulation.", "title": "" }, { "docid": "573bc5d62ce73cd2dc352bece75cedcf", "text": "Software deobfuscation is a crucial activity in security analysis and especially in malware analysis. While standard static and dynamic approaches suffer from well-known shortcomings, Dynamic Symbolic Execution (DSE) has recently been proposed as an interesting alternative, more robust than static analysis and more complete than dynamic analysis. Yet, DSE addresses only certain kinds of questions encountered by a reverser, namely feasibility questions. Many issues arising during reversing, e.g. detecting protection schemes such as opaque predicates, fall into the category of infeasibility questions. In this article, we present Backward-Bounded DSE, a generic, precise, efficient and robust method for solving infeasibility questions. We demonstrate the benefit of the method for opaque predicates and call stack tampering, and give some insight into its usage for some other protection schemes. Especially, the technique has successfully been used on state-of-the-art packers as well as on the government-grade X-Tunnel malware – allowing its entire deobfuscation. Backward-Bounded DSE does not supersede existing DSE approaches, but rather complements them by addressing infeasibility questions in a scalable and precise manner. Following this line, we propose sparse disassembly, a combination of Backward-Bounded DSE and static disassembly able to enlarge dynamic disassembly in a guaranteed way, hence getting the best of dynamic and static disassembly. This work paves the way for robust, efficient and precise disassembly tools for heavily-obfuscated binaries.", "title": "" }, { "docid": "0414688abd9c2471bbcbe06a56b134ca", "text": "We provide new theoretical insights on why overparametrization is effective in learning neural networks. For a k-hidden-node shallow network with quadratic activation and n training data points, we show that, as long as k ≥ √2n, over-parametrization enables local search algorithms to find a globally optimal solution for general smooth and convex loss functions. 
Further, even though the number of parameters may exceed the sample size, using the theory of Rademacher complexity we show that, with weight decay, the solution also generalizes well if the data is sampled from a regular distribution such as a Gaussian. To prove that, when k ≥ √2n, the loss function has benign landscape properties, we adopt an idea from smoothed analysis, which may have other applications in studying loss surfaces of neural networks.", "title": "" }, { "docid": "b4409a8e8a47bc07d20cebbfaccb83fd", "text": "We evaluate two decades of proposals to replace text passwords for general-purpose user authentication on the web using a broad set of twenty-five usability, deployability and security benefits that an ideal scheme might provide. The scope of proposals we survey is also extensive, including password management software, federated login protocols, graphical password schemes, cognitive authentication schemes, one-time passwords, hardware tokens, phone-aided schemes and biometrics. Our comprehensive approach leads to key insights about the difficulty of replacing passwords. Not only does no known scheme come close to providing all desired benefits: none even retains the full set of benefits that legacy passwords already provide. In particular, there is a wide range from schemes offering minor security benefits beyond legacy passwords, to those offering significant security benefits in return for being more costly to deploy or more difficult to use. We conclude that many academic proposals have failed to gain traction because researchers rarely consider a sufficiently wide range of real-world constraints. Beyond our analysis of current schemes, our framework provides an evaluation methodology and benchmark for future web authentication proposals.", "title": "" }, { "docid": "9eacc5f0724ff8fe2152930980dded4b", "text": "A computer-controlled adjustable nanosecond pulse generator based on a high-voltage MOSFET is designed in this paper; it offers stable performance and a miniaturized profile of 32×30×7 cm3. The experimental results show that the pulser can generate electrical pulses with a Gaussian rising time of 20 nanoseconds, a section-adjustable index falling time of 40–200 nanoseconds, a continuously adjustable repetition frequency of 0–5 kHz, and a quasi-continuously adjustable amplitude of 0–1 kV at a 50 Ω load, and that the pulser could meet the requirements.", "title": "" } ]
scidocsrr
7cb9a42193b0eb31d61a415b67ed3363
Information Theoretic Measures for Clusterings Comparison: Variants, Properties, Normalization and Correction for Chance
[ { "docid": "2e99e535f2605e88571407142e4927ee", "text": "Stability is a common tool to verify the validity of sample based algorithms. In clustering it is widely used to tune the parameters of the algorithm, such as the number k of clusters. In spite of the popularity of stability in practical applications, there has been very little theoretical analysis of this notion. In this paper we provide a formal definition of stability and analyze some of its basic properties. Quite surprisingly, the conclusion of our analysis is that for large sample size, stability is fully determined by the behavior of the objective function which the clustering algorithm is aiming to minimize. If the objective function has a unique global minimizer, the algorithm is stable, otherwise it is unstable. In particular we conclude that stability is not a well-suited tool to determine the number of clusters it is determined by the symmetries of the data which may be unrelated to clustering parameters. We prove our results for center-based clusterings and for spectral clustering, and support our conclusions by many examples in which the behavior of stability is counter-intuitive.", "title": "" }, { "docid": "4cd09cc6aa67d1314ca5de09d1240b65", "text": "A new class of metrics appropriate for measuring effective similarity relations between sequences, say one type of similarity per metric, is studied. We propose a new \"normalized information distance\", based on the noncomputable notion of Kolmogorov complexity, and show that it minorizes every metric in the class (that is, it is universal in that it discovers all effective similarities). We demonstrate that it too is a metric and takes values in [0, 1]; hence it may be called the similarity metric. This is a theory foundation for a new general practical tool. We give two distinctive applications in widely divergent areas (the experiments by necessity use just computable approximations to the target notions). First, we computationally compare whole mitochondrial genomes and infer their evolutionary history. This results in a first completely automatic computed whole mitochondrial phylogeny tree. Secondly, we give fully automatically computed language tree of 52 different language based on translated versions of the \"Universal Declaration of Human Rights\".", "title": "" }, { "docid": "335847313ee670dc0648392c91d8567a", "text": "Several large scale data mining applications, such as text c ategorization and gene expression analysis, involve high-dimensional data that is also inherentl y directional in nature. Often such data is L2 normalized so that it lies on the surface of a unit hyperspher e. Popular models such as (mixtures of) multi-variate Gaussians are inadequate for characteri zing such data. This paper proposes a generative mixture-model approach to clustering directional data based on the von Mises-Fisher (vMF) distribution, which arises naturally for data distributed on the unit hypersphere. In particular, we derive and analyze two variants of the Expectation Maximiza tion (EM) framework for estimating the mean and concentration parameters of this mixture. Nume rical estimation of the concentration parameters is non-trivial in high dimensions since it i nvolves functional inversion of ratios of Bessel functions. We also formulate two clustering algorit hms corresponding to the variants of EM that we derive. 
Our approach provides a theoretical basis for the use of cosine similarity that has been widely employed by the information retrieval community, and obtains the spherical kmeans algorithm (kmeans with cosine similarity) as a special case of both variants. Empirical results on clustering of high-dimensional text and gene-expression data based on a mixture of vMF distributions show that the ability to estimate the concentration parameter for each vMF component, which is not present in existing approaches, yields superior results, especially for difficult clustering tasks in high-dimensional spaces.", "title": "" } ]
[ { "docid": "c9e1c4b2a043ba43fbd07b05e8742e41", "text": "BACKGROUND\nThere has been research on the use of offline video games for therapeutic purposes but online video game therapy is still fairly under-researched. Online therapeutic interventions have only recently included a gaming component. Hence, this review represents a timely first step toward taking advantage of these recent technological and cultural innovations, particularly for the treatment of special-needs groups such as the young, the elderly and people with various conditions such as ADHD, anxiety and autism spectrum disorders.\n\n\nMATERIAL\nA review integrating research findings on two technological advances was conducted: the home computer boom of the 1980s, which triggered a flood of research on therapeutic video games for the treatment of various mental health conditions; and the rise of the internet in the 1990s, which caused computers to be seen as conduits for therapeutic interaction rather than replacements for the therapist.\n\n\nDISCUSSION\nWe discuss how video games and the internet can now be combined in therapeutic interventions, as attested by a consideration of pioneering studies.\n\n\nCONCLUSION\nFuture research into online video game therapy for mental health concerns might focus on two broad types of game: simple society games, which are accessible and enjoyable to players of all ages, and online worlds, which offer a unique opportunity for narrative content and immersive remote interaction with therapists and fellow patients. Both genres might be used for assessment and training purposes, and provide an unlimited platform for social interaction. The mental health community can benefit from more collaborative efforts between therapists and engineers, making such innovations a reality.", "title": "" }, { "docid": "e9dc7d048b53ec9649dec65e05a77717", "text": "Recent advances in object detection have exploited object proposals to speed up object searching. However, many of existing object proposal generators have strong localization bias or require computationally expensive diversification strategies. In this paper, we present an effective approach to address these issues. We first propose a simple and useful localization bias measure, called superpixel tightness. Based on the characteristics of superpixel tightness distribution, we propose an effective method, namely multi-thresholding straddling expansion (MTSE) to reduce localization bias via fast diversification. Our method is essentially a box refinement process, which is intuitive and beneficial, but seldom exploited before. The greatest benefit of our method is that it can be integrated into any existing model to achieve consistently high recall across various intersection over union thresholds. Experiments on PASCAL VOC dataset demonstrates that our approach improves numerous existing models significantly with little computational overhead.", "title": "" }, { "docid": "bde253462808988038235a46791affc1", "text": "Power electronic Grid-Connected Converters (GCCs) are widely applied as grid interface in renewable energy sources. This paper proposes an extended Direct Power Control with Space Vector Modulation (DPC-SVM) scheme with improved operation performance under grid distortions. The real-time operated DPC-SVM scheme has to execute several important tasks as: space vector pulse width modulation, active and reactive power feedback control, grid current harmonics and voltage dips compensation. 
Thus, development and implementation of the DPC-SVM algorithm using single chip floating-point microcontroller TMS320F28335 is described. It combines large peripheral equipment, typical for microcontrollers, with high computation capacity characteristic for Digital Signal Processors (DSPs). The novelty of the proposed system lies in extension of the generic DPC-SVM scheme by additional higher harmonic and voltage dips compensation modules and implementation of the whole algorithm in a single chip floating point microcontroller. Overview of the laboratory setup, description of basic algorithm subtasks sequence, software optimization as well as execution time of specific program modules on fixed-point and floating-point processors are discussed. Selected oscillograms illustrating operation and robustness of the developed algorithm used in 5 kVA laboratory model of the GCC are presented.", "title": "" }, { "docid": "0123fd04bc65b8dfca7ff5c058d087da", "text": "The authors forward the hypothesis that social exclusion is experienced as painful because reactions to rejection are mediated by aspects of the physical pain system. The authors begin by presenting the theory that overlap between social and physical pain was an evolutionary development to aid social animals in responding to threats to inclusion. The authors then review evidence showing that humans demonstrate convergence between the 2 types of pain in thought, emotion, and behavior, and demonstrate, primarily through nonhuman animal research, that social and physical pain share common physiological mechanisms. Finally, the authors explore the implications of social pain theory for rejection-elicited aggression and physical pain disorders.", "title": "" }, { "docid": "127ba400911644a0a4e2d0f7bbb694b2", "text": "From autonomous cars and adaptive email-filters to predictive policing systems, machine learning (ML) systems are increasingly ubiquitous; they outperform humans on specific tasks [Mnih et al., 2013, Silver et al., 2016, Hamill, 2017] and often guide processes of human understanding and decisions [Carton et al., 2016, Doshi-Velez et al., 2014]. The deployment of ML systems in complex applications has led to a surge of interest in systems optimized not only for expected task performance but also other important criteria such as safety [Otte, 2013, Amodei et al., 2016, Varshney and Alemzadeh, 2016], nondiscrimination [Bostrom and Yudkowsky, 2014, Ruggieri et al., 2010, Hardt et al., 2016], avoiding technical debt [Sculley et al., 2015], or providing the right to explanation [Goodman and Flaxman, 2016]. For ML systems to be used safely, satisfying these auxiliary criteria is critical. However, unlike measures of performance such as accuracy, these criteria often cannot be completely quantified. For example, we might not be able to enumerate all unit tests required for the safe operation of a semi-autonomous car or all confounds that might cause a credit scoring system to be discriminatory. In such cases, a popular fallback is the criterion of interpretability : if the system can explain its reasoning, we then can verify whether that reasoning is sound with respect to these auxiliary criteria. Unfortunately, there is little consensus on what interpretability in machine learning is and how to evaluate it for benchmarking. Current interpretability evaluation typically falls into two categories. 
The first evaluates interpretability in the context of an application: if the system is useful in either a practical application or a simplified version of it, then it must be somehow interpretable (e.g. Ribeiro et al. [2016], Lei et al. [2016], Kim et al. [2015a], Doshi-Velez et al. [2015], Kim et al. [2015b]). The second evaluates interpretability via a quantifiable proxy: a researcher might first claim that some model class—e.g. sparse linear models, rule lists, gradient boosted trees—are interpretable and then present algorithms to optimize within that class (e.g. Bucilu et al. [2006], Wang et al. [2017], Wang and Rudin [2015], Lou et al. [2012]). To large extent, both evaluation approaches rely on some notion of “you’ll know it when you see it.” Should we be concerned about a lack of rigor? Yes and no: the notions of interpretability above appear reasonable because they are reasonable: they meet the first test of having facevalidity on the correct test set of subjects: human beings. However, this basic notion leaves many kinds of questions unanswerable: Are all models in all defined-to-be-interpretable model classes equally interpretable? Quantifiable proxies such as sparsity may seem to allow for comparison, but how does one think about comparing a model sparse in features to a model sparse in prototypes? Moreover, do all applications have the same interpretability needs? If we are to move this field forward—to compare methods and understand when methods may generalize—we need to formalize these notions and make them evidence-based. The objective of this review is to chart a path toward the definition and rigorous evaluation of interpretability. The need is urgent: recent European Union regulation will require algorithms", "title": "" }, { "docid": "42b8163ac8544dae2060f903c377b201", "text": "Cloud storage systems are currently very popular, generating a large amount of traffic. Indeed, many companies offer this kind of service, including worldwide providers such as Dropbox, Microsoft and Google. These companies, as well as new providers entering the market, could greatly benefit from knowing typical workload patterns that their services have to face in order to develop more cost-effective solutions. However, despite recent analyses of typical usage patterns and possible performance bottlenecks, no previous work investigated the underlying client processes that generate workload to the system. In this context, this paper proposes a hierarchical two-layer model for representing the Dropbox client behavior. We characterize the statistical parameters of the model using passive measurements gathered in 3 different network vantage points. Our contributions can be applied to support the design of realistic synthetic workloads, thus helping in the development and evaluation of new, well-performing personal cloud storage services.", "title": "" }, { "docid": "c3152bfcbae60b5b5aaa1c64146538d8", "text": "BACKGROUND AND PURPOSE\nIn clinical trials and observational studies there is considerable inconsistency in the use of definitions to describe delayed cerebral ischemia (DCI) after aneurysmal subarachnoid hemorrhage. A major cause for this inconsistency is the combining of radiographic evidence of vasospasm with clinical features of cerebral ischemia, although multiple factors may contribute to DCI. The second issue is the variability and overlap of terms used to describe each phenomenon. 
This makes comparisons among studies difficult.\n\n\nMETHODS\nAn international ad hoc panel of experts involved in subarachnoid hemorrhage research developed and proposed a definition of DCI to be used as an outcome measure in clinical trials and observational studies. We used a consensus-building approach.\n\n\nRESULTS\nIt is proposed that in observational studies and clinical trials aiming to investigate strategies to prevent DCI, the 2 main outcome measures should be: (1) cerebral infarction identified on CT or MRI or proven at autopsy, after exclusion of procedure-related infarctions; and (2) functional outcome. Secondary outcome measure should be clinical deterioration caused by DCI, after exclusion of other potential causes of clinical deterioration. Vasospasm on angiography or transcranial Doppler can also be used as an outcome measure to investigate proof of concept but should be interpreted in conjunction with DCI or functional outcome.\n\n\nCONCLUSIONS\nThe proposed measures reflect the most relevant morphological and clinical features of DCI without regard to pathogenesis to be used as an outcome measure in clinical trials and observational studies.", "title": "" }, { "docid": "f354fec9ea2fc5d78f105cd1921a5137", "text": "Network embedding has recently attracted lots of attentions in data mining. Existing network embedding methods mainly focus on networks with pairwise relationships. In real world, however, the relationships among data points could go beyond pairwise, i.e., three or more objects are involved in each relationship represented by a hyperedge, thus forming hyper-networks. These hyper-networks pose great challenges to existing network embedding methods when the hyperedges are indecomposable, that is to say, any subset of nodes in a hyperedge cannot form another hyperedge. These indecomposable hyperedges are especially common in heterogeneous networks. In this paper, we propose a novel Deep Hyper-Network Embedding (DHNE) model to embed hypernetworks with indecomposable hyperedges. More specifically, we theoretically prove that any linear similarity metric in embedding space commonly used in existing methods cannot maintain the indecomposibility property in hypernetworks, and thus propose a new deep model to realize a non-linear tuplewise similarity function while preserving both local and global proximities in the formed embedding space. We conduct extensive experiments on four different types of hyper-networks, including a GPS network, an online social network, a drug network and a semantic network. The empirical results demonstrate that our method can significantly and consistently outperform the state-of-the-art algorithms.", "title": "" }, { "docid": "265d69d874481270c26eb371ca05ac51", "text": "A compact dual-band dual-polarized antenna is proposed in this paper. The two pair dipoles with strong end coupling are used for the lower frequency band, and cross-placed patch dipoles are used for the upper frequency band. The ends of the dipoles for lower frequency band are bent to increase the coupling between adjacent dipoles, which can benefit the compactness and bandwidth of the antenna. Breaches are introduced at the ends of the dipoles of the upper band, which also benefit the compactness and matching of the antenna. An antenna prototype was fabricated and measured. The measured results show that the antenna can cover from 790 MHz to 960 MHz (19.4%) for lower band and from 1710 MHz to 2170 MHz (23.7%) for upper band with VSWR < 1.5. 
It is expected to be a good candidate design for base station antennas.", "title": "" }, { "docid": "744d7ce024289df3f32c0d5d3ec6becf", "text": "Three homeotic mutants, aristapedia (ssa and ssa-UCl) and Nasobemia (Ns) which involve antenna-leg transformations were analyzed with respect to their time of expression. In particular we studied the question of whether these mutations are expressed when the mutant cells pass through additional cell divisions in culture. Mutant antennal discs were cultured in vivo and allowed to duplicate the antennal anlage. Furthermore, regeneration of the mutant antennal anlage was obtained by culturing eye discs and a particular fragment of the eye disc. Both duplicated and regenerated antennae showed at least a partial transformation into leg structures which indicates that the mutant gene is expressed during proliferation in culture.", "title": "" }, { "docid": "ac8cef535e5038231cdad324325eaa37", "text": "There are mainly two types of Emergent Self-Organizing Maps (ESOM) grid structures in use: hexgrid (honeycomb like) and quadgrid (trellis like) maps. In addition to that, the shape of the maps may be square or rectangular. This work investigates the effects of these different map layouts. Hexgrids were found to have no convincing advantage over quadgrids. Rectangular maps, however, are distinctively superior to square maps. Most surprisingly, rectangular maps outperform square maps for isotropic data, i.e. data sets with no particular primary direction.", "title": "" }, { "docid": "2409f9a37398dbff4306930280c76e81", "text": "OBJECTIVES\nThe dose-response relationship for hand-transmitted vibration has been investigated extensively in temperate environments. Since the clinical features of hand-arm vibration syndrome (HAVS) differ between the temperate and tropical environment, we conducted this study to investigate the dose-response relationship of HAVS in a tropical environment.\n\n\nMETHODS\nA total of 173 male construction, forestry and automobile manufacturing plant workers in Malaysia were recruited into this study between August 2011 and 2012. The participants were interviewed for history of vibration exposure and HAVS symptoms, followed by hand functions evaluation and vibration measurement. Three types of vibration doses-lifetime vibration dose (LVD), total operating time (TOT) and cumulative exposure index (CEI)-were calculated and its log values were regressed against the symptoms of HAVS. The correlation between each vibration exposure dose and the hand function evaluation results was obtained.\n\n\nRESULTS\nThe adjusted prevalence ratio for finger tingling and numbness was 3.34 (95% CI 1.27 to 8.98) for subjects with lnLVD≥20 ln m(2) s(-4) against those <16 ln m(2) s(-4). Similar dose-response pattern was found for CEI but not for TOT. No subject reported white finger. The prevalence of finger coldness did not increase with any of the vibration doses. Vibrotactile perception thresholds correlated moderately with lnLVD and lnCEI.\n\n\nCONCLUSIONS\nThe dose-response relationship of HAVS in a tropical environment is valid for finger tingling and numbness. The LVD and CEI are more useful than TOT when evaluating the dose-response pattern of a heterogeneous group of vibratory tools workers.", "title": "" }, { "docid": "10124ea154b8704c3a6aaec7543ded57", "text": "Tomato bacterial wilt and canker, caused by Clavibacter michiganensis subsp. michiganensis (Cmm) is considered one of the most important bacterial diseases of tomato worldwide. 
During the last two decades, severe outbreaks have occurred in greenhouses in the horticultural belt of Buenos Aires-La Plata, Argentina. Cmm strains collected in this area over a period of 14 years (2000–2013) were characterized for genetic diversity by rep-PCR genomic fingerprinting and level of virulence in order to have a better understanding of the source of inoculum and virulence variability. Analyses of BOX-, ERIC- and REP-PCR fingerprints revealed that the strains were genetically diverse; the same three fingerprint types were obtained in all three cases. No relationship could be established between rep-PCR clustering and the year, location or greenhouse origin of isolates, which suggests different sources of inoculum. However, in a few cases, bacteria with identical fingerprint types were isolated from the same greenhouse in different years. Despite strains differing in virulence, particularly within BOX-PCR groups, putative virulence genes located in plasmids (celA, pat-1) or in a pathogenicity island in the chromosome (tomA, chpC, chpG and ppaA) were detected in all strains. Our results suggest that new strains introduced every year via seed importation might be coexisting with others persisting locally. This study highlights the importance of preventive measures to manage tomato bacterial wilt and canker.", "title": "" }, { "docid": "bbfc488e55fe2dfaff2af73a75c31edd", "text": "This overview covers a wide range of cannabis topics, initially examining issues in dispensaries and self-administration, plus regulatory requirements for production of cannabis-based medicines, particularly the Food and Drug Administration \"Botanical Guidance.\" The remainder pertains to various cannabis controversies that certainly require closer examination if the scientific, consumer, and governmental stakeholders are ever to reach consensus on safety issues, specifically: whether botanical cannabis displays herbal synergy of its components, pharmacokinetics of cannabis and dose titration, whether cannabis medicines produce cyclo-oxygenase inhibition, cannabis-drug interactions, and cytochrome P450 issues, whether cannabis randomized clinical trials are properly blinded, combatting the placebo effect in those trials via new approaches, the drug abuse liability (DAL) of cannabis-based medicines and their regulatory scheduling, their effects on cognitive function and psychiatric sequelae, immunological effects, cannabis and driving safety, youth usage, issues related to cannabis smoking and vaporization, cannabis concentrates and vape-pens, and laboratory analysis for contamination with bacteria and heavy metals. Finally, the issue of pesticide usage on cannabis crops is addressed. New and disturbing data on pesticide residues in legal cannabis products in Washington State are presented with the observation of an 84.6% contamination rate including potentially neurotoxic and carcinogenic agents. With ongoing developments in legalization of cannabis in medical and recreational settings, numerous scientific, safety, and public health issues remain.", "title": "" }, { "docid": "753e0af8b59c8bfd13b63c3add904ffe", "text": "Background: Surgery of face and parotid gland may cause injury to branches of the facial nerve, which results in paralysis of muscles of facial expression. Knowledge of branching patterns of the facial nerve and reliable landmarks of the surrounding structures are essential to avoid this complication. 
Objective: Determine the facial nerve branching patterns, the course of the marginal mandibular branch (MMB), and the extraparotid ramification in relation to the lateral palpebral line (LPL). Materials and methods: One hundred cadaveric half-heads were dissected for determining the facial nerve branching patterns according to the presence of anastomosis between branches. The course of the MMB was followed until it entered the depressor anguli oris in 49 specimens. The vertical distance from the mandibular angle to this branch was measured. The horizontal distance from the LPL to the otobasion superious (LPL-OBS) and the apex of the parotid gland (LPL-AP) were measured in 52 specimens. Results: The branching patterns of the facial nerve were categorized into six types. The least common (1%) was type I (absent of anastomosis), while type V, the complex pattern was the most common (29%). Symmetrical branching pattern occurred in 30% of cases. The MMB was coursing below the lower border of the mandible in 57% of cases. The mean vertical distance was 0.91±0.22 cm. The mean horizontal distances of LPL-OBS and LPLAP were 7.24±0.6 cm and 3.95±0.96 cm, respectively. The LPL-AP length was 54.5±11.4% of LPL-OBS. Conclusion: More complex branching pattern of the facial nerve was found in this population and symmetrical branching pattern occurred less of ten. The MMB coursed below the lower border of the angle of mandible with a mean vertical distance of one centimeter. The extraparotid ramification of the facial nerve was located in the area between the apex of the parotid gland and the LPL.", "title": "" }, { "docid": "9c18c6c79c8588e587dc1061eae7fa21", "text": "BACKGROUND\nThe safety and tolerability of the selective serotonin reuptake inhibitors and the newer atypical agents have led to a significant increase in antidepressant use. These changes raise concern as to the likelihood of a corresponding increase in adverse behavioral reactions attributable to these drugs.\n\n\nMETHOD\nAll admissions to a university-based general hospital psychiatric unit during a 14-month period were reviewed.\n\n\nRESULTS\nForty-three (8.1%) of 533 patients were found to have been admitted owing to antidepressant-associated mania or psychosis.\n\n\nCONCLUSION\nDespite the positive changes in the side effect profile of antidepressant drugs, the rate of admissions due to antidepressant-associated adverse behavioral effects remains significant.", "title": "" }, { "docid": "dbb4540af2166d4292253b17ce1ff68f", "text": "On average, men outperform women on mental rotation tasks. Even boys as young as 4 1/2 perform better than girls on simplified spatial transformation tasks. The goal of our study was to explore ways of improving 5-year-olds' performance on a spatial transformation task and to examine the strategies children use to solve this task. We found that boys performed better than girls before training and that both boys and girls improved with training, whether they were given explicit instruction or just practice. Regardless of training condition, the more children gestured about moving the pieces when asked to explain how they solved the spatial transformation task, the better they performed on the task, with boys gesturing about movement significantly more (and performing better) than girls. 
Gesture thus provides useful information about children's spatial strategies, raising the possibility that gesture training may be particularly effective in improving children's mental rotation skills.", "title": "" }, { "docid": "773a46b340c1d98012c8c00c72308359", "text": "The complexity of many image processing applications and their stringent performance requirements have come to a point where they can no longer meet the real time deadlines, if implemented on conventional architectures based on a single general-purpose processor. Acceleration of these algorithms can be done by parallel computing. Parallelism can be accomplished both at hardware and software levels by various tools and methodologies. The various methods hence discussed prove to be helpful and thus a combination of both the custom hardware and software tool helps in speeding up the image processing algorithm. Different methodologies that can be used for parallel computation are discussed.", "title": "" }, { "docid": "8b4285fa5b46b2eb58a06e5f5ba46b1e", "text": "Many firms develop an information technology strategy that includes the use of business intelligence software in the decision making process. In order to really achieve a solid return on investment on this type of software, the firm should have at least 10 years of detailed data on sales, purchases, staff costs, and other items that impact the overall cost of providing a service or good. Data cubes and reports can then be built to show trends, identify product success and failures, and provide a more holistic view of company activity. This paper describes such software “Business Intelligence System for Banking and Finance”.", "title": "" }, { "docid": "ee38062c7c479cfc9d8e9fc0982a9ae3", "text": "Integrating data from heterogeneous sources is often modeled as merging graphs. Given two ormore “compatible”, but not-isomorphic graphs, the first step is to identify a graph alignment, where a potentially partial mapping of vertices between two graphs is computed. A significant portion of the literature on this problem only takes the global structure of the input graphs into account. Only more recent ones additionally use vertex and edge attributes to achieve a more accurate alignment. However, these methods are not designed to scale to map large graphs arising in many modern applications. We propose a new iterative graph aligner, gsaNA, that uses the global structure of the graphs to significantly reduce the problem size and align large graphs with a minimal loss of information. Concretely, we show that our proposed technique is highly flexible, can be used to achieve higher recall, and it is orders of magnitudes faster than the current state of the art techniques. ACM Reference format: Abdurrahman Yaşar and Ümit V. Çatalyürek. 2018. An Iterative Global Structure-Assisted Labeled Network Aligner. In Proceedings of Special Interest Group on Knowledge Discovery and Data Mining, London, England, August 18 (SIGKDD’18), 10 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn", "title": "" } ]
scidocsrr
34408649dc78618fc3a17e2e44de7d88
Artificial neural networks in business: Two decades of research
[ { "docid": "4eda5bc4f8fa55ae55c69f4233858fc7", "text": "In this paper, we set out to compare several techniques that can be used in the analysis of imbalanced credit scoring data sets. In a credit scoring context, imbalanced data sets frequently occur as the number of defaulting loans in a portfolio is usually much lower than the number of observations that do not default. As well as using traditional classification techniques such as logistic regression, neural networks and decision trees, this paper will also explore the suitability of gradient boosting, least square support vector machines and random forests for loan default prediction. Five real-world credit scoring data sets are used to build classifiers and test their performance. In our experiments, we progressively increase class imbalance in each of these data sets by randomly undersampling the minority class of defaulters, so as to identify to what extent the predictive power of the respective techniques is adversely affected. The performance criterion chosen to measure this effect is the area under the receiver operating characteristic curve (AUC); Friedman’s statistic and Nemenyi post hoc tests are used to test for significance of AUC differences between techniques. The results from this empirical study indicate that the random forest and gradient boosting classifiers perform very well in a credit scoring context and are able to cope comparatively well with pronounced class imbalances in these data sets. We also found that, when faced with a large class imbalance, the C4.5 decision tree algorithm, quadratic discriminant analysis and k-nearest neighbours perform significantly worse than the best performing classifiers. 2011 Elsevier Ltd.", "title": "" }, { "docid": "6f46e0d6ea3fb99c6e6a1d5907995e87", "text": "The study of financial markets has been addressed in many works during the last years. Different methods have been used in order to capture the non-linear behavior which is characteristic of these complex systems. The development of profitable strategies has been associated with the predictive character of the market movement, and special attention has been devoted to forecast the trends of financial markets. This work performs a predictive study of the principal index of the Brazilian stock market through artificial neural networks and the adaptive exponential smoothing method, respectively. The objective is to compare the forecasting performance of both methods on this market index, and in particular, to evaluate the accuracy of both methods to predict the sign of the market returns. Also the influence on the results of some parameters associated to both methods is studied. Our results show that both methods produce similar results regarding the prediction of the index returns. On the contrary, the neural networks outperform the adaptive exponential smoothing method in the forecasting of the market movement, with relative hit rates similar to the ones found in other developed markets. 2009 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "fcbfa224b2708839e39295f24f4405e1", "text": "A dataset is imbalanced if the classification categories are not approximately equally represented. Recent years brought increased interest in applying machine learning techniques to difficult \"real-world\" problems, many of which are characterized by imbalanced data. Additionally the distribution of the testing data may differ from that of the training data, and the true misclassification costs may be unknown at learning time. Predictive accuracy, a popular choice for evaluating performance of a classifier, might not be appropriate when the data is imbalanced andlor the costs of different errors vary markedly. In this Chapter, we discuss some of the sampling techniques used for balancing the datasets, and the performance measures more appropriate for mining imbalanced datasets.", "title": "" }, { "docid": "3bd639feecf4194c73c3e20ae4ef8203", "text": "We present an optimized implementation of the Fan-Vercauteren variant of Brakerski’s scale-invariant homomorphic encryption scheme. Our algorithmic improvements focus on optimizing decryption and homomorphic multiplication in the Residue Number System (RNS), using the Chinese Remainder Theorem (CRT) to represent and manipulate the large coefficients in the ciphertext polynomials. In particular, we propose efficient procedures for scaling and CRT basis extension that do not require translating the numbers to standard (positional) representation. Compared to the previously proposed RNS design due to Bajard et al. [3], our procedures are simpler and faster, and introduce a lower amount of noise. We implement our optimizations in the PALISADE library and evaluate the runtime performance for the range of multiplicative depths from 1 to 100. For example, homomorphic multiplication for a depth-20 setting can be executed in 62 ms on a modern server system, which is already practical for some outsourced-computing applications. Our algorithmic improvements can also be applied to other scale-invariant homomorphic encryption schemes, such as YASHE.", "title": "" }, { "docid": "c128f4a9b3ea59215234b96573b1f266", "text": "Intraocular pressure (IOP) is important for the prevention and treatment of certain human eye diseases. For example, glaucoma is the second leading cause of blindness in the world according to the World Health Organization [1]. The majority of glaucoma patients have an IOP >; 20 mmHg (compared with a normal IOP of 10 mmHg), which could damage patients optic nerves in the backside of the eye and cause irreversible blindness. Currently, there is no cure for glaucoma, but with early diagnosis and proper treatment, the visual loss can be slowed or eliminated. Due to the lack of other symptoms or pain, and the eye's ability to compensate for loss of peripheral vision, many glaucoma patients are unaware of the disease's development until it is severe. In fact, only half of the patients in the United States are aware of having glaucoma. Therefore, early diagnosis and treatment are important to prevent blindness. Thus, a device to diagnose early-stage glaucoma is in demand.", "title": "" }, { "docid": "49c19e5417aa6a01c59f666ba7cc3522", "text": "The effect of various drugs on the extracellular concentration of dopamine in two terminal dopaminergic areas, the nucleus accumbens septi (a limbic area) and the dorsal caudate nucleus (a subcortical motor area), was studied in freely moving rats by using brain dialysis. 
Drugs abused by humans (e.g., opiates, ethanol, nicotine, amphetamine, and cocaine) increased extracellular dopamine concentrations in both areas, but especially in the accumbens, and elicited hypermotility at low doses. On the other hand, drugs with aversive properties (e.g., agonists of kappa opioid receptors, U-50,488, tifluadom, and bremazocine) reduced dopamine release in the accumbens and in the caudate and elicited hypomotility. Haloperidol, a neuroleptic drug, increased extracellular dopamine concentrations, but this effect was not preferential for the accumbens and was associated with hypomotility and sedation. Drugs not abused by humans [e.g., imipramine (an antidepressant), atropine (an antimuscarinic drug), and diphenhydramine (an antihistamine)] failed to modify synaptic dopamine concentrations. These results provide biochemical evidence for the hypothesis that stimulation of dopamine transmission in the limbic system might be a fundamental property of drugs that are abused.", "title": "" }, { "docid": "1176abf11f866dda3a76ce080df07c05", "text": "Google Flu Trends can detect regional outbreaks of influenza 7-10 days before conventional Centers for Disease Control and Prevention surveillance systems. We describe the Google Trends tool, explain how the data are processed, present examples, and discuss its strengths and limitations. Google Trends shows great promise as a timely, robust, and sensitive surveillance system. It is best used for surveillance of epidemics and diseases with high prevalences and is currently better suited to track disease activity in developed countries, because to be most effective, it requires large populations of Web search users. Spikes in search volume are currently hard to interpret but have the benefit of increasing vigilance. Google should work with public health care practitioners to develop specialized tools, using Google Flu Trends as a blueprint, to track infectious diseases. Suitable Web search query proxies for diseases need to be established for specialized tools or syndromic surveillance. This unique and innovative technology takes us one step closer to true real-time outbreak surveillance.", "title": "" }, { "docid": "03fc8841a0dfad9e6027dfd9e9263a7f", "text": "Although the literature on alternatives to effect indicators is growing, there has been little attention given to evaluating causal and composite (formative) indicators. This paper provides an overview of this topic by contrasting ways of assessing the validity of effect and causal indicators in structural equation models (SEMs). It also draws a distinction between composite (formative) indicators and causal indicators and argues that validity is most relevant to the latter. Sound validity assessment of indicators is dependent on having an adequate overall model fit and on the relative stability of the parameter estimates for the latent variable and indicators as they appear in different models. If the overall fit and stability of estimates are adequate, then a researcher can assess validity using the unstandardized and standardized validity coefficients and the unique validity variance estimate. With multiple causal indicators or with effect indicators influenced by multiple latent variables, collinearity diagnostics are useful. 
These results are illustrated with a number of correctly and incorrectly specified hypothetical models.", "title": "" }, { "docid": "150f4a4f06424b756d14be417c707df3", "text": "Spiders occupy most of the ecological niches of the planet, revealing a huge adaptive plasticity, reflected in the chemical diversity of their venom toxins. The spiders are distributed throughout the planet, adapting themselves to many P.C. Gomes Department of Biology/CEIS/Institute of Biosciences of Rio Claro, University of São Paulo State (UNESP), Rio Claro, SP, Brazil e-mail: pccesar@rc.unesp.br M.S. Palma (*) Department of Biology, CEIS, Laboratory of Structural Biology and Zoochemistry, Sao Paulo, State University (UNESP), Institute of Biosciences, Rio Claro, SP, Brazil e-mail: mspalma@rc.unesp.br # Springer Science+Business Media Dordrecht 2016 P. Gopalakrishnakone et al. (eds.), Spider Venoms, Toxinology, DOI 10.1007/978-94-007-6389-0_14 3 different environments, to form the largest taxonomic group of organisms with a diet exclusively carnivorous. The organic low-molecular-mass compounds present in spider venoms are used both for defensive purposes and to paralyze/kill their preys. Among the low-molecular-mass organic compounds present in spider venoms, the most common ones are free organic acids, amino acids, biogenic amines, and neurotransmitters. These compounds were also used in the course of evolution as substrates for the biosynthesis of novel spider toxins, which were neglected by the toxinology during a long time, mainly due to the difficulties to isolate and to assign the chemical structures of very low abundant compounds. However, the recent technological advances in the spectroscopic techniques used for structural analysis of small molecules allowed the structural elucidation of many of these toxins in spider venoms, permitting the identification of at least six families of low-molecular-mass toxins in spider venoms: (i) acylpolyamines, (ii) nucleoside analogs, (iii) bis(agmatine)oxalamide, (iv) the betacarboline alkaloids, (v) organometallic diazenaryl compounds, and (vi) dioxopiperidinic analogs. Investigations of structure/activity relationship of these toxins revealed that some of them have been identified both as interesting tools for chemical investigations in neurobiology and as potential models for the rational development of novel drugs for neurotherapeutic uses, as well as for developing specific insecticides. List of Abbreviations 13C Carbon-13 1H Hydrogen-1 ALS Amyotrophic lateral sclerosis AMPA α-Amino-3-hydroxy-5-methylisoxasole-4-propionic acid CID Collisional-induced dissociation CNS Central nervous systems COSY Homonuclear correlation spectroscopy dqf COSY Double-quantum-filter-COSY ESI-MS Electron spray ionization mass spectrometry FRIT-FAB Continuous-flow fast atom bombardment FTX Funnel web toxin GABA Gamma-aminobutyric acid Glu-R Glutamate receptor HMBC Heteronuclear multiple bond coherence HMQC Heteronuclear multiple quantum coherence HPLC High-performance liquid chromatography HRMS High-resolution mass spectrometry JSTX Joro spider toxin KA Kainic acid kDa Kilodalton L-Arg-3,4 L-Arginyl-3,4-spermidine LC-MS Liquid chromatography mass spectrometry LMM Low molecular mass 4 P.C. Gomes and M.S. Palma", "title": "" }, { "docid": "6300234fd4ed55285459b8561b5c0ed0", "text": "In conventional power system operation, droop control methods are used to facilitate load sharing among different generation sources. 
This method compensates for both active and reactive power imbalances by adjusting the output voltage magnitude and frequency of the generating unit. Both P-ω and Q-V droops have been used in synchronous machines for decades. Similar droop controllers were used in this study to develop a control algorithm for a three-phase isolated (islanded) inverter. Controllers modeled in a synchronous dq reference frame were simulated in PLECS and validated with the hardware setup. A small-signal model based on an averaged model of the inverter was developed to study the system's dynamics. The accuracy of this mathematical model was then verified using the data obtained from the experimental and simulation results. This validated model is a useful tool for the further dynamic analysis of a microgrid.", "title": "" }, { "docid": "82e3ea7c86952d3fce88cdcea39a9bdf", "text": "Many efforts have been paid to enhance the security of Android. However, less attention has been given to how to practically adopt the enhancements on off-the-shelf devices. In particular, securing Android devices often requires modifying their write-protected underlying system component files (especially the system libraries) by flashing or rooting devices, which is unacceptable in many realistic cases. In this paper, a novel technique, called reference hijacking, is presented to address the problem. By introducing a specially designed reset procedure, a new execution environment is constructed for the target application, in which the reference to the underlying system libraries will be redirected to the security-enhanced alternatives. The technique can be applicable to both the Dalvik and Android Runtime (ART) environments and to almost all mainstream Android versions (2.x to 5.x). To demonstrate the capability of reference hijacking, we develop three prototype systems, PatchMan, ControlMan, and TaintMan, to enforce specific security enhancements, involving patching vulnerabilities, protecting inter-component communications, and performing dynamic taint analysis for the target application. These three prototypes have been successfully deployed on a number of popular Android devices from different manufacturers, without modifying the underlying system. The evaluation results show that they are effective and do not introduce noticeable overhead. They strongly support that reference hijacking can substantially improve the practicability of many security enhancement efforts for Android.", "title": "" }, { "docid": "7e557091d8cfe6209b1eda3b664ab551", "text": "With the increasing penetration of mobile phones, problematic use of mobile phone (PUMP) deserves attention. In this study, using a path model we examined the relationship between depression and PUMP, with motivations as mediators. Findings suggest that depressed people may rely on mobile phone to alleviate their negative feelings and spend more time on communication activities via mobile phone, which in turn can deteriorate into PUMP. However, face-to-face communication with others played a moderating role, weakening the link between use of mobile phone for communication activities and dete-", "title": "" }, { "docid": "3a90b8f46a8db30438ff54e5bd5e6b4c", "text": "To address the lack of systematic research on the nature and effectiveness of online retailing, a conceptual model is proposed which examines the potential influence of atmospheric qualities of a virtual store. 
The underlying premise is that, given the demonstrated impact of store environment on shopper behaviors and outcomes in a traditional retailing setting, such atmospheric cues are likely to play a role in the online shopping context. A Stimulus–Organism–Response (S–O–R) framework is used as the basis of the model which posits that atmospheric cues of the online store, through the intervening effects of affective and cognitive states, influence the outcomes of online retail shopping in terms of approach/avoidance behaviors. Two individual traits, involvement and atmospheric responsiveness, are hypothesized to moderate the relationship between atmospheric cues and shoppers’ affective and cognitive reactions. Propositions are derived and the research implications of the model are presented. D 2001 Elsevier Science Inc. All rights reserved.", "title": "" }, { "docid": "683107abf87d68a9bb6ab5a22e24cb99", "text": "We present supertagging-based models for Tree Adjoining Grammar parsing that use neural network architectures and dense vector representation of supertags (elementary trees) to achieve state-of-the-art performance in unlabeled and labeled attachment scores. The shift-reduce parsing model eschews lexical information entirely, and uses only the 1-best supertags to parse a sentence, providing further support for the claim that supertagging is “almost parsing.” We demonstrate that the embedding vector representations the parser induces for supertags possess linguistically interpretable structure, supporting analogies between grammatical structures like those familiar from recent work in distributional semantics. This dense representation of supertags overcomes the drawbacks for statistical models of TAG as compared to CCG parsing, raising the possibility that TAG is a viable alternative for NLP tasks that require the assignment of richer structural descriptions to sentences.", "title": "" }, { "docid": "b454900556cc392edd39b888de746298", "text": "As developers of a highly multilingual named entity recognition (NER) system, we face an evaluation resource bottleneck problem: we need evaluation data in many languages, the annotation should not be too time-consuming, and the evaluation results across languages should be comparable. We solve the problem by automatically annotating the English version of a multi-parallel corpus and by projecting the annotations into all the other language versions. For the translation of English entities, we use a phrase-based statistical machine translation system as well as a lookup of known names from a multilingual name database. For the projection, we incrementally apply different methods: perfect string matching, perfect consonant signature matching and edit distance similarity. The resulting annotated parallel corpus will be made available for reuse.", "title": "" }, { "docid": "6465daca71e18cb76ec5442fb94f625a", "text": "In this paper, we show how an open-source, language-independent proofreading tool has been built. Many languages lack contextual proofreading tools; for many others, only partial solutions are available. Using existing, largely language-independent tools and collaborative processes it is possible to develop a practical style and grammar checker and to fight the digital divide in countries where commercial linguistic application software is unavailable or too expensive for average users. 
The described solution depends on relatively easily available language resources and does not require a fully formalized grammar nor a deep parser, yet it can detect many frequent context-dependent spelling mistakes, as well as grammatical, punctuation, usage, and stylistic errors. Copyright q 2010 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "79ba789f34485b93accff0756e62657f", "text": "We describe a robust but simple algorithm to reconstruct a surface from a set of merged range scans. Our key contribution is the formulation of the surface reconstruction problem as an energy minimisation problem that explicitly models the scanning process. The adaptivity of the Delaunay triangulation is exploited by restricting the energy to inside/outside labelings of Delaunay tetrahedra. Our energy measures both the output surface quality and how well the surface agrees with soft visibility constraints. Such energy is shown to perfectly fit into the minimum s-t cuts optimisation framework, allowing fast computation of a globally optimal tetrahedra labeling, while avoiding the “shrinking bias” that usually plagues graph cuts methods. The behaviour of our method confronted to noise, undersampling and outliers is evaluated on several data sets and compared with other methods through different experiments: its strong robustness would make our method practical not only for reconstruction from range data but also from typically more difficult dense point clouds, resulting for instance from stereo image matching. Our effective modeling of the surface acquisition inverse problem, along with the unique combination of Delaunay triangulation and minimum s-t cuts, makes the computational requirements of the algorithm scale well with respect to the size of the input point cloud.", "title": "" }, { "docid": "16ff4e6bef26c6c64e204373c657aa26", "text": "We present the Mim-Solution's approach to the RecSys Challenge 2016, which ranked 2nd. The goal of the competition was to prepare job recommendations for the users of the website Xing.com.\n Our two phase algorithm consists of candidate selection followed by the candidate ranking. We ranked the candidates by the predicted probability that the user will positively interact with the job offer. We have used Gradient Boosting Decision Trees as the regression tool.", "title": "" }, { "docid": "f57830bce43b5e2518b8730ed2a648b6", "text": "We provide an overview of an architecture of today's Internet streaming media delivery networks and describe various problems that such systems pose with regard to video coding. We demonstrate that based on the distribution model (live or on-demand), the type of network delivery mechanism (unicast vs. multicast), and the optimization criteria associated with particular segments of the network (e.g. minimization of distortion for a given connection rate, minimization of traffic in the dedicated delivery network, etc.), it is possible to identify several models of communication that require different treatment from both source and channel coding perspectives. We explain how some of these problems can be addressed using a conventional framework of temporal motion-compensated, transform-based video compression algorithm, supported by appropriate channel-adaptation mechanisms in client and server components of a streaming media system. Most of these techniques have already been implemented in RealNetworks RealSystem 8 and its RealVideo 8 codec, which we are using to illustrate our results. 
We also provide a comparative study of the efficiency of our RealVideo 8 algorithm, and report improvements on the order of 0.5-2.0 dB relative to ITU-T H.263+ algorithm, and around 0.5-1.0 dB compared to ISO MPEG-4 codec (see Fig. 1.).", "title": "" }, { "docid": "294ac617bbd49afe95c278836fa4c9ec", "text": "We present a practical lock-free shared data structure that efficiently implements the operations of a concurrent deque as well as a general doubly linked list. The implementation supports parallelism for disjoint accesses and uses atomic primitives which are available in modern computer systems. Previously known lock-free algorithms of doubly linked lists are either based on non-available atomic synchronization primitives, only implement a subset of the functionality, or are not designed for disjoint accesses. Our algorithm only requires single-word compare-and-swap atomic primitives, supports fully dynamic list sizes, and allows traversal also through deleted nodes and thus avoids unnecessary operation retries. We have performed an empirical study of our new algorithm on two different multiprocessor platforms. Results of the experiments performed under high contention show that the performance of our implementation scales linearly with increasing number of processors. Considering deque implementations and systems with low concurrency, the algorithm by Michael shows the best performance. However, as our algorithm is designed for disjoint accesses, it performs significantly better on systems with high concurrency and non-uniform memory architecture. © 2008 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "3780ce6cea2524892c6a08dbd9407af9", "text": "Recent efforts in spatial and temporal data models and database systems attempt to achieve an appropriate kind of interaction between the two areas. This paper reviews the different types of spatio-temporal data models that have been proposed in the literature as well as new theories and concepts that have emerged. It provides an overview of previous achievements within the domain and critically evaluates the various approaches through the use of a case study and the construction of a comparison framework. This comparative review is followed by a comprehensive description of the new lines of research that emanate from the latest efforts inside the spatio-temporal research community.", "title": "" }, { "docid": "df609125f353505fed31eee302ac1742", "text": "We present a method for 3D object detection and pose estimation from a single image. In contrast to current techniques that only regress the 3D orientation of an object, our method first regresses relatively stable 3D object properties using a deep convolutional neural network and then combines these estimates with geometric constraints provided by a 2D object bounding box to produce a complete 3D bounding box. The first network output estimates the 3D object orientation using a novel hybrid discrete-continuous loss, which significantly outperforms the L2 loss. The second output regresses the 3D object dimensions, which have relatively little variance compared to alternatives and can often be predicted for many object types. These estimates, combined with the geometric constraints on translation imposed by the 2D bounding box, enable us to recover a stable and accurate 3D object pose. We evaluate our method on the challenging KITTI object detection benchmark [2] both on the official metric of 3D orientation estimation and also on the accuracy of the obtained 3D bounding boxes. 
Although conceptually simple, our method outperforms more complex and computationally expensive approaches that leverage semantic segmentation, instance level segmentation and flat ground priors [4] and sub-category detection [23][24]. Our discrete-continuous loss also produces state of the art results for 3D viewpoint estimation on the Pascal 3D+ dataset[26].", "title": "" } ]
scidocsrr
7159a0b463dfff86ca78a8c0e097461f
Modeling, Control, and Flight Testing of a Small Ducted Fan Aircraft
[ { "docid": "57acb451289f7a55086a531e87b3437a", "text": "For autonomous helicopter flight, it is common to separate the flight control problem into an inner loop that controls attitude and an outer loop that controls the translational trajectory of the helicopter. In previous work, dynamic inversion and neural-network-based adaptation was used to increase performance of the attitude control system and the method of pseudocontrol hedging (PCH) was used to protect the adaptation process from actuator limits and dynamics. Adaptation to uncertainty in the attitude, as well as the translational dynamics, is introduced, thus minimizing the effects of model error in all six degrees of freedom and leading to more accurate position tracking. The PCH method is used in a novel way that enables adaptation to occur in the outer loop without interacting with the attitude dynamics. A pole-placement approach is used that alleviates timescale separation requirements, allowing the outer loop bandwidth to be closer to that of the inner loop, thus, increasing position tracking performance. A poor model of the attitude dynamics and a basic kinematics model is shown to be sufficient for accurate position tracking. The theory and implementation of such an approach, with a summary of flight test results, are described.", "title": "" } ]
[ { "docid": "f1e5e00fe3a0610c47918de526e87dc6", "text": "The current paper reviews research that has explored the intergenerational effects of the Indian Residential School (IRS) system in Canada, in which Aboriginal children were forced to live at schools where various forms of neglect and abuse were common. Intergenerational IRS trauma continues to undermine the well-being of today's Aboriginal population, and having a familial history of IRS attendance has also been linked with more frequent contemporary stressor experiences and relatively greater effects of stressors on well-being. It is also suggested that familial IRS attendance across several generations within a family appears to have cumulative effects. Together, these findings provide empirical support for the concept of historical trauma, which takes the perspective that the consequences of numerous and sustained attacks against a group may accumulate over generations and interact with proximal stressors to undermine collective well-being. As much as historical trauma might be linked to pathology, it is not possible to go back in time to assess how previous traumas endured by Aboriginal peoples might be related to subsequent responses to IRS trauma. Nonetheless, the currently available research demonstrating the intergenerational effects of IRSs provides support for the enduring negative consequences of these experiences and the role of historical trauma in contributing to present day disparities in well-being.", "title": "" }, { "docid": "8fe78f684d75005477e3a4b1e6cf78d1", "text": "Yamazaki et al. [1] investigated the effect of prediabetes on subsequent pancreatic fat accumulation, based on the hypothesis that pancreatic fat was a manifestation of disturbed glucose metabolism. Prediabetes was defined as fasting plasma glucose of 100–125 mg/dl or hemoglobin A1c of 5.7–6.4%, and the change of pancreatic fat was evaluated by computed tomography (CT). A total of 198 nondiabetic participants were composed of 48 prediabetes and 150 non-prediabetes participants. By multiple linear regression analysis, baseline prediabetes was associated with future pancreatic fat accumulation with beta value (95% confidence interval) of 3.14 (1.25–5.03). In addition, body mass index (BMI) and impaired fasting glucose (IFG) were also risk factors of pancreatic fat accumulation. I have some queries on their study. First, the authors used prediabetes or IFG as an independent variable for the change of pancreatic fat accumulation, by adjusting several variables. As impaired glucose tolerance (IGT) value could not be used as an independent variable, the lack of IGT information for the definition of prediabetes should be specified by further study [2]. Second, BMI was selected as a significant independent variable for the change of pancreatic fat accumulation. I suppose that the amount of visceral fat at baseline by CT could also be used as an independent variable. Although liver fat did not become a predictor, visceral fat as another obesity indicator should be checked for the analysis [3]. Finally, the authors selected multiple linear regression analysis. I think that the authors could use prediabetes indictors at baseline as continuous variables. In addition, the change of prediabetes information can be used in combination with the change of pancreatic fat accumulation. 
Anyway, further studies are needed to know the causal association to confirm the hypothesis that pancreatic fat is a manifestation of disturbed glucose metabolism.", "title": "" }, { "docid": "416a3d01c713a6e751cb7893c16baf21", "text": "BACKGROUND\nAnaemia is associated with poor cancer control, particularly in patients undergoing radiotherapy. We investigated whether anaemia correction with epoetin beta could improve outcome of curative radiotherapy among patients with head and neck cancer.\n\n\nMETHODS\nWe did a multicentre, double-blind, randomised, placebo-controlled trial in 351 patients (haemoglobin <120 g/L in women or <130 g/L in men) with carcinoma of the oral cavity, oropharynx, hypopharynx, or larynx. Patients received curative radiotherapy at 60 Gy for completely (R0) and histologically incomplete (R1) resected disease, or 70 Gy for macroscopically incompletely resected (R2) advanced disease (T3, T4, or nodal involvement) or for primary definitive treatment. All patients were assigned to subcutaneous placebo (n=171) or epoetin beta 300 IU/kg (n=180) three times weekly, from 10-14 days before and continuing throughout radiotherapy. The primary endpoint was locoregional progression-free survival. We assessed also time to locoregional progression and survival. Analysis was by intention to treat.\n\n\nFINDINGS\n148 (82%) patients given epoetin beta achieved haemoglobin concentrations higher than 140 g/L (women) or 150 g/L (men) compared with 26 (15%) given placebo. However, locoregional progression-free survival was poorer with epoetin beta than with placebo (adjusted relative risk 1.62 [95% CI 1.22-2.14]; p=0.0008). For locoregional progression the relative risk was 1.69 (1.16-2.47, p=0.007) and for survival was 1.39 (1.05-1.84, p=0.02).\n\n\nINTERPRETATION\nEpoetin beta corrects anaemia but does not improve cancer control or survival. Disease control might even be impaired. Patients receiving curative cancer treatment and given erythropoietin should be studied in carefully controlled trials.", "title": "" }, { "docid": "2693a2815adf4e731d87f9630cd7c427", "text": "A new fuzzy filter is presented for the noise reduction of images corrupted with additive noise. The filter consists of two stages. The first stage computes a fuzzy derivative for eight different directions. The second stage uses these fuzzy derivatives to perform fuzzy smoothing by weighting the contributions of neighboring pixel values. Both stages are based on fuzzy rules which make use of membership functions. The filter can be applied iteratively to effectively reduce heavy noise. In particular, the shape of the membership functions is adapted according to the remaining noise level after each iteration, making use of the distribution of the homogeneity in the image. A statistical model for the noise distribution can be incorporated to relate the homogeneity to the adaptation scheme of the membership functions. Experimental results are obtained to show the feasibility of the proposed approach. These results are also compared to other filters by numerical measures and visual inspection.", "title": "" }, { "docid": "ed77ce10f448cb58568a63089903a4a8", "text": "Sentence representation at the semantic level is a challenging task for Natural Language Processing and Artificial Intelligence. Despite the advances in word embeddings (i.e. word vector representations), capturing sentence meaning is an open question due to complexities of semantic interactions among words. 
In this paper, we present an embedding method aimed at learning unsupervised sentence representations from unlabeled text. We propose an unsupervised method that models a sentence as a weighted series of word embeddings. The weights of the word embeddings are fitted using Shannon's word entropies provided by the Term Frequency–Inverse Document Frequency (TF–IDF) transform. The hyperparameters of the model can be selected according to the properties of the data (e.g. sentence length and textual genre). Hyperparameter selection involves word embedding methods and dimensionalities, as well as weighting schemata. Our method offers advantages over existing methods: identifiable modules, short-term training, online inference of (unseen) sentence representations, as well as independence from domain, external knowledge and language resources. Results showed that our model outperformed the state of the art in well-known Semantic Textual Similarity (STS) benchmarks. Moreover, our model reached state-of-the-art performance when compared to supervised and knowledge-based STS systems.", "title": "" }, { "docid": "40495cc96353f56481ed30f7f5709756", "text": "This paper reports the construction of a partial discharge measurement system under the influence of a cylindrical metal particle in transformer oil. The partial discharge of a free cylindrical metal particle in a uniform electric field under AC applied voltage was studied. The partial discharge inception voltage (PDIV) for the single particle was measured to be 11 kV. The typical waveforms of positive and negative PD were also obtained. The results show that the magnitude of negative PD is higher than that of positive PD. Observation of the cylindrical metal particle's movement revealed several distinct stages in the motion process.", "title": "" }, { "docid": "ca8c40d523e0c64f139ae2a3221e8ea4", "text": "We propose Mixcoin, a protocol to facilitate anonymous payments in Bitcoin and similar cryptocurrencies. We build on the emergent phenomenon of currency mixes, adding an accountability mechanism to expose theft. We demonstrate that incentives of mixes and clients can be aligned to ensure that rational mixes will not steal. Our scheme is efficient and fully compatible with Bitcoin. Against a passive attacker, our scheme provides an anonymity set of all other users mixing coins contemporaneously. This is an interesting new property with no clear analog in better-studied communication mixes. 
Against active attackers our scheme offers similar anonymity to traditional communication mixes.", "title": "" }, { "docid": "5bb63d07c8d7c743c505e6fd7df3dc4f", "text": "XML similarity evaluation has become a central issue in the database and information communities, its applications ranging over document clustering, version control, data integration and ranked retrieval. Various algorithms for comparing hierarchically structured data, XML documents in particular, have been proposed in the literature. Most of them make use of techniques for finding the edit distance between tree structures, XML documents being commonly modeled as Ordered Labeled Trees. Yet, a thorough investigation of current approaches led us to identify several similarity aspects, i.e., sub-tree related structural and semantic similarities, which are not sufficiently addressed while comparing XML documents. In this paper, we provide an integrated and fine-grained comparison framework to deal with both structural and semantic similarities in XML documents (detecting the occurrences and repetitions of structurally and semantically similar sub-trees), and to allow the end-user to adjust the comparison process according to her requirements. Our framework consists of four main modules for i) discovering the structural commonalities between sub-trees, ii) identifying sub-tree semantic resemblances, iii) computing tree-based edit operations costs, and iv) computing tree edit distance. Experimental results demonstrate higher comparison accuracy with respect to alternative methods, while timing experiments reflect the impact of semantic similarity on overall system performance. © 2002 Elsevier Science. All rights reserved.", "title": "" }, { "docid": "20def85748f9d2f71cd34c4f0ca7f57c", "text": "Recent advances in artificial intelligence (AI) and machine learning, combined with developments in neuromorphic hardware technologies and ubiquitous computing, promote machines to emulate human perceptual and cognitive abilities in a way that will continue the trend of automation for several upcoming decades. Despite the gloomy scenario of automation as a job eliminator, we argue humans and machines can cross-fertilise in a way that forwards a cooperative coexistence. We build our argument on three pillars: (i) the economic mechanism of automation, (ii) the dichotomy of ‘experience’ that separates the first-person perspective of humans from artificial learning algorithms, and (iii) the interdependent relationship between humans and machines. To realise this vision, policy makers have to implement alternative educational approaches that support lifelong training and flexible job transitions.", "title": "" }, { "docid": "0cd5de737686006a5fb5530625810f0e", "text": "Conventional crack detecting inspections of structures have been mainly based on visual investigation methods. Huge and tall structures such as cable bridges, highrising towers, dams and industrial power plants are known to have its inaccessible area and limitation in field inspection due to its geometry. In some cases, inspection of critical structural members is not possible due to its spatial constraints. With rapid technical development of unmanned aerial vehicle (UAV), the limitation of conventional visual inspection could be overcome with advanced digital image processing technique. 
In this study, a crack-detecting system using a UAV and digital image processing techniques was developed as a structural inspection system to detect cracks in structures.", "title": "" }, { "docid": "06998586aa57d1f9b11f7ff37cae0afb", "text": "Solar cell designs with complex metallization geometries such as metal wrap through (MWT), interdigitated-back-contact (IBC) cells, and metal grids with non-ideal features like finger breaks, finger striations and non-uniform contact resistance, are not amenable to simple series resistance (Rs) analysis based on small unit cells. In order to accurately simulate these cells, we developed a program that captures the cell metallization geometry from rastered images/CAD files, and efficiently meshes the cell plane for finite element analysis, yielding standard data such as the I-V curve, voltage and Rs distribution. The program also features a powerful post processor that predicts the rate of change in efficiency with respect to incremental changes in the metallization pattern, opening up the possibility of intelligent computer aided design procedures.", "title": "" }, { "docid": "f55ac9e319ad8b9782a34251007a5d06", "text": "The availability in machine-readable form of descriptions of the structure of documents, as well as of the document discourse (e.g. the scientific discourse within scholarly articles), is crucial for facilitating semantic publishing and the overall comprehension of documents by both users and machines. In this paper we introduce DoCO, the Document Components Ontology, an OWL 2 DL ontology that provides a general-purpose structured vocabulary of document elements to describe both structural and rhetorical document components in RDF. In addition to the formal description of the ontology, this paper showcases its utility in practice in a variety of our own applications and other activities of the Semantic Publishing community that rely on DoCO to annotate and retrieve document components of scholarly articles.", "title": "" }, { "docid": "6e73ea43f02dc41b96e5d46bafe3541d", "text": "Learning discriminative representations for unseen person images is critical for person re-identification (ReID). Most of the current approaches learn deep representations in classification tasks, which essentially minimize the empirical classification risk on the training set. As shown in our experiments, such representations easily get over-fitted on a discriminative human body part on the training set. To gain the discriminative power on unseen person images, we propose a deep representation learning procedure named part loss network, to minimize both the empirical classification risk on training person images and the representation learning risk on unseen person images. The representation learning risk is evaluated by the proposed part loss, which automatically detects human body parts and computes the person classification loss on each part separately. Compared with traditional global classification loss, simultaneously considering part loss enforces the deep network to learn representations for different body parts and gain the discriminative power on unseen persons. Experimental results on three person ReID datasets, i.e., Market1501, CUHK03, and VIPeR, show that our representation outperforms existing deep representations.", "title": "" }, { "docid": "c033412bbc7ebb1c3d66ea7386033eec", "text": "Recent cryptanalytic attacks have exposed the vulnerabilities of some widely used cryptographic hash functions like MD5 and SHA-1. 
Attacks in the line of differential attacks have been used to expose the weaknesses of several other hash functions like RIPEMD, HAVAL. In this paper we propose a new efficient hash algorithm that provides a near random hash output and overcomes some of the earlier weaknesses. Extensive simulations and comparisons with some existing hash functions have been done to prove the effectiveness of the BSA, which is an acronym for the name of the 3 authors.", "title": "" }, { "docid": "9e45bc3ac789fd1343e4e400b7f0218e", "text": "Due to its successful application in recommender systems, collaborative filtering (CF) has become a hot research topic in data mining and information retrieval. In traditional CF methods, only the feedback matrix, which contains either explicit feedback (also called ratings) or implicit feedback on the items given by users, is used for training and prediction. Typically, the feedback matrix is sparse, which means that most users interact with few items. Due to this sparsity problem, traditional CF with only feedback information will suffer from unsatisfactory performance. Recently, many researchers have proposed to utilize auxiliary information, such as item content (attributes), to alleviate the data sparsity problem in CF. Collaborative topic regression (CTR) is one of these methods which has achieved promising performance by successfully integrating both feedback information and item content information. In many real applications, besides the feedback and item content information, there may exist relations (also known as networks) among the items which can be helpful for recommendation. In this paper, we develop a novel hierarchical Bayesian model called Relational Collaborative Topic Regression (RCTR), which extends CTR by seamlessly integrating the user-item feedback information, item content information, and network structure among items into the same model. Experiments on real-world datasets show that our model can achieve better prediction accuracy than the state-of-the-art methods with lower empirical training time. Moreover, RCTR can learn good interpretable latent structures which are useful for recommendation.", "title": "" }, { "docid": "38a4f83778adea564e450146060ef037", "text": "The last few years have seen a surge in the number of accurate, fast, publicly available dependency parsers. At the same time, the use of dependency parsing in NLP applications has increased. It can be difficult for a non-expert to select a good “off-the-shelf” parser. We present a comparative analysis of ten leading statistical dependency parsers on a multi-genre corpus of English. For our analysis, we developed a new web-based tool that gives a convenient way of comparing dependency parser outputs. Our analysis will help practitioners choose a parser to optimize their desired speed/accuracy tradeoff, and our tool will help practitioners examine and compare parser output.", "title": "" }, { "docid": "8b5bf8cf3832ac9355ed5bef7922fb5c", "text": "Determining one's own position by means of a smartphone is an important issue for various applications in the fields of personal navigation or location-based services. Places like large airports, shopping malls or extensive underground parking lots require personal navigation but satellite signals and GPS connection cannot be obtained. Thus, alternative or complementary systems are needed. In this paper a system concept to integrate a foot-mounted inertial measurement unit (IMU) with an Android smartphone is presented. 
We developed a prototype to demonstrate and evaluate the implementation of pedestrian strapdown navigation on a smartphone. In addition to many other approaches we also fuse height measurements from a barometric sensor in order to stabilize height estimation over time. A very low-cost single-chip IMU is used to demonstrate applicability of the outlined system concept for potential commercial applications. In an experimental study we compare the achievable accuracy with a commercially available IMU. The evaluation shows very competitive results on the order of a few percent of traveled distance. Comparing performance, cost and size of the presented IMU the outlined approach carries an enormous potential in the field of indoor pedestrian navigation.", "title": "" }, { "docid": "54bc219670a65cd98a4a64c2f4605174", "text": "The purpose of this article is to describe the similarities and differences between two approaches to grounded theory research: grounded theory as espoused by Glaser and grounded theory as espoused by Strauss and Corbin. The focus of the article is the controversy surrounding the use of axial coding. The author proposes a resolution to the controversy by suggesting that one does not need to view either approach as right or wrong; rather, the qualitative and grounded theory researcher can choose an approach, and that choice is based on the goal of the researcher's study. Examples of both approaches, from the author's research study on the experiences of living in a family with a child with attention deficit hyperactivity disorder (ADHD), are provided.", "title": "" }, { "docid": "f4f70276ef59f9b206558613c95b5a8b", "text": "We present a general approach to creating realistic swimming behavior for a given articulated creature body. The two main components of our method are creature/fluid simulation and the optimization of the creature motion parameters. We simulate two-way coupling between the fluid and the articulated body by solving a linear system that matches acceleration at fluid/solid boundaries and that also enforces fluid incompressibility. The swimming motion of a given creature is described as a set of periodic functions, one for each joint degree of freedom. We optimize over the space of these functions in order to find a motion that causes the creature to swim straight and stay within a given energy budget. Our creatures can perform path following by first training appropriate turning maneuvers through offline optimization and then selecting between these motions to track the given path. We present results for a clownfish, an eel, a sea turtle, a manta ray and a frog, and in each case the resulting motion is a good match to the real-world animals. We also demonstrate a plausible swimming gait for a fictional creature that has no real-world counterpart.", "title": "" }, { "docid": "176d1eeb8dd1e366431d8ad4bb7734a1", "text": "Online, reverse auctions are increasingly being utilized in industrial sourcing activities. This phenomenon represents a novel, emerging area of inquiry with significant implications for sourcing strategies. However, there is little systematic thinking or empirical evidence on the topic. In this paper, the use of these auctions in sourcing activities is reviewed and four key aspects are highlighted: (i) the differences from physical auctions or those of the theoretical literature, (ii) the conditions for using online, reverse auctions, (iii) methods for structuring the auctions, and (iv) evaluations of auction performance. 
Some empirical evidence on these issues is also provided. For nearly the past decade, managers, analysts, researchers, and the business press have been remarking that, “The Internet will change everything.” And since the advent of the Internet, we have seen it challenge nearly every aspect of marketing practice. This raises the obligation to consider the consequences of the Internet to management practices, the theme of this special issue. Yet, it may take decades to fully understand the impact of the Internet on marketing practice, in general. This paper is one step in that direction. Specifically, I consider the impact of the Internet in a business-to-business context, the sourcing of direct and indirect materials from a supply base. It has been predicted that the Internet will bring about $1 trillion in efficiencies to the annual $7 trillion that is spent on the procurement of goods and services worldwide (USA Today, 2/7/00, B1). How and when this will happen remains an open question. However, one trend that is showing increasing promise is the use of online, reverse auctions. Virtually every major industry has begun to use and adopt these auctions on a regular basis (Smith 2002). During the late 1990s, slow-growth, manufacturing firms such as Boeing, SPX/Eaton, United Technologies, and branches of the United States military, utilized these auctions. Since then, consumer product companies such as Emerson Electronics, Nestle, and Quaker have followed suit. Even high-tech firms such as Dell, Hewlett-Packard, Intel, and Sun Microsystems have increased their usage of auctions in sourcing activities. And the intention and potential for the use of these auctions to continue to grow in the future is clear. In their annual survey of purchasing managers, Purchasing magazine found that 25% of its respondents expected to use reverse auctions in their sourcing efforts. Currently, the annual throughput in these auctions is estimated to be $40 billion; however, the addressable spend of the Global 500 firms is potentially $6.3 trillion.", "title": "" } ]
scidocsrr
c57a6eba91b8a580c51507bdbde2f9c2
Attitude estimation and control of a quadrocopter
[ { "docid": "adc9e237e2ca2467a85f54011b688378", "text": "Quadrotors are rapidly emerging as a popular platform for unmanned aerial vehicle (UAV) research, due to the simplicity of their construction and maintenance, their ability to hover, and their vertical take off and landing (VTOL) capability. Current designs have often considered only nominal operating conditions for vehicle control design. This work seeks to address issues that arise when deviating significantly from the hover flight regime. Aided by well established research for helicopter flight control, four separate aerodynamic effects are investigated as they pertain to quadrotor flight. The effects result from either translational or vertical vehicular velocity components, and cause both moments that affect attitude control and thrust variation that affects altitude control. Where possible, a theoretical development is first presented, and is then validated through both thrust test stand measurements and vehicle flight tests using the Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC) quadrotor helicopter. The results have enabled improved controller tracking throughout the flight envelope, including at higher speeds and in gusting winds.", "title": "" } ]
[ { "docid": "27bd0bccf28931032558596dd4d8c2d3", "text": "We address the problem of classification in partially labeled networks (a.k.a. within-network classification) where observed class labels are sparse. Techniques for statistical relational learning have been shown to perform well on network classification tasks by exploiting dependencies between class labels of neighboring nodes. However, relational classifiers can fail when unlabeled nodes have too few labeled neighbors to support learning (during training phase) and/or inference (during testing phase). This situation arises in real-world problems when observed labels are sparse.\n In this paper, we propose a novel approach to within-network classification that combines aspects of statistical relational learning and semi-supervised learning to improve classification performance in sparse networks. Our approach works by adding \"ghost edges\" to a network, which enable the flow of information from labeled to unlabeled nodes. Through experiments on real-world data sets, we demonstrate that our approach performs well across a range of conditions where existing approaches, such as collective classification and semi-supervised learning, fail. On all tasks, our approach improves area under the ROC curve (AUC) by up to 15 points over existing approaches. Furthermore, we demonstrate that our approach runs in time proportional to L • E, where L is the number of labeled nodes and E is the number of edges.", "title": "" }, { "docid": "93a6c94a3ecb3fcaf363b07c077e5579", "text": "The state-of-the-art advancement in wind turbine condition monitoring and fault diagnosis for the recent several years is reviewed. Since the existing surveys on wind turbine condition monitoring cover the literatures up to 2006, this review aims to report the most recent advances in the past three years, with primary focus on gearbox and bearing, rotor and blades, generator and power electronics, as well as system-wise turbine diagnosis. There are several major trends observed through the survey. Due to the variable-speed nature of wind turbine operation and the unsteady load involved, time-frequency analysis tools such as wavelets have been accepted as a key signal processing tool for such application. Acoustic emission has lately gained much more attention in order to detect incipient failures because of the low-speed operation for wind turbines. There has been an increasing trend of developing model based reasoning algorithms for fault detection and isolation as cost-effective approach for wind turbines as relatively complicated system. The impact of unsteady aerodynamic load on the robustness of diagnostic signatures has been notified. Decoupling the wind load from condition monitoring decision making will reduce the associated down-time cost.", "title": "" }, { "docid": "f7a42937973a45ed4fb5d23e3be316a9", "text": "Domain specific information retrieval process has been a prominent and ongoing research in the field of natural language processing. Many researchers have incorporated different techniques to overcome the technical and domain specificity and provide a mature model for various domains of interest. The main bottleneck in these studies is the heavy coupling of domain experts, that makes the entire process to be time consuming and cumbersome. In this study, we have developed three novel models which are compared against a golden standard generated via the on line repositories provided, specifically for the legal domain. 
The three different models incorporated vector space representations of the legal domain, where document vector generation was done in two different mechanisms and as an ensemble of the above two. This study contains the research being carried out in the process of representing legal case documents into different vector spaces, whilst incorporating semantic word measures and natural language processing techniques. The ensemble model built in this study, shows a significantly higher accuracy level, which indeed proves the need for incorporation of domain specific semantic similarity measures into the information retrieval process. This study also shows, the impact of varying distribution of the word similarity measures, against varying document vector dimensions, which can lead to improvements in the process of legal information retrieval. keywords: Document Embedding, Deep Learning, Information Retrieval", "title": "" }, { "docid": "446c1bf541dbed56f8321b8024391b8c", "text": "Tokenisation has been adopted by the payment industry as a method to prevent Personal Account Number (PAN) compromise in EMV (Europay MasterCard Visa) transactions. The current architecture specified in EMV tokenisation requires online connectivity during transactions. However, it is not always possible to have online connectivity. We identify three main scenarios where fully offline transaction capability is considered to be beneficial for both merchants and consumers. Scenarios include making purchases in locations without online connectivity, when a reliable connection is not guaranteed, and when it is cheaper to carry out offline transactions due to higher communication/payment processing costs involved in online approvals. In this study, an offline contactless mobile payment protocol based on EMV tokenisation is proposed. The aim of the protocol is to address the challenge of providing secure offline transaction capability when there is no online connectivity on either the mobile or the terminal. The solution also provides end-to-end encryption to provide additional security for transaction data other than the token. The protocol is analysed against protocol objectives and we discuss how the protocol can be extended to prevent token relay attacks. The proposed solution is subjected to mechanical formal analysis using Scyther. Finally, we implement the protocol and obtain performance measurements.", "title": "" }, { "docid": "4bb2741e663c6cf85adf3bf77226ac92", "text": "Fresh water and arable land are essential for agricultural production and food processing. However, managing conflicting demands over water and land can be challenging for business leaders, environmentalists and other stakeholders. This paper characterizes these challenges as wicked problems. Wicked problems are ill-formed, fuzzy, and messy, because they involve many clients and decisions makers with conflicting values. They are also not solvable, but rather must be managed. How can agribusiness leaders effectively manage wicked problems, especially if they have little practice in doing so? This paper argues that a Community of Practice (CoP) and its tripartite elements of domain, community and practice can be effective in helping businesses manage wicked problems by focusing on the positive links between environmental stewardship and economic performance. 
Empirically, the paper examines three agribusinesses to assess the extent in which CoP is used as a strategy for sustainable water management.", "title": "" }, { "docid": "125c145b143579528279e76d23fa3054", "text": "Social unrest is endemic in many societies, and recent news has drawn attention to happenings in Latin America, the Middle East, and Eastern Europe. Civilian populations mobilize, sometimes spontaneously and sometimes in an organized manner, to raise awareness of key issues or to demand changes in governing or other organizational structures. It is of key interest to social scientists and policy makers to forecast civil unrest using indicators observed on media such as Twitter, news, and blogs. We present an event forecasting model using a notion of activity cascades in Twitter (proposed by Gonzalez-Bailon et al., 2011) to predict the occurrence of protests in three countries of Latin America: Brazil, Mexico, and Venezuela. The basic assumption is that the emergence of a suitably detected activity cascade is a precursor or a surrogate to a real protest event that will happen \"on the ground.\" Our model supports the theoretical characterization of large cascades using spectral properties and uses properties of detected cascades to forecast events. Experimental results on many datasets, including the recent June 2013 protests in Brazil, demonstrate the effectiveness of our approach.", "title": "" }, { "docid": "b7a3a7af3495d0a722040201f5fadd55", "text": "During the last decade, biodegradable metallic stents have been developed and investigated as alternatives for the currently-used permanent cardiovascular stents. Degradable metallic materials could potentially replace corrosion-resistant metals currently used for stent application as it has been shown that the role of stenting is temporary and limited to a period of 6-12 months after implantation during which arterial remodeling and healing occur. Although corrosion is generally considered as a failure in metallurgy, the corrodibility of certain metals can be an advantage for their application as degradable implants. The candidate materials for such application should have mechanical properties ideally close to those of 316L stainless steel which is the gold standard material for stent application in order to provide mechanical support to diseased arteries. Non-toxicity of the metal itself and its degradation products is another requirement as the material is absorbed by blood and cells. Based on the mentioned requirements, iron-based and magnesium-based alloys have been the investigated candidates for biodegradable stents. This article reviews the recent developments in the design and evaluation of metallic materials for biodegradable stents. It also introduces the new metallurgical processes which could be applied for the production of metallic biodegradable stents and their effect on the properties of the produced metals.", "title": "" }, { "docid": "f15f72e8b513b0a9b7ddb9b73a559571", "text": "Teenagers are among the most prolific users of social network sites (SNS). Emerging studies find that youth spend a considerable portion of their daily life interacting through social media. Subsequently, questions and controversies emerge about the effects SNS have on adolescent development. This review outlines the theoretical frameworks researchers have used to understand adolescents and SNS. 
It brings together work from disparate fields that examine the relationship between SNS and social capital, privacy, youth safety, psychological well-being, and educational achievement.These research strands speak to high-profile concerns and controversies that surround youth participation in these online communities, and offer ripe areas for future research.", "title": "" }, { "docid": "d4a51def80ebbb09cca88b98fbdcfdfb", "text": "A central tenet underlying the use of plant preparations is that herbs contain many bioactive compounds. Cannabis contains tetrahydrocannabinols (THC) a primary metabolite with reported psychotropic effects. Therefore, the presence of THC makes controversial the use of Cannabis to treat diseases by which their uses and applications were limited. The question then is: is it possible to use the extracts from Cannabis to treat the diseases related with it use in folk medicine? More recently, the synergistic contributions of bioactive constituents have been scientifically demonstrated. We reviewed the literature concerning medical cannabis and its secondary metabolites, including fraction and total extracts. Scientific evidence shows that secondary metabolites in cannabis may enhance the positive effects of THC a primary metabolite. Other chemical components (cannabinoid and non-cannabinoid) in cannabis or its extracts may reduce THC-induced anxiety, cholinergic deficits, and immunosuppression; which could increase its therapeutic potential. Particular attention will be placed on noncannabinoid compounds interactions that could produce synergy with respect to treatment of pain, inflammation, epilepsy, fungal and bacterial infections. The evidence accessible herein pointed out for the possible synergism that might occur involving the main phytocompounds with each other or with other minor components.", "title": "" }, { "docid": "81ef390009fb64bf235147bc0e186bab", "text": "In this paper, we show how to calibrate a camera and to recover the geometry and the photometry (textures) of objects from a single image. The aim of this work is to make it possible walkthrough and augment reality in a 3D model reconstructed from a single image. The calibration step does not need any calibration target and makes only four assumptions: (1) the single image contains at least two vanishing points, (2) the length (in 3D space) of one line segment (for determining the translation vector) in the image is known, (3) the principle point is the center of the image, and (4) the aspect ratio is fixed by the user. Each vanishing point is determined from a set of parallel lines. These vanishing points help determine a 3D world coordinate system R o. After having computed the focal length, the rotation matrix and the translation vector are evaluated in turn for describing the rigid motion between R o and the camera coordinate system R c. Next, the reconstruction step consists in placing, rotating, scaling, and translating a rectangular 3D box that must fit at best with the potential objects within the scene as seen through the single image. With each face of a rectangular box, a texture that may contain holes due to invisible parts of certain objects is assigned. We show how the textures are extracted and how these holes are located and filled. 
Our method has been applied to various real images (pictures scanned from books, photographs) and synthetic images.", "title": "" }, { "docid": "5fb640a9081f72fcf994b1691470d7bc", "text": "Omnidirectional cameras are widely used in such areas as robotics and virtual reality as they provide a wide field of view. Their images are often processed with classical methods, which might unfortunately lead to non-optimal solutions as these methods are designed for planar images that have different geometrical properties than omnidirectional ones. In this paper we study image classification task by taking into account the specific geometry of omnidirectional cameras with graph-based representations. In particular, we extend deep learning architectures to data on graphs; we propose a principled way of graph construction such that convolutional filters respond similarly for the same pattern on different positions of the image regardless of lens distortions. Our experiments show that the proposed method outperforms current techniques for the omnidirectional image classification problem.", "title": "" }, { "docid": "2803bbd080e761349cffd9ba5d5ec274", "text": "BACKGROUND\nSeveral triage systems have been developed for use in the emergency department (ED), however they are not designed to detect deterioration in patients. Deteriorating patients may be at risk of going undetected during their ED stay and are therefore vulnerable to develop serious adverse events (SAEs). The national early warning score (NEWS) has a good ability to discriminate ward patients at risk of SAEs. The utility of NEWS had not yet been studied in an ED.\n\n\nOBJECTIVE\nTo explore the performance of the NEWS in an ED with regard to predicting adverse outcomes.\n\n\nDESIGN\nA prospective observational study. Patients Eligible patients were those presenting to the ED during the 6 week study period with an Emergency Severity Index (ESI) of 2 and 3 not triaged to the resuscitation room.\n\n\nINTERVENTION\nNEWS was documented at three time points: on arrival (T0), hour after arrival (T1) and at transfer to the general ward/ICU (T2). The outcomes of interest were: hospital admission, ICU admission, length of stay and 30 day mortality.\n\n\nRESULTS\nA total of 300 patients were assessed for eligibility. Complete data was able to be collected for 274 patients on arrival at the ED. NEWS was significantly correlated with patient outcomes, including 30 day mortality, hospital admission, and length of stay at all-time points.\n\n\nCONCLUSION\nThe NEWS measured at different time points was a good predictor of patient outcomes and can be of additional value in the ED to longitudinally monitor patients throughout their stay in the ED and in the hospital.", "title": "" }, { "docid": "ea739d96ee0558fb23f0a5a020b92822", "text": "Text and structural data mining of web and social media (WSM) provides a novel disease surveillance resource and can identify online communities for targeted public health communications (PHC) to assure wide dissemination of pertinent information. WSM that mention influenza are harvested over a 24-week period, 5 October 2008 to 21 March 2009. Link analysis reveals communities for targeted PHC. Text mining is shown to identify trends in flu posts that correlate to real-world influenza-like illness patient report data. 
We also bring to bear a graph-based data mining technique to detect anomalies among flu blogs connected by publisher type, links, and user-tags.", "title": "" }, { "docid": "1ab4f605d67dabd3b2815a39b6123aa4", "text": "This paper examines and provides the theoretical evidence of the feasibility of 60 GHz mmWave in wireless body area networks (WBANs), by analyzing its properties. It has been shown that 60 GHz based communication could better fit WBANs compared to traditional 2.4 GHz based communication because of its compact network coverage, miniaturized devices, superior frequency reuse, multi-gigabyte transmission rate and the therapeutic merits for human health. Since allowing coexistence among the WBANs can enhance the efficiency of the mmWave based WBANs, we formulated the coexistence problem as a non-cooperative distributed power control game. This paper proves the existence of Nash equilibrium (NE) and derives the best response move as a solution. The efficiency of the NE is also improved by modifying the utility function and introducing a pair of pricing factors. Our simulation results indicate that the proposed pricing policy significantly improves the efficiency in terms of Pareto optimality and social optimality.", "title": "" }, { "docid": "68c7509ec0261b1ddccef7e3ad855629", "text": "This research comprehensively illustrates the design, implementation and evaluation of a novel marker less environment tracking technology for an augmented reality based indoor navigation application, adapted to efficiently operate on a proprietary head-mounted display. Although the display device used, Google Glass, had certain pitfalls such as short battery life, slow processing speed, and lower quality visual display but the tracking technology was able to complement these limitations by rendering a very efficient, precise, and intuitive navigation experience. The performance assessments, conducted on the basis of efficiency and accuracy, substantiated the utility of the device for everyday navigation scenarios, whereas a later conducted subjective evaluation of handheld and wearable devices also corroborated the wearable as the preferred device for indoor navigation.", "title": "" }, { "docid": "a26ca28fb8e67e8ce74cc8589a5116ca", "text": "Recently, there has been a growing interest in using online technologies to design protocols for secure electronic voting. The main challenges include vote privacy and anonymity, ballot irrevocability and transparency throughout the vote counting process. The introduction of the blockchain as a basis for cryptocurrency protocols, provides for the exploitation of the immutability and transparency properties of these distributed ledgers.\n In this paper, we discuss possible uses of the blockchain technology to implement a secure and fair voting system. In particular, we introduce a secret share-based voting system on the blockchain, the so-called SHARVOT protocol1. Our solution uses Shamir's Secret Sharing to enable on-chain, i.e. within the transactions script, votes submission and winning candidate determination. The protocol is also using a shuffling technique, Circle Shuffle, to de-link voters from their submissions.", "title": "" }, { "docid": "d40aa76e76c44da4c6237f654dcdab45", "text": "The flipped classroom pedagogy has achieved significant mention in academic circles in recent years. 
\"Flipping\" involves the reinvention of a traditional course so that students engage with learning materials via recorded lectures and interactive exercises prior to attending class and then use class time for more interactive activities. Proper implementation of a flipped classroom is difficult to gauge, but combines successful techniques for distance education with constructivist learning theory in the classroom. While flipped classrooms are not a novel concept, technological advances and increased comfort with distance learning have made the tools to produce and consume course materials more pervasive. Flipped classroom experiments have had both positive and less-positive results and are generally measured by a significant improvement in learning outcomes. This study, however, analyzes the opinions of students in a flipped sophomore-level information technology course by using a combination of surveys and reflective statements. The author demonstrates that at the outset students are new - and somewhat receptive - to the concept of the flipped classroom. By the conclusion of the course satisfaction with the pedagogy is significant. Finally, student feedback is provided in an effort to inform instructors in the development of their own flipped classrooms.", "title": "" }, { "docid": "6ed9425f8d5be786cce530b45f22cd00", "text": "This paper presents a novel unsupervised method to transfer the style of an example image to a source image. The complex notion of image style is here considered as a local texture transfer, eventually coupled with a global color transfer. For the local texture transfer, we propose a new method based on an adaptive patch partition that captures the style of the example image and preserves the structure of the source image. More precisely, this example-based partition predicts how well a source patch matches an example patch. Results on various images show that our method outperforms the most recent techniques.", "title": "" }, { "docid": "2831276f8c6141db0c1ef8f41e125efc", "text": "Research on event detection in Twitter is often obstructed by the lack of publicly-available evaluation mechanisms such as test collections; this problem is more severe when considering the scarcity of them in languages other than English. In this paper, we present EveTAR, the first publicly-available test collection for event detection in Arabic tweets. The collection includes a crawl of 590M Arabic tweets posted in a month period and covers 66 significant events (in 8 different categories) for which more than 134k relevance judgments were gathered using crowdsourcing with high average inter-annotator agreement (Kappa value of 0.6). We demonstrate the usability of the collection by evaluating 3 state-of-the-art event detection algorithms. The collection is also designed to support other retrieval tasks, as we show in our experiments with ad-hoc search systems.", "title": "" }, { "docid": "7a4bf293b22a405c4b3c41a914bc7f3f", "text": "Sutton, Szepesvári and Maei (2009) recently introduced the first temporal-difference learning algorithm compatible with both linear function approximation and off-policy training, and whose complexity scales only linearly in the size of the function approximator. Although their gradient temporal difference (GTD) algorithm converges reliably, it can be very slow compared to conventional linear TD (on on-policy problems where TD is convergent), calling into question its practical utility. 
In this paper we introduce two new related algorithms with better convergence rates. The first algorithm, GTD2, is derived and proved convergent just as GTD was, but uses a different objective function and converges significantly faster (but still not as fast as conventional TD). The second new algorithm, linear TD with gradient correction, or TDC, uses the same update rule as conventional TD except for an additional term which is initially zero. In our experiments on small test problems and in a Computer Go application with a million features, the learning rate of this algorithm was comparable to that of conventional TD. This algorithm appears to extend linear TD to off-policy learning with no penalty in performance while only doubling computational requirements.", "title": "" } ]
scidocsrr
66f601214b358723d92e1129fe62dea0
A left ventricular segmentation method on 3D echocardiography using deep learning and snake
[ { "docid": "edca46ea20740bd15b9f3d4259093ac6", "text": "The segmentation of the right ventricle (RV) myocardium on MRI is a prerequisite step for the evaluation of RV structure and function, which is of great importance in the diagnose of most cardiac diseases, such as pulmonary hypertension, congenital heart disease, coronary heart disease, and dysplasia. However, RV segmentation is considered challenging, mainly because of the complex crescent shape of the RV across slices and phases. Hence this study aims to propose a new approach to segment RV endocardium and epicardium based on deep learning. The proposed method contains two subtasks: (1) localizing the region of interest (ROI), the biventricular region which contains more meaningful features and can facilitate the RV segmentation, and (2) segmenting the RV myocardium based on the localization. The two subtasks are integrated into a joint task learning framework, in which each task is solved via two multilayer convolutional neural networks. The experiments results show that the proposed method has big potential to be further researched and applied in clinical diagnosis.", "title": "" }, { "docid": "30999096bc27a495fa15a4e5b4e9980c", "text": "We present a new statistical pattern recognition approach for the problem of left ventricle endocardium tracking in ultrasound data. The problem is formulated as a sequential importance resampling algorithm such that the expected segmentation of the current time step is estimated based on the appearance, shape, and motion models that take into account all previous and current images and previous segmentation contours produced by the method. The new appearance and shape models decouple the affine and nonrigid segmentations of the left ventricle to reduce the running time complexity. The proposed motion model combines the systole and diastole motion patterns and an observation distribution built by a deep neural network. The functionality of our approach is evaluated using a dataset of diseased cases containing 16 sequences and another dataset of normal cases comprised of four sequences, where both sets present long axis views of the left ventricle. Using a training set comprised of diseased and healthy cases, we show that our approach produces more accurate results than current state-of-the-art endocardium tracking methods in two test sequences from healthy subjects. Using three test sequences containing different types of cardiopathies, we show that our method correlates well with interuser statistics produced by four cardiologists.", "title": "" } ]
[ { "docid": "fcc088a7d2d3f7279fc4a7b740254341", "text": "Material irradiation experiment is dangerous and complex, thus it requires those with a vast advanced expertise to process the images and data manually. In this paper, we propose a generative adversarial model based on prior knowledge and attention mechanism to achieve the generation of irradiated material images (datato-image model), and a prediction model for corresponding industrial performance (image-to-data model). With the proposed models, researchers can skip the dangerous and complex irradiation experiments and obtain the irradiation images and industrial performance parameters directly by inputing some experimental parameters only. We also introduce a new dataset ISMD which contains 22000 irradiated images with 22,143 sets of corresponding parameters. Our model achieved high quality results by compared with several baseline models. The evaluation and detailed analysis are also performed.", "title": "" }, { "docid": "9d37fa004b92180faccf7d8e22452919", "text": "Modern AI and robotic systems are characterized by a high and ever-increasing level of autonomy. At the same time, their applications in fields such as autonomous driving, service robotics and digital personal assistants move closer to humans. From the combination of both developments emerges the field of AI ethics which recognizes that the actions of autonomous machines entail moral dimensions and tries to answer the question of how we can build moral machines. In this paper we argue for taking inspiration from Aristotelian virtue ethics by showing that it forms a suitable combination with modern AI due to its focus on learning from experience. We furthermore propose that imitation learning from moral exemplars, a central concept in virtue ethics, can solve the value alignment problem. Finally, we show that an intelligent system endowed with the virtues of temperance and friendship to humans would not pose a control problem as it would not have the desire for limitless", "title": "" }, { "docid": "fbdbc870a78d9ee19446f3bb57731688", "text": "Because of the intangible and highly uncertain nature of innovation, investors may have difficulty processing information associated with a firm’s innovation and innovation search strategy. Due to cognitive and strategic biases, investors are likely to pay more attention to novel and explorative patents rather than incremental and exploitative patents. We find that firms focusing on exploitation rather than exploration tend to generate superior subsequent operating performance. Analysts do not seem to detect this, as firms currently focused on exploitation tend to outperform the market’s near-term earnings expectations. The market also seems unable to accurately incorporate innovation strategy information. We find that firms with exploitation strategies are undervalued relative to firms with exploration strategies and that this return differential is incremental to standard risk and innovation-based pricing factors examined in the prior literature. 
This result suggests a more nuanced view on whether stock market pressure hampers innovation.", "title": "" }, { "docid": "8fd049da24568dea2227483415532f9b", "text": "The notion of “semiotic scaffolding”, introduced into the semiotic discussions by Jesper Hoffmeyer in December of 2000, is proving to be one of the single most important concepts for the development of semiotics as we seek to understand the full extent of semiosis and the dependence of evolution, particularly in the living world, thereon. I say “particularly in the living world”, because there has been from the first a stubborn resistance among semioticians to seeing how a semiosis prior to and/or independent of living beings is possible. Yet the universe began in a state not only lifeless but incapable of supporting life, and somehow “moved” from there in the direction of being able to sustain life and finally of actually doing so. Wherever dyadic interactions result indirectly in a new condition that either moves the universe closer to being able to sustain life, or moves life itself in the direction not merely of sustaining itself but opening the way to new forms of life, we encounter a “thirdness” in nature of exactly the sort that semiosic triadicity alone can explain. This is the process, both within and without the living world, that requires scaffolding. This essay argues that a fuller understanding of this concept shows why “semiosis” says clearly what “evolution” says obscurely.", "title": "" }, { "docid": "a44c1d66db443d44850044b3b20a9cae", "text": "In this paper, a dual-polarized microstrip array antenna with orthogonal feed circuit is proposed. The proposed microstrip array antenna consists of a single substrate layer. The proposed array antenna has microstrip antenna elements, microstrip lines, air-bridges and cross slot lines. For dual polarization, an orthogonal feed circuit uses the Both-Sided MIC Technology including air-bridges. The Both-Sided MIC Technology is one of the useful MIC technologies for realizing a simple feed circuit. The air-bridges are often used for MMICs because it is possible to reduce the circuit complexity. The characteristics of proposed array antenna are investigated by both the simulation and the experiment. Consequently, it is confirmed that the proposed array antenna with the orthogonal feed circuit has dual polarization performance with very simple structure. The proposed array antenna will be a basic technology to realize high performance and attractive multifunction antennas.", "title": "" }, { "docid": "43bd1291999003acef4a6ac726219a91", "text": "Personalized services have greater impact on user experience to effect the level of user satisfaction. Many approaches provide personalized services in the form of an adaptive user interface. The focus of these approaches is limited to specific domains rather than a generalized approach applicable to every domain. In this paper, we proposed a domain and device-independent model-based adaptive user interfacing methodology. Unlike state-of-the-art approaches, the proposed methodology is dependent on the evaluation of user context and user experience (UX). The proposed methodology is implemented as an adaptive UI/UX authoring (A-UI/UX-A) tool; a system capable of adapting user interface based on the utilization of contextual factors, such as user disabilities, environmental factors (e.g. light level, noise level, and location) and device use, at runtime using the adaptation rules devised for rendering the adapted interface. 
To validate effectiveness of the proposed A-UI/UX-A tool and methodology, user-centric and statistical evaluation methods are used. The results show that the proposed methodology outperforms the existing approaches in adapting user interfaces by utilizing the users context and experience.", "title": "" }, { "docid": "d95fb46b3857b55602af2cf271300f5a", "text": "This paper proposes a new active interphase transformer for 24-pulse diode rectifier. The proposed scheme injects a compensation current into the secondary winding of either of the two first-stage interphase transformers. For only one of the first-stage interphase transformers being active, the inverter conducted the injecting current is with a lower kVA rating [1.26% pu (Po)] compared to conventional active interphase transformers. Moreover, the proposal scheme draws near sinusoidal input currents and the simulated and the experimental total harmonic distortion of overall line currents are only 1.88% and 2.27% respectively. When the inverter malfunctions, the input line current still can keep in the conventional 24-pulse situation. A digital-signal-processor (DSP) based digital controller is employed to calculate the desired compensation current and deals with the trigger signals needed for the inverter. Moreover, a 6kW prototype is built for test. Both simulation and experimental results demonstrate the validity of the proposed scheme.", "title": "" }, { "docid": "b0709248d08564b7d1a1f23243aa0946", "text": "TrustZone-based Real-time Kernel Protection (TZ-RKP) is a novel system that provides real-time protection of the OS kernel using the ARM TrustZone secure world. TZ-RKP is more secure than current approaches that use hypervisors to host kernel protection tools. Although hypervisors provide privilege and isolation, they face fundamental security challenges due to their growing complexity and code size. TZ-RKP puts its security monitor, which represents its entire Trusted Computing Base (TCB), in the TrustZone secure world; a safe isolated environment that is dedicated to security services. Hence, the security monitor is safe from attacks that can potentially compromise the kernel, which runs in the normal world. Using the secure world for kernel protection has been crippled by the lack of control over targets that run in the normal world. TZ-RKP solves this prominent challenge using novel techniques that deprive the normal world from the ability to control certain privileged system functions. These functions are forced to route through the secure world for inspection and approval before being executed. TZ-RKP's control of the normal world is non-bypassable. It can effectively stop attacks that aim at modifying or injecting kernel binaries. It can also stop attacks that involve modifying the system memory layout, e.g, through memory double mapping. This paper presents the implementation and evaluation of TZ-RKP, which has gone through rigorous and thorough evaluation of effectiveness and performance. It is currently deployed on the latest models of the Samsung Galaxy series smart phones and tablets, which clearly demonstrates that it is a practical real-world system.", "title": "" }, { "docid": "0fd7a70c0d46100d32e0bcb0f65528e3", "text": "INTRODUCTION Document clustering is an automatic grouping of text documents into clusters so that documents within a cluster have high similarity in comparison to one another, but are dissimilar to documents in other clusters. 
Unlike document classification (Wang, Zhou, and He, 2001), no labeled documents are provided in clustering; hence, clustering is also known as unsupervised learning. Hierarchical document clustering organizes clusters into a tree or a hierarchy that facilitates browsing. The parent-child relationship among the nodes in the tree can be viewed as a topic-subtopic relationship in a subject hierarchy such as the Yahoo! directory. This chapter discusses several special challenges in hierarchical document clustering: high dimensionality, high volume of data, ease of browsing, and meaningful cluster labels. State-of-the-art document clustering algorithms are reviewed: the partitioning method (Steinbach, Karypis, and Kumar, 2000), agglomerative and divisive hierarchical clustering (Kaufman and Rousseeuw, 1990), and frequent itemset-based hierarchical clustering (Fung, Wang, and Ester, 2003). The last one, which was recently developed by the authors, is further elaborated since it has been specially designed to address the hierarchical document clustering problem.", "title": "" }, { "docid": "5e7b935a73180c9ccad3bc0e82311503", "text": "What happens if one pushes a cup sitting on a table toward the edge of the table? How about pushing a desk against a wall? In this paper, we study the problem of understanding the movements of objects as a result of applying external forces to them. For a given force vector applied to a specific location in an image, our goal is to predict long-term sequential movements caused by that force. Doing so entails reasoning about scene geometry, objects, their attributes, and the physical rules that govern the movements of objects. We design a deep neural network model that learns long-term sequential dependencies of object movements while taking into account the geometry and appearance of the scene by combining Convolutional and Recurrent Neural Networks. Training our model requires a large-scale dataset of object movements caused by external forces. To build a dataset of forces in scenes, we reconstructed all images in SUN RGB-D dataset in a physics simulator to estimate the physical movements of objects caused by external forces applied to them. Our Forces in Scenes (ForScene) dataset contains 10,335 images in which a variety of external forces are applied to different types of objects resulting in more than 65,000 object movements represented in 3D. Our experimental evaluations show that the challenging task of predicting long-term movements of objects as their reaction to external forces is possible from a single image.", "title": "" }, { "docid": "609b1df5196de8809b6293a481868c93", "text": "In this paper, a new localization system utilizing afocal optical flow sensor (AOFS) based sensor fusion for indoor service robots in low luminance and slippery environment is proposed, where conventional localization systems do not perform well. To accurately estimate the moving distance of a robot in a slippery environment, the robot was equipped with an AOFS along with two conventional wheel encoders. To estimate the orientation of the robot, we adopted a forward-viewing mono-camera and a gyroscope. In a very low luminance environment, it is hard to conduct conventional feature extraction and matching for localization. Instead, the interior space structure from an image and robot orientation was assessed. To enhance the appearance of image boundary, rolling guidance filter was applied after the histogram equalization.
The proposed system was developed to be operable on a low-cost processor and implemented on a consumer robot. Experiments were conducted in low illumination condition of 0.1 lx and carpeted environment. The robot moved 20 times in a 1.5 × 2.0 m square trajectory. When only wheel encoders and a gyroscope were used for robot localization, the maximum position error was 10.3 m and the maximum orientation error was 15.4°. Using the proposed system, the maximum position error and orientation error were found as 0.8 m and within 1.0°, respectively.", "title": "" }, { "docid": "414f3647551a4cadeb05143d30230dec", "text": "Future cellular networks are faced with the challenge of coping with significant traffic growth without increasing operating costs. Network virtualization and Software Defined Networking (SDN) are emerging solutions for fine-grained control and management of networks. In this article, we present a new dynamic tunnel switching technique for SDN-based cellular core networks. The technique introduces a virtualized Evolved Packet Core (EPC) gateway with the capability to select and dynamically switch the user plane processing element for each user. Dynamic GPRS Tunneling Protocol (GTP) termination enables switching the mobility anchor of an active session between a cloud environment, where general purpose hardware is in use, and a fast path implemented with dedicated hardware. We describe a prototype implementation of the technique based on an OpenStack cloud, an OpenFlow controller with GTP tunnel switching, and a dedicated fast path element.", "title": "" }, { "docid": "35b286999957396e1f5cab6e2370ed88", "text": "Text summarization condenses a text to a shorter version while retaining the important information. Abstractive summarization is a recent development that generates new phrases, rather than simply copying or rephrasing sentences within the original text. Recently neural sequence-to-sequence models have achieved good results in the field of abstractive summarization, which opens new possibilities and applications for industrial purposes. However, most practitioners observe that these models still use large parts of the original text in the output summaries, making them often similar to extractive frameworks. To address this drawback, we first introduce a new metric to measure how much of a summary is extracted from the input text. Secondly, we present a novel method, that relies on a diversity factor in computing the neural network loss, to improve the diversity of the summaries generated by any neural abstractive model implementing beam search. Finally, we show that this method not only makes the system less extractive, but also improves the overall ROUGE score of state-of-the-art methods by at least 2 points.", "title": "" }, { "docid": "f48639ad675b863a28bb1bc773664ab0", "text": "The definition and phenomenological features of 'burnout' and its eventual relationship with depression and other clinical conditions are reviewed. Work is an indispensable way to make a decent and meaningful way of living, but can also be a source of stress for a variety of reasons. Feelings of inadequate control over one's work, frustrated hopes and expectations and the feeling of losing life's meaning, seem to be independent causes of burnout, a term that describes a condition of professional exhaustion. It is not synonymous with 'job stress', 'fatigue', 'alienation' or 'depression'.
Burnout is more common than generally believed and may affect every aspect of the individual's functioning, have a deleterious effect on interpersonal and family relationships and lead to a negative attitude towards life in general. Empirical research suggests that burnout and depression are separate entities, although they may share several 'qualitative' characteristics, especially in the more severe forms of burnout, and in vulnerable individuals, low levels of satisfaction derived from their everyday work. These final issues need further clarification and should be the focus of future clinical research.", "title": "" }, { "docid": "037ea3bdc1adf619a3e2cccf6fb113c5", "text": "This chapter focuses on the expression of ideologies in various structures of text and talk. It is situated within the broader framework of a research project on discourse and ideology which has been conducted at the University of Amsterdam since 1993. The theoretical premise of this study is that ideologies are typically, though not exclusively, expressed and reproduced in discourse and communication, including non-verbal semiotic messages, such as pictures, photographs and movies. Obviously, ideologies are also enacted in other forms of action and interaction, and their reproduction is often embedded in organizational and institutional contexts. Thus, racist ideologies may be expressed and reproduced in racist talk, comics or movies in the context of the mass media, but they may also be enacted in many forms of discrimination and institutionalized by racist parties within the context of the mass media or of Western parliamentary democracies. However, among the many forms of reproduction and interaction, discourse plays a prominent role as the preferential site for the explicit, verbal formulation and the persuasive communication of ideological propositions.", "title": "" }, { "docid": "7853936d58687b143bc135e6e60092ce", "text": "Multilabel learning has become a relevant learning paradigm in the past years due to the increasing number of fields where it can be applied and also to the emerging number of techniques that are being developed. This article presents an up-to-date tutorial about multilabel learning that introduces the paradigm and describes the main contributions developed. Evaluation measures, fields of application, trending topics, and resources are also presented.", "title": "" }, { "docid": "5750ba2f313e044925487401d3772ca5", "text": "PCB technology provides an alternative to conventional Wireless Power Transfer (WPT) coils made of wound copper wire. The most common wires for conventional coils are either a single wire or a multistranded cable conforming a litz wire structure. On the other hand, the implementation on printed board is suitable for medium-low power applications, presenting advantages as low cost, high repetibility and compact design. Moreover, for reduced losses, the litz wire structure can be adapted to a PCB implementation. In this paper, a mathematical description of the PCB litz structure is presented together with a method to automatically generate PCB layouts of custom coils. Following this method, several prototypes were built and tested. 
The experimental characterization of the samples shows that coils with PCB litz structure present reduced losses.", "title": "" }, { "docid": "9eabe9a867edbceee72bd20d483ad886", "text": "Inspired by recent advances of deep learning in instance segmentation and object tracking, we introduce the concept of convnet-based guidance applied to video object segmentation. Our model proceeds on a per-frame basis, guided by the output of the previous frame towards the object of interest in the next frame. We demonstrate that highly accurate object segmentation in videos can be enabled by using a convolutional neural network (convnet) trained with static images only. The key component of our approach is a combination of offline and online learning strategies, where the former produces a refined mask from the previous frame estimate and the latter allows to capture the appearance of the specific object instance. Our method can handle different types of input annotations such as bounding boxes and segments while leveraging an arbitrary amount of annotated frames. Therefore our system is suitable for diverse applications with different requirements in terms of accuracy and efficiency. In our extensive evaluation, we obtain competitive results on three different datasets, independently from the type of input annotation.", "title": "" }, { "docid": "ff8c0e46b0643564a9334bd15b16caeb", "text": "One of the challenges in large-scale information retrieval (IR) is to develop fine-grained and domainspecific methods to answer natural language questions. Despite the availability of numerous sources and datasets for answer retrieval, Question Answering (QA) remains a challenging problem due to the difficulty of the question understanding and answer extraction tasks. One of the promising tracks investigated in QA is to map new questions to formerly answered questions that are “similar”. In this paper, we propose a novel QA approach based on Recognizing Question Entailment (RQE) and we describe the QA system and resources that we built and evaluated on real medical questions. First, we compare machine learning and deep learning methods for RQE using different kinds of datasets, including textual inference, question similarity and entailment in both the open and clinical domains. Second, we combine IR models with the best RQE method to select entailed questions and rank the retrieved answers. To study the end-to-end QA approach, we built the MedQuAD collection of 47,457 question-answer pairs from trusted medical sources, that we introduce and share in the scope of this paper. Following the evaluation process used in TREC 2017 LiveQA, we find that our approach exceeds the best results of the medical task with a 29.8% increase over the best official score. The evaluation results also support the relevance of question entailment for QA and highlight the effectiveness of combining IR and RQE for future QA efforts. Our findings also show that relying on a restricted set of reliable answer sources can bring a substantial improvement in medical QA.", "title": "" }, { "docid": "52fca011caec44823513dbfe24389c15", "text": "Learning novel relations from relational databases is an important problem with many applications. Relational learning algorithms learn the definition of a new relation in terms of existing relations in the database. Nevertheless, the same database may be represented under different schemas for various reasons, such as data quality, efficiency and usability. 
The output of current relational learning algorithms tends to vary quite substantially over the choice of schema. This variation complicates their off-the-shelf application. We introduce and formalize the property of schema independence of relational learning algorithms, and study both the theoretical and empirical dependence of existing algorithms on the common class of (de) composition schema transformations. We show that current algorithms are not schema independent. We propose Castor, a relational learning algorithm that achieves schema independence by leveraging data dependencies.", "title": "" } ]
scidocsrr
39d0cf3b8a14d45ab3abdf72f558ee55
Social Network De-anonymization with Overlapping Communities: Analysis, Algorithm and Experiments
[ { "docid": "0bf5a87d971ff2dca4c8dfa176316663", "text": "A crucial privacy-driven issue nowadays is re-identifying anonymized social networks by mapping them to correlated cross-domain auxiliary networks. Prior works are typically based on modeling social networks as random graphs representing users and their relations, and subsequently quantify the quality of mappings through cost functions that are proposed without sufficient rationale. Also, it remains unknown how to algorithmically meet the demand of such quantifications, i.e., to find the minimizer of the cost functions. We address those concerns in a more realistic social network modeling parameterized by community structures that can be leveraged as side information for de-anonymization. By Maximum A Posteriori (MAP) estimation, our first contribution is new and well justified cost functions, which, when minimized, enjoy superiority to previous ones in finding the correct mapping with the highest probability. The feasibility of the cost functions is then for the first time algorithmically characterized. While proving the general multiplicative inapproximability, we are able to propose two algorithms, which, respectively, enjoy an -additive approximation and a conditional optimality in carrying out successful user re-identification. Our theoretical findings are empirically validated, with a notable dataset extracted from rare true cross-domain networks that reproduce genuine social network de-anonymization. Both theoretical and empirical observations also manifest the importance of community information in enhancing privacy inferencing.", "title": "" } ]
[ { "docid": "afe26c28b56a511452096bfc211aed97", "text": "System testing is concerned with testing an entire system based on its specifications. In the context of object-oriented, UML development, this means that system test requirements are derived from UML analysis artifacts such as use cases, their corresponding sequence and collaboration diagrams, class diagrams, and possibly Object Constraint Language (OCL) expressions across all these artifacts. Our goal here is to support the derivation of functional system test requirements, which will be transformed into test cases, test oracles, and test drivers once we have detailed design information. In this paper, we describe a methodology in a practical way and illustrate it with an example. In this context, we address testability and automation issues, as the ultimate goal is to fully support system testing activities with high-capability tools.", "title": "" }, { "docid": "59c83aa2f97662c168316f1a4525fd4d", "text": "Light field microscopy is a new technique for high-speed volumetric imaging of weakly scattering or fluorescent specimens. It employs an array of microlenses to trade off spatial resolution against angular resolution, thereby allowing a 4-D light field to be captured using a single photographic exposure without the need for scanning. The recorded light field can then be used to computationally reconstruct a full volume. In this paper, we present an optical model for light field microscopy based on wave optics, instead of previously reported ray optics models. We also present a 3-D deconvolution method for light field microscopy that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported. To accomplish this, we take advantage of the dense spatio-angular sampling provided by a microlens array at axial positions away from the native object plane. This dense sampling permits us to decode aliasing present in the light field to reconstruct high-frequency information. We formulate our method as an inverse problem for reconstructing the 3-D volume, which we solve using a GPU-accelerated iterative algorithm. Theoretical limits on the depth-dependent lateral resolution of the reconstructed volumes are derived. We show that these limits are in good agreement with experimental results on a standard USAF 1951 resolution target. Finally, we present 3-D reconstructions of pollen grains that demonstrate the improvements in fidelity made possible by our method.", "title": "" }, { "docid": "f577f970f841d8dee34e524ba661e727", "text": "The rapid growth in the amount of user-generated content (UGCs) online necessitates for social media companies to automatically extract knowledge structures (concepts) from user-generated images (UGIs) and user-generated videos (UGVs) to provide diverse multimedia-related services. For instance, recommending preference-aware multimedia content, the understanding of semantics and sentics from UGCs, and automatically computing tag relevance for UGIs are benefited from knowledge structures extracted from multiple modalities. Since contextual information captured by modern devices in conjunction with a media item greatly helps in its understanding, we leverage both multimedia content and contextual information (eg., spatial and temporal metadata) to address above-mentioned social media problems in our doctoral research. 
We present our approaches, results, and works in progress on these problems.", "title": "" }, { "docid": "2390d3d6c51c4a6857c517eb2c2cb3c0", "text": "It is common for organizations to maintain multiple variants of a given business process, such as multiple sales processes for different products or multiple bookkeeping processes for different countries. Conventional business process modeling languages do not explicitly support the representation of such families of process variants. This gap triggered significant research efforts over the past decade, leading to an array of approaches to business process variability modeling. In general, each of these approaches extends a conventional process modeling language with constructs to capture customizable process models. A customizable process model represents a family of process variants in a way that a model of each variant can be derived by adding or deleting fragments according to customization options or according to a domain model. This survey draws up a systematic inventory of approaches to customizable process modeling and provides a comparative evaluation with the aim of identifying common and differentiating modeling features, providing criteria for selecting among multiple approaches, and identifying gaps in the state of the art. The survey puts into evidence an abundance of customizable process-modeling languages, which contrasts with a relative scarcity of available tool support and empirical comparative evaluations.", "title": "" }, { "docid": "75a15ef2ce8dd6b4c58a36b9fd352d18", "text": "Business growth and technology advancements have resulted in growing amounts of enterprise data. To gain valuable business insight and competitive advantage, businesses demand the capability of performing real-time analytics on such data. This, however, involves expensive query operations that are very time consuming on traditional CPUs. Additionally, in traditional database management systems (DBMS), the CPU resources are dedicated to mission-critical transactional workloads. Offloading expensive analytics query operations to a co-processor can allow efficient execution of analytics workloads in parallel with transactional workloads.\n In this paper, we present a Field Programmable Gate Array (FPGA) based acceleration engine for database operations in analytics queries. The proposed solution provides a mechanism for a DBMS to seamlessly harness the FPGA compute power without requiring any changes in the application or the existing data layout. Using a software-programmed query control block, the accelerator can be tailored to execute different queries without reconfiguration. Our prototype is implemented in a PCIe-attached FPGA system and is integrated into a commercial DBMS platform. The results demonstrate up to 94% CPU savings on real customer data compared to the baseline software cost with up to an order of magnitude speedup in the offloaded computations and up to 6.2x improvement in end-to-end performance.", "title": "" }, { "docid": "a2a0ff72b88d766ab5eb087c14d88b03", "text": "Next-generation non-volatile memory (NVM) technologies, such as phase-change memory and memristors, can enable computer systems infrastructure to continue keeping up with the voracious appetite of data-centric applications for large, cheap, and fast storage. Persistent memory has emerged as a promising approach to accessing emerging byte-addressable non-volatile memory through processor load/store instructions. 
Due to lack of commercially available NVM, system software researchers have mainly relied on emulation to model persistent memory performance. However, existing emulation approaches are either too simplistic, or too slow to emulate large-scale workloads, or require special hardware. To fill this gap and encourage wider adoption of persistent memory, we developed a performance emulator for persistent memory, called Quartz. Quartz enables an efficient emulation of a wide range of NVM latencies and bandwidth characteristics for performance evaluation of emerging byte-addressable NVMs and their impact on applications performance (without modifying or instrumenting their source code) by leveraging features available in commodity hardware. Our emulator is implemented on three latest Intel Xeon-based processor architectures: Sandy Bridge, Ivy Bridge, and Haswell. To assist researchers and engineers in evaluating design decisions with emerging NVMs, we extend Quartz for emulating the application execution on future systems with two types of memory: fast, regular volatile DRAM and slower persistent memory. We evaluate the effectiveness of our approach by using a set of specially designed memory-intensive benchmarks and real applications. The accuracy of the proposed approach is validated by running these programs both on our emulation platform and a multisocket (NUMA) machine that can support a range of memory latencies. We show that Quartz can emulate a range of performance characteristics with low overhead and good accuracy (with emulation errors 0.2% - 9%).", "title": "" }, { "docid": "c2816721fa6ccb0d676f7fdce3b880d4", "text": "Due to the achievements in the Internet of Things (IoT) field, Smart Objects are often involved in business processes. However, the integration of IoT with Business Process Management (BPM) is far from mature: problems related to process compliance and Smart Objects configuration with respect to the process requirements have not been fully addressed yet; also, the interaction of Smart Objects with multiple business processes that belong to different stakeholders is still under investigation. My PhD thesis aims to fill this gap by extending the BPM lifecycle, with particular focus on the design and analysis phase, in order to explicitly support IoT and its requirements.", "title": "" }, { "docid": "313761d2cdb224253f87fe4b33977b85", "text": "In this paper we described an authorship attribution system for Bengali blog texts. We have presented a new Bengali blog corpus of 3000 passages written by three authors. Our study proposes a text classification system, based on lexical features such as character bigrams and trigrams, word n-grams (n = 1, 2, 3) and stop words, using four classifiers. We achieve best results (more than 99%) on the held-out dataset using Multi layered Perceptrons (MLP) amongst the four classifiers, which indicates MLP can produce very good results for big data sets and lexical n-gram based features can be the best features for any authorship attribution system.", "title": "" }, { "docid": "636cb349f6a8dcdde70ee39b663dbdbe", "text": "Estimation and modelling problems as they arise in many data analysis areas often turn out to be unstable and/or intractable by standard numerical methods. Such problems frequently occur in fitting of large data sets to a certain model and in predictive learning. 
Heuristics are general recommendations based on practical statistical evidence, in contrast to a fixed set of rules that cannot vary, although guaranteed to give the correct answer. Although the use of these methods became more standard in several fields of science, their use for estimation and modelling in statistics appears to be still limited. This paper surveys a set of problem-solving strategies, guided by heuristic information, that are expected to be used more frequently. The use of recent advances in different fields of large-scale data analysis is promoted focusing on applications in medicine, biology and technology.", "title": "" }, { "docid": "187c696aeb78607327fd817dfa9446ba", "text": "OBJECTIVE\nThe integration of SNOMED CT into the Unified Medical Language System (UMLS) involved the alignment of two views of synonymy that were different because the two vocabulary systems have different intended purposes and editing principles. The UMLS is organized according to one view of synonymy, but its structure also represents all the individual views of synonymy present in its source vocabularies. Despite progress in knowledge-based automation of development and maintenance of vocabularies, manual curation is still the main method of determining synonymy. The aim of this study was to investigate the quality of human judgment of synonymy.\n\n\nDESIGN\nSixty pairs of potentially controversial SNOMED CT synonyms were reviewed by 11 domain vocabulary experts (six UMLS editors and five noneditors), and scores were assigned according to the degree of synonymy.\n\n\nMEASUREMENTS\nThe synonymy scores of each subject were compared to the gold standard (the overall mean synonymy score of all subjects) to assess accuracy. Agreement between UMLS editors and noneditors was measured by comparing the mean synonymy scores of editors to noneditors.\n\n\nRESULTS\nAverage accuracy was 71% for UMLS editors and 75% for noneditors (difference not statistically significant). Mean scores of editors and noneditors showed significant positive correlation (Spearman's rank correlation coefficient 0.654, two-tailed p < 0.01) with a concurrence rate of 75% and an interrater agreement kappa of 0.43.\n\n\nCONCLUSION\nThe accuracy in the judgment of synonymy was comparable for UMLS editors and nonediting domain experts. There was reasonable agreement between the two groups.", "title": "" }, { "docid": "a23949a678e49a7e1495d98aae3adef2", "text": "The continued increase in the usage of Small Scale Digital Devices (SSDDs) to browse the web has made mobile devices a rich potential for digital evidence. Issues may arise when suspects attempt to hide their browsing habits using applications like Orweb - which intends to anonymize network traffic as well as ensure that no browsing history is saved on the device. In this work, the researchers conducted experiments to examine if digital evidence could be reconstructed when the Orweb browser is used as a tool to hide web browsing activities on an Android smartphone. Examinations were performed on both a non-rooted and a rooted Samsung Galaxy S2 smartphone running Android 2.3.3. The results show that without rooting the device, no private web browsing traces through Orweb were found.
However, after rooting the device, the researchers were able to locate Orweb browser history, and important corroborative digital evidence was found.", "title": "" }, { "docid": "176dfaa0457b06aee41014ad0f895c13", "text": "The generalized feedback shift register pseudorandom number algorithm has several advantages over all other pseudorandom number generators. These advantages are: (1) it produces multidimensional pseudorandom numbers; (2) it has an arbitrarily long period independent of the word size of the computer on which it is implemented; (3) it is faster than other pseudorandom number generators; (4) the “same” floating-point pseudorandom number sequence is obtained on any machine, that is, the high order mantissa bits of each pseudorandom number agree on all machines— examples are given for IBM 360, Sperry-Rand-Univac 1108, Control Data 6000, and Hewlett-Packard 2100 series computers; (5) it can be coded in compiler languages (it is portable); (6) the algorithm is easily implemented in microcode and has been programmed for an Interdata computer.", "title": "" }, { "docid": "99bd8339f260784fff3d0a94eb04f6f4", "text": "Reinforcement learning algorithms discover policies that maximize reward, but do not necessarily guarantee safety during learning or execution phases. We introduce a new approach to learn optimal policies while enforcing properties expressed in temporal logic. To this end, given the temporal logic specification that is to be obeyed by the learning system, we propose to synthesize a reactive system called a shield. The shield monitors the actions from the learner and corrects them only if the chosen action causes a violation of the specification. We discuss which requirements a shield must meet to preserve the convergence guarantees of the learner. Finally, we demonstrate the versatility of our approach on several challenging reinforcement learning scenarios.", "title": "" }, { "docid": "2ef92113a901df268261be56f5110cfa", "text": "This paper studies the problem of finding a priori shortest paths to guarantee a given likelihood of arriving on-time in a stochastic network. Such ‘‘reliable” paths help travelers better plan their trips to prepare for the risk of running late in the face of stochastic travel times. Optimal solutions to the problem can be obtained from local-reliable paths, which are a set of non-dominated paths under first-order stochastic dominance. We show that Bellman’s principle of optimality can be applied to construct local-reliable paths. Acyclicity of local-reliable paths is established and used for proving finite convergence of solution procedures. The connection between the a priori path problem and the corresponding adaptive routing problem is also revealed. A label-correcting algorithm is proposed and its complexity is analyzed. A pseudo-polynomial approximation is proposed based on extreme-dominance. An extension that allows travel time distribution functions to vary over time is also discussed. We show that the time-dependent problem is decomposable with respect to arrival times and therefore can be solved as easily as its static counterpart. Numerical results are provided using typical transportation networks. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0b4c57b93a0da45e6561a0a13a4e4005", "text": "Scientific article recommendation problem deals with recommending similar scientific articles given a query article. It can be categorized as a content based similarity system. 
Recent advancements in representation learning methods have proven to be effective in modeling distributed representations in different modalities like images, languages, speech, networks etc. The distributed representations obtained using such techniques in turn can be used to calculate similarities. In this paper, we address the problem of scientific paper recommendation through a novel method which aims to combine multimodal distributed representations, which in this case are: 1. distributed representations of paper’s content, and 2. distributed representation of the graph constructed from the bibliographic network. Through experiments we demonstrate that our method outperforms the state-of-the-art distributed representation methods in text and graph, by 29.6% and 20.4%, both in terms of precision and mean-average-precision respectively.", "title": "" }, { "docid": "461062a51b0c33fcbb0f47529f3a6fba", "text": "Release of ATP from astrocytes is required for Ca2+ wave propagation among astrocytes and for feedback modulation of synaptic functions. However, the mechanism of ATP release and the source of ATP in astrocytes are still not known. Here we show that incubation of astrocytes with FM dyes leads to selective labelling of lysosomes. Time-lapse confocal imaging of FM dye-labelled fluorescent puncta, together with extracellular quenching and total-internal-reflection fluorescence microscopy (TIRFM), demonstrated directly that extracellular ATP or glutamate induced partial exocytosis of lysosomes, whereas an ischaemic insult with potassium cyanide induced both partial and full exocytosis of these organelles. We found that lysosomes contain abundant ATP, which could be released in a stimulus-dependent manner. Selective lysis of lysosomes abolished both ATP release and Ca2+ wave propagation among astrocytes, implicating physiological and pathological functions of regulated lysosome exocytosis in these cells.", "title": "" }, { "docid": "8721382dd1674fac3194d015b9c64f94", "text": "fines excipients as “substances, other than the active drug substance of finished dosage form, which have been appropriately evaluated for safety and are included in a drug delivery system to either aid the processing of the drug delivery system during its manufacture; protect; support; enhance stability, bioavailability, or patient acceptability; assist in product identification; or enhance any other attributes of the overall safety and effectiveness of the drug delivery system during storage or use” (1). This definition implies that excipients serve a purpose in a formulation and contrasts with the old terminology, inactive excipients, which hints at the property of inertness. With a literal interpretation of this definition, an excipient can include diverse molecules or moieties such as replication incompetent viruses (adenoviral or retroviral vectors), bacterial protein components, monoclonal antibodies, bacteriophages, fusion proteins, and molecular chimera. For example, using gene-directed enzyme prodrug therapy, research indicated that chimera containing a transcriptional regulatory DNA sequence capable of being selectively activated in mammalian cells was linked to a sequence that encodes a -lactamase enzyme and delivered to target cells (2). The expressed enzyme in the targeted cells catalyzes the conversion of a subsequently administered prodrug to a toxic agent. 
A similar purpose is achieved by using an antibody conjugated to an enzyme followed by the administration of a noncytotoxic substance that is converted in vivo by the enzyme to its toxic form (3). In these examples, the chimera or the enzyme-linked antibody would qualify as excipients. Furthermore, many emerging delivery systems use a drug or gene covalently linked to the molecules, polymers, antibody, or chimera responsible for drug targeting, internalization, or transfection. Conventional wisdom dictates that such an entity be classified as the active substance or prodrug for regulatory purposes and be subject to one set of specifications for the entire molecule. The fact remains, however, that only a discrete part of this prodrug is responsible for the therapeutic effect, and a similar effect may be obtained by physically entrapping the drug as opposed to covalent conjugation. The situation is further complicated when fusion proteins are used as a combination of drug and delivery system or when the excipients themselves", "title": "" }, { "docid": "7363b433f17e1f3dfecc805b58a8706b", "text": "Mobile Edge Computing (MEC) consists of deploying computing resources (CPU, storage) at the edge of mobile networks; typically near or with eNodeBs. Besides easing the deployment of applications and services requiring low access to the remote server, such as Virtual Reality and Vehicular IoT, MEC will enable the development of context-aware and context-optimized applications, thanks to the Radio API (e.g. information on user channel quality) exposed by eNodeBs. Although ETSI is defining the architecture specifications, solutions to integrate MEC to the current 3GPP architecture are still open. In this paper, we fill this gap by proposing and implementing a Software Defined Networking (SDN)-based MEC framework, compliant with both ETSI and 3GPP architectures. It provides the required data-plane flexibility and programmability, which can on-the-fly improve the latency as a function of the network deployment and conditions. To illustrate the benefit of using SDN concept for the MEC framework, we present the details of software architecture as well as performance evaluations.", "title": "" }, { "docid": "93dd0ad4eb100d4124452e2f6626371d", "text": "The role of background music in audience responses to commercials (and other marketing elements) has received increasing attention in recent years. This article extends the discussion of music’s influence in two ways: (1) by using music theory to analyze and investigate the effects of music’s structural profiles on consumers’ moods and emotions and (2) by examining the relationship between music’s evoked moods that are congruent versus incongruent with the purchase occasion and the resulting effect on purchase intentions. The study reported provides empirical support for the notion that when music is used to evoke emotions congruent with the symbolic meaning of product purchase, the likelihood of purchasing is enhanced. D 2003 Elsevier Inc. All rights reserved.", "title": "" } ]
scidocsrr
9dc2ecee36716e1675e10e3cf1d6c42c
Collaboratively We Share, But Differently
[ { "docid": "d593c18bf87daa906f83d5ff718bdfd0", "text": "Information and communications technologies (ICTs) have enabled the rise of so-called “Collaborative Consumption” (CC): the peer-to-peer-based activity of obtaining, giving, or sharing the access to goods and services, coordinated through community-based online services. CC has been expected to alleviate societal problems such as hyper-consumption, pollution, and poverty by lowering the cost of economic coordination within communities. However, beyond anecdotal evidence, there is a dearth of understanding why people participate in CC. Therefore, in this article we investigate people’s motivations to participate in CC. The study employs survey data (N = 168) gathered from people registered onto a CC site. The results show that participation in CC is motivated by many factors such as its sustainability, enjoyment of the activity as well as economic gains. An interesting detail in the result is that sustainability is not directly associated with participation unless it is at the same time also associated with positive attitudes towards CC. This suggests that sustainability might only be an important factor for those people for whom ecological consumption is important. Furthermore, the results suggest that in CC an attitudebehavior gap might exist; people perceive the activity positively and say good things about it, but this good attitude does not necessary translate into action. Introduction", "title": "" }, { "docid": "a8e32745fb30f940bf8dd5aec22cc42a", "text": "Purpose – The purpose of this paper is to review what we know – and don’t know – about Generation Y’s use of social media and to assess the implications for individuals, firms and society. Design/methodology/approach – The paper distinguishes Generation Y from other cohorts in terms of systematic differences in values, preferences and behavior that are stable over time (as opposed to maturational or other differences). It describes their social media use and highlights evidence of intra-generational variance arising from environmental factors (including economic, cultural, technological and political/legal factors) and individual factors. Individual factors include stable factors (including socio-economic status, age and lifecycle stage) and dynamic, endogenous factors (including goals, emotions, and social norms).The paper discusses how Generation Y’s use of social media influences individuals, firms and society. It develops managerial implications and a research agenda. Findings – Prior research on the social media use of Generation Y raises more questions than it answers. It: focuses primarily on the USA and/or (at most) one other country, ignoring other regions with large and fast-growing Generation Y populations where social-media use and its determinants may differ significantly; tends to study students whose behaviors may change over their life cycle stages; relies on self-reports by different age groups to infer Generation Y’s social media use; and does not examine the drivers and outcomes of social-media use. This paper’s conceptual framework yields a detailed set of research questions. Originality/value – This paper provides a conceptual framework for considering the antecedents and consequences of Generation Y’s social media usage. It identifies unanswered questions about Generation Y’s use of social media, as well as practical insights for managers.", "title": "" } ]
[ { "docid": "91c0bd1c3faabc260277c407b7c6af59", "text": "In this paper, we consider the Direct Perception approach for autonomous driving. Previous efforts in this field focused more on feature extraction of the road markings and other vehicles in the scene rather than on the autonomous driving algorithm and its performance under realistic assumptions. Our main contribution in this paper is introducing a new, more robust, and more realistic Direct Perception framework and corresponding algorithm for autonomous driving. First, we compare the top 3 Convolutional Neural Networks (CNN) models in the feature extraction competitions and test their performance for autonomous driving. The experimental results showed that GoogLeNet performs the best in this application. Subsequently, we propose a deep learning based algorithm for autonomous driving, and we refer to our algorithm as GoogLenet for Autonomous Driving (GLAD). Unlike previous efforts, GLAD makes no unrealistic assumptions about the autonomous vehicle or its surroundings, and it uses only five affordance parameters to control the vehicle as compared to the 14 parameters used by prior efforts. Our simulation results show that the proposed GLAD algorithm outperforms previous Direct Perception algorithms both on empty roads and while driving with other surrounding vehicles.", "title": "" }, { "docid": "8ec1bc66be78beba90418db48eee1222", "text": "An evolutionary perspective offers novel insights into some major obstacles to achieving happiness. Impediments include large discrepancies between modern and ancestral environments, the existence of evolved mechanisms \"designed\" to produce subjective distress, and the fact that evolution by selection has produced competitive mechanisms that function to benefit one person at the expense of others. On the positive side, people also possess evolved mechanisms that produce deep sources of happiness: those for mating bonds, deep friendship, close kinship, and cooperative coalitions. Understanding these psychological mechanisms--the selective processes that designed them, their evolved functions, and the contexts governing their activation--offers the best hope for holding some evolved mechanisms in check and selectively activating others to produce an overall increment in human happiness.", "title": "" }, { "docid": "f79ce505400f6a7fe087d9466c026c22", "text": "A chicken manure management process was carried out through co-conversion of Hermetia illucens L. larvae (BSFL) with functional bacteria for producing larvae as feed stuff and organic fertilizer. Thirteen days co-conversion of 1000 kg of chicken manure inoculated with one million 6-day-old BSFL and 109 CFU Bacillus subtilis BSF-CL produced aging larvae, followed by eleven days of aerobic fermentation inoculated with the decomposing agent to maturity. 93.2 kg of fresh larvae were harvested from the B. subtilis BSF-CL-inoculated group, while the control group only harvested 80.4 kg of fresh larvae. Chicken manure reduction rate of the B. subtilis BSF-CL-inoculated group was 40.5%, while chicken manure reduction rate of the control group was 35.8%. The weight of BSFL increased by 15.9%, BSFL conversion rate increased by 12.7%, and chicken manure reduction rate increased by 13.4% compared to the control (no B. subtilis BSF-CL). The residue inoculated with decomposing agent had higher maturity (germination index >92%), compared with the no decomposing agent group (germination index ∼86%). 
The activity patterns of different enzymes further indicated that its production was more mature and stable than that of the no decomposing agent group. Physical and chemical production parameters showed that the residue inoculated with the decomposing agent was more suitable for organic fertilizer than the no decomposing agent group. Both, the co-conversion of chicken manure by BSFL with its synergistic bacteria and the aerobic fermentation with the decomposing agent required only 24 days. The results demonstrate that co-conversion process could shorten the processing time of chicken manure compared to traditional compost process. Gut bacteria could enhance manure conversion and manure reduction. We established efficient manure co-conversion process by black soldier fly and bacteria and harvest high value-added larvae mass and biofertilizer.", "title": "" }, { "docid": "725248d21a4c1adcc5c26203990170b8", "text": "Digital pathology has advanced substantially over the last decade however tumor localization continues to be a challenging problem due to highly complex patterns and textures in the underlying tissue bed. The use of convolutional neural networks (CNNs) to analyze such complex images has been well adopted in digital pathology. However in recent years, the architecture of CNNs have altered with the introduction of inception modules which have shown great promise for classification tasks. In this paper, we propose a modified “transition” module which learns global average pooling layers from filters of varying sizes to encourage class-specific filters at multiple spatial resolutions. We demonstrate the performance of the transition module in AlexNet and ZFNet, for classifying breast tumors in two independent datasets of scanned histology sections, of which the transition module was superior.", "title": "" }, { "docid": "b25cfcd6ceefffe3039bb5a6a53e216c", "text": "With the increasing applications in the domains of ubiquitous and context-aware computing, Internet of Things (IoT) are gaining importance. In IoTs, literally anything can be part of it, whether it is sensor nodes or dumb objects, so very diverse types of services can be produced. In this regard, resource management, service creation, service management, service discovery, data storage, and power management would require much better infrastructure and sophisticated mechanism. The amount of data IoTs are going to generate would not be possible for standalone power-constrained IoTs to handle. Cloud computing comes into play here. Integration of IoTs with cloud computing, termed as Cloud of Things (CoT) can help achieve the goals of envisioned IoT and future Internet. This IoT-Cloud computing integration is not straight-forward. It involves many challenges. One of those challenges is data trimming. Because unnecessary communication not only burdens the core network, but also the data center in the cloud. For this purpose, data can be preprocessed and trimmed before sending to the cloud. This can be done through a Smart Gateway, accompanied with a Smart Network or Fog Computing. In this paper, we have discussed this concept in detail and present the architecture of Smart Gateway with Fog Computing. We have tested this concept on the basis of Upload Delay, Synchronization Delay, Jitter, Bulk-data Upload Delay, and Bulk-data Synchronization Delay.", "title": "" }, { "docid": "1c0fbcf65e0a2908811ae70b206df298", "text": "RF Feedback The basis of this technique is similar to its audio-frequency counterpart. 
A portion of the RF-output signal from the amplifier is fed back to, and subtracted from, the RF-input signal without detection or downconversion. Considerable care must be taken when using feedback at RF as the delays involved must be small to ensure stability. In addition, the loss of gain at RF is generally a more significant sacrifice than it is at audio frequencies. For these reasons, the use of RF feedback in discrete circuits is usually restricted to HF and lower VHF frequencies [99]. It can be applied within MMIC devices, however, well into the microwave region. In an active RF feedback system, the voltage divider of a conventional passive-feedback system is replaced by an active (amplifier) stage. The gain in the feedback path reduces the power dissipated in the feedback components. While such systems demonstrate IMD reduction [105], they tend to work best at a specific signal level.", "title": "" }, { "docid": "a008e9f817c6c4658c9c739d0d7fb6a4", "text": "BI (Business Intelligence) is an important discipline for companies and the challenges it faces are strategic. A central concept in BI is the data warehouse, which is a set of consolidated data from heterogeneous sources (usually databases in 3NF). To model the data warehouse, the Inmon and Kimball approaches are the most used. Both solutions monopolize the BI market However, a third modeling approach called “Data Vault” of its creator Linstedt, is gaining ground from year to year. It allows building a data warehouse of raw (unprocessed) data from heterogeneous sources. The purpose of this paper is to present a comparative study of the three precedent approaches. First, we study each approach separately and then we draw a comparison between them. Finally, we include recommendations for selecting the best approach before concluding this paper.", "title": "" }, { "docid": "78db8b57c3221378847092e5283ad754", "text": "This paper analyzes correlations and causalities between Bitcoin market indicators and Twitter posts containing emotional signals on Bitcoin. Within a timeframe of 104 days (November 23 2013 March 7 2014), about 160,000 Twitter posts containing ”bitcoin” and a positive, negative or uncertainty related term were collected and further analyzed. For instance, the terms ”happy”, ”love”, ”fun”, ”good”, ”bad”, ”sad” and ”unhappy” represent positive and negative emotional signals, while ”hope”, ”fear” and ”worry” are considered as indicators of uncertainty. The static (daily) Pearson correlation results show a significant positive correlation between emotional tweets and the close price, trading volume and intraday price spread of Bitcoin. However, a dynamic Granger causality analysis does not confirm a causal effect of emotional Tweets on Bitcoin market values. To the contrary, the analyzed data shows that a higher Bitcoin trading volume Granger causes more signals of uncertainty within a 24 to 72hour timeframe. This result leads to the interpretation that emotional sentiments rather mirror the market than that they make it predictable. Finally, the conclusion of this paper is that the microblogging platform Twitter is Bitcoins virtual trading floor, emotionally reflecting its trading dynamics.2", "title": "" }, { "docid": "eba769c6246b44d8ed7e5f08aac17731", "text": "One hundred men, living in three villages in a remote region of the Eastern Highlands of Papua New Guinea were asked to judge the attractiveness of photographs of women who had undergone micrograft surgery to reduce their waist-to-hip ratios (WHRs). 
Micrograft surgery involves harvesting adipose tissue from the waist and reshaping the buttocks to produce a low WHR and an \"hourglass\" female figure. Men consistently chose postoperative photographs as being more attractive than preoperative photographs of the same women. Some women gained, and some lost weight, postoperatively, with resultant changes in body mass index (BMI). However, changes in BMI were not related to men's judgments of attractiveness. These results show that the hourglass female figure is rated as attractive by men living in a remote, indigenous community, and that when controlling for BMI, WHR plays a crucial role in their attractiveness judgments.", "title": "" }, { "docid": "fd552ab0c10bcbd35a18dbb1b3920d37", "text": "We propose the hypothesis that word etymology is useful for NLP applications as a bridge between languages. We support this hypothesis with experiments in crosslanguage (English-Italian) document categorization. In a straightforward bag-ofwords experimental set-up we add etymological ancestors of the words in the documents, and investigate the performance of a model built on English data, on Italian test data (and viceversa). The results show not only statistically significant, but a large improvement – a jump of almost 40 points in F1-score – over the raw (vanilla bag-ofwords) representation.", "title": "" }, { "docid": "6f176e780d94a8fa8c5b1d6d364c4363", "text": "Current uses of smartwatches are focused solely around the wearer's content, viewed by the wearer alone. When worn on a wrist, however, watches are often visible to many other people, making it easy to quickly glance at their displays. We explore the possibility of extending smartwatch interactions to turn personal wearables into more public displays. We begin opening up this area by investigating fundamental aspects of this interaction form, such as the social acceptability and noticeability of looking at someone else's watch, as well as the likelihood of a watch face being visible to others. We then sketch out interaction dimensions as a design space, evaluating each aspect via a web-based study and a deployment of three potential designs. We conclude with a discussion of the findings, implications of the approach and ways in which designers in this space can approach public wrist-worn wearables.", "title": "" }, { "docid": "587f1510411636090bc192b1b9219b58", "text": "Creativity can be considered one of the key competencies for the twenty-first century. It provides us with the capacity to deal with the opportunities and challenges that are part of our complex and fast-changing world. The question as to what facilitates creative cognition-the ability to come up with creative ideas, problem solutions and products-is as old as the human sciences, and various means to enhance creative cognition have been studied. Despite earlier scientific studies demonstrating a beneficial effect of music on cognition, the effect of music listening on creative cognition has remained largely unexplored. The current study experimentally tests whether listening to specific types of music (four classical music excerpts systematically varying on valance and arousal), as compared to a silence control condition, facilitates divergent and convergent creativity. Creativity was higher for participants who listened to 'happy music' (i.e., classical music high on arousal and positive mood) while performing the divergent creativity task, than for participants who performed the task in silence. 
No effect of music was found for convergent creativity. In addition to the scientific contribution, the current findings may have important practical implications. Music listening can be easily integrated into daily life and may provide an innovative means to facilitate creative cognition in an efficient way in various scientific, educational and organizational settings when creative thinking is needed.", "title": "" }, { "docid": "496501d679734b90dd9fd881389fcc34", "text": "Learning is often identified with the acquisition, encoding, or construction of new knowledge, while retrieval is often considered only a means of assessing knowledge, not a process that contributes to learning. Here, we make the case that retrieval is the key process for understanding and for promoting learning. We provide an overview of recent research showing that active retrieval enhances learning, and we highlight ways researchers have sought to extend research on active retrieval to meaningful learning—the learning of complex educational materials as assessed on measures of inference making and knowledge application. However, many students lack metacognitive awareness of the benefits of practicing active retrieval. We describe two approaches to addressing this problem: classroom quizzing and a computer-based learning program that guides students to practice retrieval. Retrieval processes must be considered in any analysis of learning, and incorporating retrieval into educational activities represents a powerful way to enhance learning.", "title": "" }, { "docid": "6e07085f81dc4f6892e0f2aba7a8dcdd", "text": "With the rapid growth in the number of spiraling network users and the increase in the use of communication technologies, the multi-server environment is the most common environment for widely deployed applications. Reddy et al. recently showed that Lu et al.'s biometric-based authentication scheme for multi-server environment was insecure, and presented a new authentication and key-agreement scheme for the multi-server. Reddy et al. continued to assert that their scheme was more secure and practical. After a careful analysis, however, their scheme still has vulnerabilities to well-known attacks. In this paper, the vulnerabilities of Reddy et al.'s scheme such as the privileged insider and user impersonation attacks are demonstrated. A proposal is then presented of a new biometric-based user authentication scheme for a key agreement and multi-server environment. Lastly, the authors demonstrate that the proposed scheme is more secure using widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool, and that it serves to satisfy all of the required security properties.", "title": "" }, { "docid": "7ed58e8ec5858bdcb5440123aea57bb1", "text": "The demand for cloud computing is increasing because of the popularity of digital devices and the wide use of the Internet. Among cloud computing services, most consumers use cloud storage services that provide mass storage. This is because these services give them various additional functions as well as storage. It is easy to access cloud storage services using smartphones. With increasing utilization, it is possible for malicious users to abuse cloud storage services. Therefore, a study on digital forensic investigation of cloud storage services is necessary. 
This paper proposes new procedure for investigating and analyzing the artifacts of all accessible devices, such as Windows, Mac, iPhone, and Android smartphone.", "title": "" }, { "docid": "fd0c32b1b4e52f397d0adee5de7e381c", "text": "Context. Electroencephalography (EEG) is a complex signal and can require several years of training, as well as advanced signal processing and feature extraction methodologies to be correctly interpreted. Recently, deep learning (DL) has shown great promise in helping make sense of EEG signals due to its capacity to learn good feature representations from raw data. Whether DL truly presents advantages as compared to more traditional EEG processing approaches, however, remains an open question. Objective. In this work, we review 156 papers that apply DL to EEG, published between January 2010 and July 2018, and spanning different application domains such as epilepsy, sleep, braincomputer interfacing, and cognitive and affective monitoring. We extract trends and highlight interesting approaches from this large body of literature in order to inform future research and formulate recommendations. Methods. Major databases spanning the fields of science and engineering were queried to identify relevant studies published in scientific journals, conferences, and electronic preprint repositories. Various data items were extracted for each study pertaining to 1) the data, 2) the preprocessing methodology, 3) the DL design choices, 4) the results, and 5) the reproducibility of the experiments. These items were then analyzed one by one to uncover trends. Results. Our analysis reveals that the amount of EEG data used across studies varies from less than ten minutes to thousands of hours, while the number of samples seen during training by a network varies from a few dozens to several millions, depending on how epochs are extracted. Interestingly, we saw that more than half the studies used publicly available data and that there has also been a clear shift from intra-subject to inter-subject approaches over the last few years. About 40% of the studies used convolutional neural networks (CNNs), while 14% used recurrent neural networks (RNNs), most often with a total of 3 to 10 layers. Moreover, almost one-half of the studies trained their models on raw or preprocessed EEG time series. Finally, the median gain in accuracy of DL approaches over traditional baselines was 5.4% across all relevant studies. More importantly, however, we noticed studies often suffer from poor reproducibility: a majority of papers would be hard or impossible to reproduce given the unavailability of their data and code. ∗The first two authors contributed equally to this work. Significance. To help the community progress and share work more effectively, we provide a list of recommendations for future studies. We also make our summary table of DL and EEG papers available and invite authors of published work to contribute to it directly.", "title": "" }, { "docid": "4406b7c9d53b895355fa82b11da21293", "text": "In today's scenario, World Wide Web (WWW) is flooded with huge amount of information. Due to growing popularity of the internet, finding the meaningful information among billions of information resources on the WWW is a challenging task. The information retrieval (IR) provides documents to the end users which satisfy their need of information. Search engine is used to extract valuable information from the internet. 
Web crawler is the principal part of search engine; it is an automatic script or program which can browse the WWW in automatic manner. This process is known as web crawling. In this paper, review on strategies of information retrieval in web crawling has been presented that are classifying into four categories viz: focused, distributed, incremental and hidden web crawlers. Finally, on the basis of user customized parameters the comparative analysis of various IR strategies has been performed.", "title": "" }, { "docid": "27d65b98233322f099fccc61838ce4ae", "text": "This article defines universal design for learning (UDL) and presents examples of how universally designed technology hardware and software applications benefit students with disabilities who are majoring in science, technology, engineering, or mathematics (STEM) majors. When digital technologies are developed without incorporating accessible design features, persons with disabilities cannot access required information to interact with the information society. However, when accessible technology and instruction are provided using UDL principles, research indicates that many students benefit with increased achievement. Learning through universally designed and accessible technology is essential for students with disabilities who, without access, would not gain the skills needed to complete their degrees and access employment and a life of self-sufficiency. UDL strategies enhance learning for all students, including students with disabilities who are majoring in STEM, which are among the most rigorous academic disciplines, but also among the most financially rewarding careers.", "title": "" }, { "docid": "7bd5a1ce9db81d50f1802db0a6623e92", "text": "Goal-Oriented (GO) Dialogue Systems, colloquially known as goal oriented chatbots, help users achieve a predefined goal (e.g. book a movie ticket) within a closed domain. A first step is to understand the user’s goal by using natural language understanding techniques. Once the goal is known, the bot must manage a dialogue to achieve that goal, which is conducted with respect to a learnt policy. The success of the dialogue system depends on the quality of the policy, which is in turn reliant on the availability of high-quality training data for the policy learning method, for instance Deep Reinforcement Learning. Due to the domain specificity, the amount of available data is typically too low to allow the training of good dialogue policies. In this master thesis we introduce a transfer learning method to mitigate the effects of the low in-domain data availability. Our transfer learning based approach improves the bot’s success rate by 20% in relative terms for distant domains and we more than double it for close domains, compared to the model without transfer learning. Moreover, the transfer learning chatbots learn the policy up to 5 to 10 times faster. Finally, as the transfer learning approach is complementary to additional processing such as warm-starting, we show that their joint application gives the best outcomes.", "title": "" }, { "docid": "257d1de3b45533ca49e0a78ba55c841e", "text": "Machine learning (ML) is the fastest growing field in computer science, and health informatics is among the greatest challenges. The goal of ML is to develop algorithms which can learn and improve over time and can be used for predictions. 
Most ML researchers concentrate on automatic machine learning (aML), where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from big data with many training sets. However, in the health domain, sometimes we are confronted with a small number of data sets or rare events, where aML-approaches suffer from insufficient training samples. Here interactive machine learning (iML) may be of help, having its roots in reinforcement learning, preference learning, and active learning. The term iML is not yet well used, so we define it as “algorithms that can interact with agents and can optimize their learning behavior through these interactions, where the agents can also be human.” This “human-in-the-loop” can be beneficial in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization of health data, where human expertise can help to reduce an exponential search space through heuristic selection of samples. Therefore, what would otherwise be an NP-hard problem reduces greatly in complexity through the input and the assistance of a human agent involved in the learning phase.", "title": "" } ]
scidocsrr
6a996cc4c5ed7ee8679067b0f5995b0c
Door Knob Hand Recognition System
[ { "docid": "4f58172c8101b67b9cd544b25d09f2e2", "text": "For years, researchers in face recognition area have been representing and recognizing faces based on subspace discriminant analysis or statistical learning. Nevertheless, these approaches are always suffering from the generalizability problem. This paper proposes a novel non-statistics based face representation approach, local Gabor binary pattern histogram sequence (LGBPHS), in which training procedure is unnecessary to construct the face model, so that the generalizability problem is naturally avoided. In this approach, a face image is modeled as a \"histogram sequence\" by concatenating the histograms of all the local regions of all the local Gabor magnitude binary pattern maps. For recognition, histogram intersection is used to measure the similarity of different LGBPHSs and the nearest neighborhood is exploited for final classification. Additionally, we have further proposed to assign different weights for each histogram piece when measuring two LGBPHSes. Our experimental results on AR and FERET face database show the validity of the proposed approach especially for partially occluded face images, and more impressively, we have achieved the best result on FERET face database.", "title": "" } ]
[ { "docid": "a7be4f9177e6790756b7ede4a2d9ca79", "text": "Metabolomics, or the comprehensive profiling of small molecule metabolites in cells, tissues, or whole organisms, has undergone a rapid technological evolution in the past two decades. These advances have led to the application of metabolomics to defining predictive biomarkers for incident cardiometabolic diseases and, increasingly, as a blueprint for understanding those diseases' pathophysiologic mechanisms. Progress in this area and challenges for the future are reviewed here.", "title": "" }, { "docid": "c724fdcf7f58121ff6ad886df68e2725", "text": "The Internet of Things (IoT) is an emerging paradigm where smart objects are seamlessly connected to the overall Internet and can potentially cooperate to achieve common objectives such as supporting innovative home automation services. With reference to such a scenario, this paper presents an Intrusion Detection System (IDS) framework for IoT empowered by IPv6 over low-power personal area network (6LoWPAN) devices. In fact, 6LoWPAN is an interesting protocol supporting the realization of IoT in a resource constrained environment. 6LoWPAN devices are vulnerable to attacks inherited from both the wireless sensor networks and the Internet protocols. The proposed IDS framework which includes a monitoring system and a detection engine has been integrated into the network framework developed within the EU FP7 project `ebbits'. A penetration testing (PenTest) system had been used to evaluate the performance of the implemented IDS framework. Preliminary tests revealed that the proposed framework represents a promising solution for ensuring better security in 6LoWPANs.", "title": "" }, { "docid": "24a3924f15cb058668e8bcb7ba53ee66", "text": "This paper presents a latest survey of different technologies used in medical image segmentation using Fuzzy C Means (FCM).The conventional fuzzy c-means algorithm is an efficient clustering algorithm that is used in medical image segmentation. To update the study of image segmentation the survey has performed. The techniques used for this survey are Brain Tumor Detection Using Segmentation Based on Hierarchical Self Organizing Map, Robust Image Segmentation in Low Depth Of Field Images, Fuzzy C-Means Technique with Histogram Based Centroid Initialization for Brain Tissue Segmentation in MRI of Head Scans.", "title": "" }, { "docid": "f48712851095fa3b33898c38ebcfaa95", "text": "Most existing image-based crop disease recognition algorithms rely on extracting various kinds of features from leaf images of diseased plants. They have a common limitation as the features selected for discriminating leaf images are usually treated as equally important in the classification process. We propose a novel cucumber disease recognition approach which consists of three pipelined procedures: segmenting diseased leaf images by K-means clustering, extracting shape and color features from lesion information, and classifying diseased leaf images using sparse representation (SR). A major advantage of this approach is that the classification in the SR space is able to effectively reduce the computation cost and improve the recognition performance. We perform a comparison with four other feature extraction based methods using a leaf image dataset on cucumber diseases. The proposed approach is shown to be effective in recognizing seven major cucumber diseases with an overall recognition rate of 85.7%, higher than those of the other methods. 2017 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "b4d4a42e261a5272a7865065b74ff91b", "text": "The acquisition of Magnetic Resonance Imaging (MRI) is inherently slow. Inspired by recent advances in deep learning, we propose a framework for reconstructing MR images from undersampled data using a deep cascade of convolutional neural networks to accelerate the data acquisition process. We show that for Cartesian undersampling of 2D cardiac MR images, the proposed method outperforms the state-of-the-art compressed sensing approaches, such as dictionary learning-based MRI (DLMRI) reconstruction, in terms of reconstruction error, perceptual quality and reconstruction speed for both 3-fold and 6-fold undersampling. Compared to DLMRI, the error produced by the method proposed is approximately twice as small, allowing to preserve anatomical structures more faithfully. Using our method, each image can be reconstructed in 23ms, which is fast enough to enable real-time applications.", "title": "" }, { "docid": "a8d96305af1a6371ae616e82d246ce5b", "text": "The past year alone has seen unprecedented leaps in the area of learning-based image translation, namely Cycle-GAN, by Zhu et al. But experiments so far have been tailored to merely two domains at a time, and scaling them to more would require a quadratic number of models to be trained. And with two-domain models taking days to train on current hardware, the number of domains quickly becomes limited by the time and resources required to process them. In this paper, we propose a multi-component image translation model and training scheme which scales linearly - both in resource consumption and time required - with the number of domains. We demonstrate its capabilities on a dataset of paintings by 14 different artists and on images of the four different seasons in the Alps. Note that 14 data groups would need (14 choose 2) = 91 different CycleGAN models: a total of 182 generator/discriminator pairs; whereas our model requires only 14 generator/discriminator pairs.", "title": "" }, { "docid": "6ccfe86f2a07dc01f87907855f6cb337", "text": "Historically, retention of distance learners has been problematic with dropout rates disproportionately high compared to traditional course settings (Richards & Ridley, 1997; Wetzel, Radtke, & Stern, 1994). Dropout rates of 30 to 50% have been common (Moore & Kearsley, 1996). Students may experience feelings of isolation in distance courses compared to prior face-to-face educational experiences (Shaw & Polovina, 1999). If the distance courses feature limited contact with instructors and fellow students, the result of this isolation can be unfinished courses or degrees (Keegan, 1990). Student satisfaction in traditional learning environments has been overlooked in the past (Astin, 1993; DeBourgh, 1999; Navarro & Shoemaker, 2000). Student satisfaction has also not been given the proper attention in distance learning environments (Biner, Dean, & Mellinger, 1994). Richards and Ridley (1997) suggested further research is necessary to study factors affecting student enrollment and satisfaction. Prior studies in classroom-based courses have shown there is a high correlation between student satisfaction and retention (Astin, 1993; Edwards & Waters, 1982). This high correlation has also been found in studies in which distance learners were the target population (Bailey, Bauman, & Lata, 1998). 
The purpose of this study was to identify factors influencing student satisfaction in online courses, and to create and validate an instrument to measure student satisfaction in online courses.", "title": "" }, { "docid": "92ac3bfdcf5e554152c4ce2e26b77315", "text": "How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions is two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.", "title": "" }, { "docid": "3779a8241d4bc109e863fe23bdb7c1fd", "text": "Purpose – The purpose of the paper is to contribute to the knowledge of how relationship value, trust, commitment, satisfaction and loyalty intentions are defined and relate to each other. It explores these relationships in the business-to-business (B2B) context by analysing manufacturing companies regarding to their main supplier. Design/methodology/approach – After the literature review and several in-depth interviews, a method of empirical analysis consisting of quantitative intervention with an ad hoc survey using a structured questionnaire has been developed. Structural equations modeling is used to contrast the hypotheses on the links between the constructs analysed. Findings – Confirmatory factor analysis provided satisfactory results. With regard to the direct effects of the relationship value, the three relationships being considered were verified: relationship value has a positive influence on trust, commitment and satisfaction towards the supplier. Also, and as already contrasted in previous studies, trust has a direct, positive effect on commitment. In addition, and regarding to loyalty antecedents, data did not confirm that greater trust would increase loyalty but commitment did, leading to the conclusion that the effect of trust on loyalty is only indirect through the effect it has on commitment. Loyalty was also positively affected by satisfaction with the supplier. Research limitations/implications – Limitations of this paper open lines for future research. First, it considers that future research could include other variables affecting long-term relationships. Second, longitudinal studies could serve to enrich the results and illustrate the complexity of the direction in the links among the variables and take into account dynamics. Third, a customer’s perspective in the perception of value has been adopted in this paper. Finally, relationship value has been operationalised as reflective and a formative approach could be adopted. Implications for managers are in line with the detected importance of satisfaction and commitment as key factors because of their impact on intention to continue and expand business with the supplier. 
Moreover, manufacturers should recognize the role of assessing and building relationship value with their partners as it has an impact – direct or indirect – on intentions to stay in the relationship. Originality/value – In comparison, far less research has been done in the area of relationship value. This paper emphasizes the role of perceived relationship value in relationship B2B marketing studies providing a model and contributing to relationship management.", "title": "" }, { "docid": "21884cc698736f13736dcc889b8057a3", "text": "Although deep convolutional neural networks (CNNs) have achieved remarkable results on object detection and segmentation, pre- and post-processing steps such as region proposals and non-maximum suppression (NMS) have been required. These steps result in high computational complexity and sensitivity to hyperparameters, e.g. thresholds for NMS. In this work, we propose a novel end-to-end trainable deep neural network architecture, which consists of convolutional and recurrent layers, that generates the correct number of object instances and their bounding boxes (or segmentation masks) given an image, using only a single network evaluation without any pre- or post-processing steps. We have tested on detecting digits in multi-digit images synthesized using MNIST, automatically segmenting digits in these images, and detecting cars in the KITTI benchmark dataset. The proposed approach outperforms a strong CNN baseline on the synthesized digits datasets and shows promising results on KITTI car detection.", "title": "" }, { "docid": "e4ca7c16acd9b71a5ae7f1ee29101782", "text": "Recently, distributed generators and sensitive loads have been widely used. They enable a solid-state circuit breaker (SSCB), which is an imperative device to get acceptable power quality of ac power grid systems. The existing ac SSCB composed of a silicon-controlled rectifier requires some auxiliary mechanical devices to achieve the reclosing operation before fault recovery. However, the new ac SSCB can achieve a quick breaking operation and then be reclosed with no auxiliary mechanical devices or complex control even under sustained short-circuit fault because the commutation capacitors are charged naturally without any complex control of main thyristors and auxiliary ones. The performance features of the proposed ac SSCB are verified through the experiment results of the short-circuit faults.", "title": "" }, { "docid": "7c106fc6fc05ec2d35b89a1dec8e2ca2", "text": "OBJECTIVE\nCurrent estimates of the prevalence of depression during pregnancy vary widely. A more precise estimate is required to identify the level of disease burden and develop strategies for managing depressive disorders. The objective of this study was to estimate the prevalence of depression during pregnancy by trimester, as detected by validated screening instruments (ie, Beck Depression Inventory, Edinburgh Postnatal Depression Score) and structured interviews, and to compare the rates among instruments.\n\n\nDATA SOURCES\nObservational studies and surveys were searched in MEDLINE from 1966, CINAHL from 1982, EMBASE from 1980, and HealthSTAR from 1975.\n\n\nMETHODS OF STUDY SELECTION\nA validated study selection/data extraction form detailed acceptance criteria. Numbers and percentages of depressed patients, by weeks of gestation or trimester, were reported.\n\n\nTABULATION, INTEGRATION, AND RESULTS\nTwo reviewers independently extracted data; a third party resolved disagreement. 
Two raters assessed quality by using a 12-point checklist. A random effects meta-analytic model produced point estimates and 95% confidence intervals (CIs). Heterogeneity was examined with the chi(2) test (no systematic bias detected). Funnel plots and Begg-Mazumdar test were used to assess publication bias (none found). Of 714 articles identified, 21 (19,284 patients) met the study criteria. Quality scores averaged 62%. Prevalence rates (95% CIs) were 7.4% (2.2, 12.6), 12.8% (10.7, 14.8), and 12.0% (7.4, 16.7) for the first, second, and third trimesters, respectively. Structured interviews found lower rates than the Beck Depression Inventory but not the Edinburgh Postnatal Depression Scale.\n\n\nCONCLUSION\nRates of depression, especially during the second and third trimesters of pregnancy, are substantial. Clinical and economic studies to estimate maternal and fetal consequences are needed.", "title": "" }, { "docid": "14d6f8fe1888dd7e11efd3b57c8bc1e5", "text": "Model-based reinforcement learning approaches carry the promise of being data efficient. However, due to challenges in learning dynamics models that sufficiently match the real-world dynamics, they struggle to achieve the same asymptotic performance as model-free methods. We propose Model-Based MetaPolicy-Optimization (MB-MPO), an approach that foregoes the strong reliance on accurate learned dynamics models. Using an ensemble of learned dynamic models, MB-MPO meta-learns a policy that can quickly adapt to any model in the ensemble with one policy gradient step. This steers the meta-policy towards internalizing consistent dynamics predictions among the ensemble while shifting the burden of behaving optimally w.r.t. the model discrepancies towards the adaptation step. Our experiments show that MB-MPO is more robust to model imperfections than previous model-based approaches. Finally, we demonstrate that our approach is able to match the asymptotic performance of model-free methods while requiring significantly less experience.", "title": "" }, { "docid": "d4f806a58d4cdc59cae675a765d4c6bc", "text": "Our study examines whether ownership structure and boardroom characteristics have an effect on corporate financial fraud in China. The data come from the enforcement actions of the Chinese Securities Regulatory Commission (CSRC). The results from univariate analyses, where we compare fraud and nofraud firms, show that ownership and board characteristics are important in explaining fraud. However, using a bivariate probit model with partial observability we demonstrate that boardroom characteristics are important, while the type of owner is less relevant. In particular, the proportion of outside directors, the number of board meetings, and the tenure of the chairman are associated with the incidence of fraud. Our findings have implications for the design of appropriate corporate governance systems for listed firms. Moreover, our results provide information that can inform policy debates within the CSRC. D 2005 Elsevier B.V. All rights reserved. JEL classification: G34", "title": "" }, { "docid": "ff8089430cdae3e733b06a7aa1b759b4", "text": "We derive a model for consumer loan default and credit card expenditure. The default model is based on statistical models for discrete choice, in contrast to the usual procedure of linear discriminant analysis. The model is then extended to incorporate the default probability in a model of expected profit. 
The technique is applied to a large sample of applications and expenditure from a major credit card company. The nature of the data mandates the use of models of sample selection for estimation. The empirical model for expected profit produces an optimal acceptance rate for card applications which is far higher than the observed rate used by the credit card vendor based on the discriminant analysis. I am grateful to Terry Seaks for valuable comments on an earlier draft of this paper and to Jingbin Cao for his able research assistance. The provider of the data and support for this project has requested anonymity, so I must thank them as such. Their help and support are gratefully acknowledged. Participants in the applied econometrics workshop at New York University also provided useful commentary.", "title": "" }, { "docid": "9776f7ab59ffe29f158c23c1f8df70df", "text": "Rhythm game as one of the most-played game genres has its own attractiveness. Each song in the game gives its player new excitement to try another song or another difficulty level. However, behind every song being played is a lot of work. A beatmap should be created in order for a song to be played in the game. This paper presents an alternate way to create a beatmap that is considered playable for Osu Game utilizing beat and melody detection using machine learning approach and SVM as its learning method. The steps consists of notes detection and notes placement. Notes detection basically consists of features extraction from an audio file using DSP Java Library and learning process using Weka and LibSVM. However, detect the presence of notes only does not solve anything. The notes should be placed in the game using PRAAT and Note Placement Algorithm. From this process, a beatmap can be created from a song in about 3 minutes and the accuracy of the note detection is 86%.", "title": "" }, { "docid": "eb3a993e5302a45c11daa8d3482468c7", "text": "Network structure determination is an important issue in pattern classification based on a probabilistic neural network. In this study, a supervised network structure determination algorithm is proposed. The proposed algorithm consists of two parts and runs in an iterative way. The first part identifies an appropriate smoothing parameter using a genetic algorithm, while the second part determines suitable pattern layer neurons using a forward regression orthogonal algorithm. The proposed algorithm is capable of offering a fairly small network structure with satisfactory classification accuracy.", "title": "" }, { "docid": "8cb5659bdbe9d376e2a3b0147264d664", "text": "Group brainstorming is widely adopted as a design method in the domain of software development. However, existing brainstorming literature has consistently proven group brainstorming to be ineffective under the controlled laboratory settings. Yet, electronic brainstorming systems informed by the results of these prior laboratory studies have failed to gain adoption in the field because of the lack of support for group well-being and member support. Therefore, there is a need to better understand brainstorming in the field. In this work, we seek to understand why and how brainstorming is actually practiced, rather than how brainstorming practices deviate from formal brainstorming rules, by observing brainstorming meetings at Microsoft. 
The results of this work show that, contrary to the conventional brainstorming practices, software teams at Microsoft engage heavily in the constraint discovery process in their brainstorming meetings. We identified two types of constraints that occur in brainstorming meetings. Functional constraints are requirements and criteria that define the idea space, whereas practical constraints are limitations that prioritize the proposed solutions.", "title": "" }, { "docid": "2c8bfb9be08edfdac6d335bdcffe204c", "text": "Undoubtedly, the age of big data has opened new options for natural disaster management, primarily because of the varied possibilities it provides in visualizing, analyzing, and predicting natural disasters. From this perspective, big data has radically changed the ways through which human societies adopt natural disaster management strategies to reduce human suffering and economic losses. In a world that is now heavily dependent on information technology, the prime objective of computer experts and policy makers is to make the best of big data by sourcing information from varied formats and storing it in ways that it can be effectively used during different stages of natural disaster management. This paper aimed at making a systematic review of the literature in analyzing the role of big data in natural disaster management and highlighting the present status of the technology in providing meaningful and effective solutions in natural disaster management. The paper has presented the findings of several researchers on varied scientific and technological perspectives that have a bearing on the efficacy of big data in facilitating natural disaster management. In this context, this paper reviews the major big data sources, the associated achievements in different disaster management phases, and emerging technological topics associated with leveraging this new ecosystem of Big Data to monitor and detect natural hazards, mitigate their effects, assist in relief efforts, and contribute to the recovery and reconstruction processes.", "title": "" } ]
scidocsrr
f7c386b5b0552f5907d9cc8473e299ce
Mixture of Expert Agents for Handling Imbalanced Data Sets
[ { "docid": "5a3b8a2ec8df71956c10b2eb10eabb99", "text": "During a project examining the use of machine learning techniques for oil spill detection, we encountered several essential questions that we believe deserve the attention of the research community. We use our particular case study to illustrate such issues as problem formulation, selection of evaluation measures, and data preparation. We relate these issues to properties of the oil spill application, such as its imbalanced class distribution, that are shown to be common to many applications. Our solutions to these issues are implemented in the Canadian Environmental Hazards Detection System (CEHDS), which is about to undergo field testing.", "title": "" }, { "docid": "9b2f4394cabd31008773049c32dea963", "text": "Twenty-two decision tree, nine statistical, and two neural network algorithms are compared on thirty-two datasets in terms of classification accuracy, training time, and (in the case of trees) number of leaves. Classification accuracy is measured by mean error rate and mean rank of error rate. Both criteria place a statistical, spline-based, algorithm called POLYCLSSS at the top, although it is not statistically significantly different from twenty other algorithms. Another statistical algorithm, logistic regression, is second with respect to the two accuracy criteria. The most accurate decision tree algorithm is QUEST with linear splits, which ranks fourth and fifth, respectively. Although spline-based statistical algorithms tend to have good accuracy, they also require relatively long training times. POLYCLASS, for example, is third last in terms of median training time. It often requires hours of training compared to seconds for other algorithms. The QUEST and logistic regression algorithms are substantially faster. Among decision tree algorithms with univariate splits, C4.5, IND-CART, and QUEST have the best combinations of error rate and speed. But C4.5 tends to produce trees with twice as many leaves as those from IND-CART and QUEST.", "title": "" } ]
[ { "docid": "8991e09ba79a32f3ec878a6db441e1ca", "text": "This paper presents a highway vehicle counting method in compressed domain, aiming at achieving acceptable estimation performance approaching the pixel-domain methods. Such a task essentially is challenging because the available information (e.g. motion vector) to describe vehicles in videos is quite limited and inaccurate, and the vehicle count in realistic traffic scenes always varies greatly. To tackle this issue, we first develop a batch of low-level features, which can be extracted from the encoding metadata of videos, to mitigate the informational insufficiency of compressed videos. Then we propose a Hierarchical Classification based Regression (HCR) model to estimate the vehicle count from features. HCR hierarchically divides the traffic scenes into different cases according to vehicle density, such that the broad-variation characteristics of traffic scenes can be better approximated. Finally, we evaluated the proposed method on the real highway surveillance videos. The results show that our method is very competitive to the pixel-domain methods, which can reach similar performance along with its lower complexity.", "title": "" }, { "docid": "da91dc8ab78a585b81fba42bed1a6af3", "text": "Integrating magnetic parasitics in the design of LCC resonant converters provides a solution to reduce parts count and increase power density. This paper provides an efficient design procedure for small planar transformers which integrate transformer leakage inductance and magnetizing inductance into the resonant tank by employing an accurate parasitic prediction model. Finite element simulations were used to create the models using Design of Experiment (DoE) methodology. A planar transformer prototype was designed and tested within a 2.5W LLC resonant converter and results under different operating modes are included to illustrate the resonant behaviour and to validate the presented design procedure.", "title": "" }, { "docid": "140d81bc2d9d125ed43946ddee94d2e4", "text": "Cluster analysis plays an important role in decision-making process for many knowledge-based systems. There exist a wide variety of different approaches for clustering applications including the heuristic techniques, probabilistic models, and traditional hierarchical algorithms. In this paper, a novel heuristic approach based on big bang–big crunch algorithm is proposed for clustering problems. The proposed method not only takes advantage of heuristic nature to alleviate typical clustering algorithms such as k-means, but it also benefits from the memory-based scheme as compared to its similar heuristic techniques. Furthermore, the performance of the proposed algorithm is investigated based on several benchmark test functions as well as on the well-known datasets. The experimental results show the significant superiority of the proposed method over the similar algorithms.", "title": "" }, { "docid": "ff619ce19b787d32aa78a6ac295d1f1d", "text": "Mullerian duct anomalies (MDAs) are rare, affecting approximately 1% of all women and about 3% of women with poor reproductive outcomes. These congenital anomalies usually result from one of the following categories of abnormalities of the mullerian ducts: failure of formation (no development or underdevelopment) or failure of fusion of the mullerian ducts. The American Fertility Society (AFS) classification of uterine anomalies is widely accepted and includes seven distinct categories. 
MR imaging has consolidated its role as the imaging modality of choice in the evaluation of MDA. MRI is capable of demonstrating the anatomy of the female genital tract remarkably well and is able to provide detailed images of the intra-uterine zonal anatomy, delineate the external fundal contour of the uterus, and comprehensively image the entire female pelvis in multiple imaging planes in a single examination. The purpose of this pictorial essay is to show the value of MRI in the diagnosis of MDA and to review the key imaging features of anomalies of formation and fusion, emphasizing the relevance of accurate diagnosis before therapeutic intervention.", "title": "" }, { "docid": "41438b1c19767e55e124847d97293c95", "text": "In this paper we discuss a stylized fact on long memory process of volatility cluster phenomena by using Minkowski metric for GARCH (1,1). Also presented result of minus sign of volatility in reversed direction of time scale. It is named as dark volatility or hidden risk fear field.", "title": "" }, { "docid": "18bd983abb9af56667ff476364c821f8", "text": "Lie is a false statement made with deliberate intent to deceive, this is intentional untruth. People use different technologies of lie detection as Pattern recognition is a science to discover if an individual is telling the truth or lying. Patterns can describe some characteristics of liars, in this work to face and speech specifically. Face recognition take patttern of face and speech recognition of voice or speech to text. So this paper pretends realize a compendium on lie detection techniques and pattern recognition face and speech. It permits to review the actual state of the tecnhologies realized in these recognitions. Also It presents an analysis of tecnhologies using some of these techniques to resum the result.", "title": "" }, { "docid": "de8e0f866ee88ab01736073ceb536239", "text": "This paper presents a newly developed high torque density motor design for electric racing cars. An interior permanent magnet motor with a flux-concentration configuration is proposed. The 18slots/16poles motor has pre-formed tooth wound coils, rare-earth magnets type material, whilst employing a highly efficient cooling system with forced oil convection through the slot and forced air convection in the airgap. Losses are minimized either by using special materials, i.e. non-oriented thin gage, laminated steel or special construction, i.e. magnet segmentation or twisted wires. The thermal behavior of the motor is modelled and tested using Le Mans racing typical driving cycle. Several prototypes have been built and tested to validate the proposed configuration.", "title": "" }, { "docid": "0a50e10df0a8e4a779de9ed9bf81e442", "text": "This paper presents a novel self-correction method of commutation point for high-speed sensorless brushless dc motors with low inductance and nonideal back electromotive force (EMF) in order to achieve low steady-state loss of magnetically suspended control moment gyro. The commutation point before correction is obtained by detecting the phase of EMF zero-crossing point and then delaying 30 electrical degrees. Since the speed variation is small between adjacent commutation points, the difference of the nonenergized phase's terminal voltage between the beginning and the end of commutation is mainly related to the commutation error. A novel control method based on model-free adaptive control is proposed, and the delay degree is corrected by the controller in real time. 
Both the simulation and experimental results show that the proposed correction method can achieve ideal commutation effect within the entire operating speed range.", "title": "" }, { "docid": "712d5e9e6b29e63949cf0dddd55f9b1d", "text": "Mobile Internet availability, performance and reliability have remained stubbornly opaque since the rise of cellular data access. Conducting network measurements can give us insight into user-perceived network conditions, but doing so requires careful consideration of device state and efficient use of scarce resources. Existing approaches address these concerns in ad-hoc ways.\n In this work we propose Mobilyzer, a platform for conducting mobile network measurement experiments in a principled manner. Our system is designed around three key principles: network measurements from mobile devices require tightly controlled access to the network interface to provide isolation; these measurements can be performed efficiently using a global view of available device resources and experiments; and distributing the platform as a library to existing apps provides the incentives and low barrier to adoption necessary for large-scale deployments. We describe our current design and implementation, and illustrate how it provides measurement isolation for applications, efficiently manages measurement experiments and enables a new class of experiments for the mobile environment.", "title": "" }, { "docid": "11112e1738bd27f41a5b57f07b71292c", "text": "Rotor-cage fault detection in inverter-fed induction machines is still difficult nowadays as the dynamics introduced by the control or load influence the fault-indicator signals commonly applied. In addition, detection is usually possible only when the machine is operated above a specific load level to generate a significant rotor-current magnitude. This paper proposes a new method of detecting rotor-bar defects at zero load and almost at standstill. The method uses the standard current sensors already present in modern industrial inverters and, hence, is noninvasive. It is thus well suited as a start-up test for drives. By applying an excitation with voltage pulses using the switching of the inverter and then measuring the resulting current slope, a new fault indicator is obtained. As a result, it is possible to clearly identify the fault-induced asymmetry in the machine's transient reactances. Although the transient-flux linkage cannot penetrate the rotor because of the cage, the faulty bar locally influences the zigzag flux, leading to a significant change in the transient reactances. Measurement results show the applicability and sensitivity of the proposed method.", "title": "" }, { "docid": "c969b4ad07cefc81c3b39ac8e71e520e", "text": "In this tutorial paper we give a general introduction to verification and validation of simulation models, define the various validation techniques, and present a recommended model validation procedure.", "title": "" }, { "docid": "001b02a8bd211d6dccefa93b1e2a9e6b", "text": "Brain tumor is a life threatening disease. It is any mass that outcomes from abnormal growths of cells in the brain. In this paper a brain tumor diagnostic system is developed. The system determines the type of the tumor which is benign or malignant using the Magnetic Resonance Imaging (MRI) images which are in the Digital Imaging and Communications in Medicine (DICOM) standard format. The system is assessed based on a series of brain tumor images. 
Experimental results demonstrate that the proposed system has a classification accuracy of 98.9%.", "title": "" }, { "docid": "a354949d97de673e71510618a604e264", "text": "Fast Magnetic Resonance Imaging (MRI) is highly in demand for many clinical applications in order to reduce the scanning cost and improve the patient experience. This can also potentially increase the image quality by reducing the motion artefacts and contrast washout. However, once an image field of view and the desired resolution are chosen, the minimum scanning time is normally determined by the requirement of acquiring sufficient raw data to meet the Nyquist–Shannon sampling criteria. Compressive Sensing (CS) theory has been perfectly matched to the MRI scanning sequence design with much less required raw data for the image reconstruction. Inspired by recent advances in deep learning for solving various inverse problems, we propose a conditional Generative Adversarial Networks-based deep learning framework for de-aliasing and reconstructing MRI images from highly undersampled data with great promise to accelerate the data acquisition process. By coupling an innovative content loss with the adversarial loss our de-aliasing results are more realistic. Furthermore, we propose a refinement learning procedure for training the generator network, which can stabilise the training with fast convergence and less parameter tuning. We demonstrate that the proposed framework outperforms state-of-the-art CS-MRI methods, in terms of reconstruction error and perceptual image quality. In addition, our method can reconstruct each image in 0.22ms–0.37ms, which is promising for real-time applications.", "title": "" }, { "docid": "67992d0c0b5f32726127855870988b01", "text": "We present a method for extracting social networks from literature, namely, nineteenth-century British novels and serials. We derive the networks from dialogue interactions, and thus our method depends on the ability to determine when two characters are in conversation. Our approach involves character name chunking, quoted speech attribution and conversation detection given the set of quotes. We extract features from the social networks and examine their correlation with one another, as well as with metadata such as the novel’s setting. Our results provide evidence that the majority of novels in this time period do not fit two characterizations provided by literacy scholars. Instead, our results suggest an alternative explanation for differences in social networks.", "title": "" }, { "docid": "4c85c55ba02b2823aad33bf78d224b61", "text": "We developed an affordance-based methodology to support environmentally conscious behavior (ECB) that conserves resources such as materials, energy, etc. While studying concepts that aim to support ECB, we noted that characteristics of products that enable ECB tend to be more accurately described as affordances than functions. Therefore, we became interested in affordances, and specifically how affordances can be used to design products that support ECB. Affordances have been described as possible ways of interacting with products, or context-dependent relations between artifacts and users. Other researchers have explored affordances in lieu of functions as a basis for design, and developed detailed deductive methods of discovering affordances in products. We abstracted desired affordances from patterns and principles we observed to support ECB, and generated concepts based on those affordances. 
As a possible shortcut to identifying and implementing relevant affordances, we introduced the affordance-transfer method. This method involves altering a product’s affordances to add desired features from related products. Promising sources of affordances include lead-user and other products that support resource conservation. We performed initial validation of the affordance-transfer method and observed that it can improve the usefulness of the concepts that novice designers generate to support ECB. [DOI: 10.1115/1.4025288]", "title": "" }, { "docid": "e28f2a2d5f3a0729943dca52da5d45b6", "text": "Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. We propose a novel, accurate tightly-coupled visual-inertial odometry pipeline for such cameras that leverages their outstanding properties to estimate the camera ego-motion in challenging conditions, such as high-speed motion or high dynamic range scenes. The method tracks a set of features (extracted on the image plane) through time. To achieve that, we consider events in overlapping spatio-temporal windows and align them using the current camera motion and scene structure, yielding motion-compensated event frames. We then combine these feature tracks in a keyframebased, visual-inertial odometry algorithm based on nonlinear optimization to estimate the camera’s 6-DOF pose, velocity, and IMU biases. The proposed method is evaluated quantitatively on the public Event Camera Dataset [19] and significantly outperforms the state-of-the-art [28], while being computationally much more efficient: our pipeline can run much faster than real-time on a laptop and even on a smartphone processor. Furthermore, we demonstrate qualitatively the accuracy and robustness of our pipeline on a large-scale dataset, and an extremely high-speed dataset recorded by spinning an event camera on a leash at 850 deg/s.", "title": "" }, { "docid": "a105e6bc9a3446603959dac61ab50065", "text": "Recent work has examined infrastructure-mediated sensing as a practical, low-cost, and unobtrusive approach to sensing human activity in the physical world. This approach is based on the idea that human activities (e.g., running a dishwasher, turning on a reading light, or walking through a doorway) can be sensed by their manifestations in an environment's existing infrastructures (e.g., a home's water, electrical, and HVAC infrastructures). This paper presents HydroSense, a low-cost and easily-installed single-point sensor of pressure within a home's water infrastructure. HydroSense supports both identification of activity at individual water fixtures within a home (e.g., a particular toilet, a kitchen sink, a particular shower) as well as estimation of the amount of water being used at each fixture. We evaluate our approach using data collected in ten homes. Our algorithms successfully identify fixture events with 97.9% aggregate accuracy and can estimate water usage with error rates that are comparable to empirical studies of traditional utility-supplied water meters. Our results both validate our approach and provide a basis for future improvements.", "title": "" }, { "docid": "697ed30a5d663c1dda8be0183fa4a314", "text": "Due to the Web expansion, the prediction of online news popularity is becoming a trendy research topic. 
In this paper, we propose a novel and proactive Intelligent Decision Support System (IDSS) that analyzes articles prior to their publication. Using a broad set of extracted features (e.g., keywords, digital media content, earlier popularity of news referenced in the article) the IDSS first predicts if an article will become popular. Then, it optimizes a subset of the articles features that can more easily be changed by authors, searching for an enhancement of the predicted popularity probability. Using a large and recently collected dataset, with 39,000 articles from the Mashable website, we performed a robust rolling windows evaluation of five state of the art models. The best result was provided by a Random Forest with a discrimination power of 73%. Moreover, several stochastic hill climbing local searches were explored. When optimizing 1000 articles, the best optimization method obtained a mean gain improvement of 15 percentage points in terms of the estimated popularity probability. These results attest the proposed IDSS as a valuable tool for online news authors.", "title": "" }, { "docid": "4500c668414d0cb1ff18bb8ec00f1d8f", "text": "Governments around the world are increasingly utilising online platforms and social media to engage with, and ascertain the opinions of, their citizens. Whilst policy makers could potentially benefit from such enormous feedback from society, they first face the challenge of making sense out of the large volumes of data produced. In this article, we show how the analysis of argumentative and dialogical structures allows for the principled identification of those issues that are central, controversial, or popular in an online corpus of debates. Although areas such as controversy mining work towards identifying issues that are a source of disagreement, by looking at the deeper argumentative structure, we show that a much richer understanding can be obtained. We provide results from using a pipeline of argument-mining techniques on the debate corpus, showing that the accuracy obtained is sufficient to automatically identify those issues that are key to the discussion, attracting proportionately more support than others, and those that are divisive, attracting proportionately more conflicting viewpoints.", "title": "" }, { "docid": "8941cc8c4b2d7a354baf03fb52f43a07", "text": "Floor surfaces are notable for the diverse roles that they play in our negotiation of everyday environments. Haptic communication via floor surfaces could enhance or enable many computer-supported activities that involve movement on foot. In this paper, we discuss potential applications of such interfaces in everyday environments and present a haptically augmented floor component through which several interaction methods are being evaluated. We describe two approaches to the design of structured vibrotactile signals for this device. The first is centered on a musical phrase metaphor, as employed in prior work on tactile display. The second is based upon the synthesis of rhythmic patterns of virtual physical impact transients. We report on an experiment in which participants were able to identify communication units that were constructed from these signals and displayed via a floor interface at well above chance levels. The results support the feasibility of tactile information display via such interfaces and provide further indications as to how to effectively design vibrotactile signals for them.", "title": "" } ]
scidocsrr
5db02617efcb77ded53bd44c8af9c1cf
Online convolutional dictionary learning
[ { "docid": "ef95fb0c6fae05eb1811928af55c1dbb", "text": "We present the recursive least squares dictionary learning algorithm, RLS-DLA, which can be used for learning overcomplete dictionaries for sparse signal representation. Most DLAs presented earlier, for example ILS-DLA and K-SVD, update the dictionary after a batch of training vectors has been processed, usually using the whole set of training vectors as one batch. The training set is used iteratively to gradually improve the dictionary. The approach in RLS-DLA is a continuous update of the dictionary as each training vector is being processed. The core of the algorithm is compact and can be effectively implemented. The algorithm is derived very much along the same path as the recursive least squares (RLS) algorithm for adaptive filtering. Thus, as in RLS, a forgetting factor ¿ can be introduced and easily implemented in the algorithm. Adjusting ¿ in an appropriate way makes the algorithm less dependent on the initial dictionary and it improves both convergence properties of RLS-DLA as well as the representation ability of the resulting dictionary. Two sets of experiments are done to test different methods for learning dictionaries. The goal of the first set is to explore some basic properties of the algorithm in a simple setup, and for the second set it is the reconstruction of a true underlying dictionary. The first experiment confirms the conjectural properties from the derivation part, while the second demonstrates excellent performance.", "title": "" }, { "docid": "e2a9bb49fd88071631986874ea197bc1", "text": "We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.", "title": "" }, { "docid": "32bb1110b3f30617e8a29a346c893e56", "text": "Article history: Available online 3 May 2016", "title": "" } ]
[ { "docid": "27d073103354137ea71801f37422b3a9", "text": "This paper presents Sniper, an automated memory leak detection tool for C/C++ production software. To track the staleness of allocated memory (which is a clue to potential leaks) with little overhead (mostly <3%), Sniper leverages instruction sampling using performance monitoring units available in commodity processors. It also offloads the time- and space-consuming analyses, and works on the original software without modifying the underlying memory allocator; it neither perturbs the application execution nor increases the heap size. The Sniper can even deal with multithreaded applications with very low overhead. In particular, it performs a statistical analysis, which views memory leaks as anomalies, for automated and systematic leak determination. Consequently, it accurately detected real-world memory leaks with no false positive, and achieved an F-measure of 81% on average for 17 benchmarks stress-tested with various memory leaks.", "title": "" }, { "docid": "13d9a31724234ec646de720f93ee8817", "text": "Lymphoid enhancer-binding factor-1 (LEF1) is a key transcription factor of Wnt signaling. We recently showed that aberrant LEF1 expression induces acute myeloid leukemia (AML) in mice, and found high LEF1 expression in a subset of cytogenetically normal AML (CN-AML) patients. Whether LEF1 expression associates with clinical and molecular patient characteristics and treatment outcomes remained unknown. We therefore studied LEF1 expression in 210 adults with CN-AML treated on German AML Cooperative Group trials using microarrays. High LEF1 expression (LEF1high) associated with significantly better relapse-free survival (RFS; P < .001), overall survival (OS; P < .001), and event-free survival (EFS; P < .001). In multivariable analyses adjusting for established prognosticators, LEF1high status remained associated with prolonged RFS (P = .007), OS (P = .01), and EFS (P = .003). In an independent validation cohort of 196 CN-AML patients provided by the German-Austrian AML Study Group, LEF1high patients had significantly longer OS (P = .02) and EFS (P = .04). We validated the prognostic relevance of LEF1 expression by quantitative PCR, thereby providing a clinically applicable platform to incorporate this marker into future risk-stratification systems for CN-AML. Gene-expression profiling and immunophenotyping revealed up-regulation of lymphopoiesis-related genes and lymphoid cell-surface antigens in LEF1high patients. In summary, we provide evidence that high LEF1 expression is a novel favorable prognostic marker in CN-AML.", "title": "" }, { "docid": "76d514ee806b154b4fef2fe2c63c8b27", "text": "Attacks on systems and organisations increasingly exploit human actors, for example through social engineering, complicating their formal treatment and automatic identification. Formalisation of human behaviour is difficult at best, and attacks on socio-technical systems are still mostly identified through brainstorming of experts. In this work we formalize attack tree generation including human factors; based on recent advances in system models we develop a technique to identify possible attacks analytically, including technical and human factors. Our systematic attack generation is based on invalidating policies in the system model by identifying possible sequences of actions that lead to an attack. 
The generated attacks are precise enough to illustrate the threat, and they are general enough to hide the details of individual steps.", "title": "" }, { "docid": "f8070d46957c2ad0e0c2e28a45337900", "text": "Marfan's syndrome is a systemic disorder of connective tissue caused by mutations in the extracellular matrix protein fibrillin 1. Cardinal manifestations include proximal aortic aneurysm, dislocation of the ocular lens, and long-bone overgrowth. Important advances have been made in the diagnosis and medical and surgical care of affected individuals, yet substantial morbidity and premature mortality remain associated with this disorder. Progress has been made with genetically defined mouse models to elucidate the pathogenetic sequence that is initiated by fibrillin-1 deficiency. The new understanding is that many aspects of the disease are caused by altered regulation of transforming growth factor beta (TGFbeta), a family of cytokines that affect cellular performance, highlighting the potential therapeutic application of TGFbeta antagonists. Insights derived from studying this mendelian disorder are anticipated to have relevance for more common and non-syndromic presentations of selected aspects of the Marfan phenotype.", "title": "" }, { "docid": "17fa294fea1fbf7780983e8063d1bb2c", "text": "Most of the earth’s land surface is inaccessible to regular vehicles so there is a need for mobile robots that can handle difficult terrain. Today’s robots are mostly designed for traveling over relatively smooth, level or inclined, surfaces. This survey will however discuss different locomotion systems for mobile robots used in difficult terrain. Only robots that use ground contact for propulsion are considered which means that robots travelling through air or water are not included.", "title": "" }, { "docid": "898897564f0cf3672cc729669bb8a445", "text": "Knit, woven, and nonwoven fabrics offer a diverse range of stretch and strain limiting mechanical properties that can be leveraged to produce tailored, whole-body deformation mechanics of soft robotic systems. This work presents new insights and methods for combining heterogeneous fabric material layers to create soft fabric-based actuators. This work demonstrates that a range of multi-degree-of-freedom motions can be generated by varying fabrics and their layered arrangements when a thin airtight bladder is inserted between them and inflated. Specifically, we present bending and straightening fabric-based actuators that are simple to manufacture, lightweight, require low operating pressures, display a high torque-to-weight ratio, and occupy a low volume in their unpressurized state. Their utility is demonstrated through their integration into a glove that actively assists hand opening and closing.", "title": "" }, { "docid": "2466ac1ce3d54436f74b5bb024f89662", "text": "In this paper we discuss our work on applying media theory to the creation of narrative augmented reality (AR) experiences. We summarize the concepts of remediation and media forms as they relate to our work, argue for their importance to the development of a new medium such as AR, and present two example AR experiences we have designed using these conceptual tools. 
In particular, we focus on leveraging the interaction between the physical and virtual world, remediating existing media (film, stage and interactive CD-ROM), and building on the cultural expectations of our users.", "title": "" }, { "docid": "c85ee4139239b17d98b0d77836e00b72", "text": "We present a suite of reinforcement learning environments illustrating various safety properties of intelligent agents. These problems include safe interruptibility, avoiding side effects, absent supervisor, reward gaming, safe exploration, as well as robustness to self-modification, distributional shift, and adversaries. To measure compliance with the intended safe behavior, we equip each environment with a performance function that is hidden from the agent. This allows us to categorize AI safety problems into robustness and specification problems, depending on whether the performance function corresponds to the observed reward function. We evaluate A2C and Rainbow, two recent deep reinforcement learning agents, on our environments and show that they are not able to solve them satisfactorily.", "title": "" }, { "docid": "3ea32e9e59a43d948d61fde8b5179b5a", "text": "This paper presents a cable-driven parallel mechanism as a haptic interface and its underlying control method. This human-sized, three-degree-of-freedom mechanism has a tetrahedral architecture, four cables and evolves in three-dimensional space. A brief review of the kinematics of the mechanism is presented. Also, an admittance control law coupled with a closed-loop velocity controller is proposed. The control method is then refined by introducing adaptations for smooth surfaces and sharp edges. This control method is then validated by experimental results. Furthermore, the geometry of the mechanism is identified by a method that does not require any other sensor than the motor encoders.", "title": "" }, { "docid": "8d7b0829c1172eff0aa00f34352a4c62", "text": "As a commonly used technique in data preprocessing, feature selection selects a subset of informative attributes or variables to build models describing data. By removing redundant and irrelevant or noise features, feature selection can improve the predictive accuracy and the comprehensibility of the predictors or classifiers. Many feature selection algorithms with different selection criteria has been introduced by researchers. However, it is discovered that no single criterion is best for all applications. In this paper, we propose a framework based on a genetic algorithm (GA) for feature subset selection that combines various existing feature selection methods. The advantages of this approach include the ability to accommodate multiple feature selection criteria and find small subsets of features that perform well for a particular inductive learning algorithm of interest to build the classifier. We conducted experiments using three data sets and three existing feature selection methods. The experimental results demonstrate that our approach is a robust and effective approach to find subsets of features with higher classification accuracy and/or smaller size compared to each individual feature selection algorithm. F. Tan (B) · X. Fu · Y. Zhang · A. G. Bourgeois Department of Computer Science, Georgia State University, Atlanta, GA 30302, USA e-mail: ftan@student.gsu.edu X. Fu e-mail: xfu1@gsu.edu Y. Zhang e-mail: yzhang@cs.gsu.edu A. G. 
Bourgeois e-mail: anu@cs.gsu.edu", "title": "" }, { "docid": "e0d42be891c0278360aad3c07a3f3a8f", "text": "In this article we compare and integrate two well-established approaches to motivating therapeutic change, namely self-determination theory (SDT; Deci & Ryan, 1985, ) and motivational interviewing (MI; Miller & Rollnick, 1991, ). We show that SDT's theoretical focus on the internalization of therapeutic change and on the issue of need-satisfaction is fully compatible with key principles and clinical strategies within MI. We further suggest that basic need-satisfaction might be an important mechanism accounting for the positive effects of MI. Conversely, MI principles may provide SDT researchers with new insight into the application of SDT's theoretical concept of autonomy-support, and suggest new ways of testing and developing SDT. In short, the applied approach of MI and the theoretical approach of SDT might be fruitfully married, to the benefit of both.", "title": "" }, { "docid": "6235c7e1682b5406c95f91f9259288f8", "text": "Model-driven development is an emerging area in software development that provides a way to express system requirements and architecture at a high level of abstraction through models. It involves using these models as the primary artifacts during the development process. One aspect that is holding back MDD from more wide-spread adoption is the lack of a well established and easy way of performing model to model (M2M) transformations. We propose to explore and compare popular M2M model transformation languages in existence: EMT , Kermeta, and ATL. Each of these languages support transformation of Ecore models within the Eclipse Modeling Framework (EMF). We attempt to implement the same transformation rule on identical meta models in each of these languages to achieve the appropriate transformed model. We provide our observations in using each tool to perform the transformation and comment on each language/tool’s expressive power, ease of use, and modularity. We conclude by noting that ATL is our language / tool of choice because it strikes a balance between ease of use and expressive power and still allows for modularity. We believe this, in conjunction with ATL’s role in the official Eclipse M2M project will lead to widespread use of ATL and, hopefully, a step forward in M2M transformations.", "title": "" }, { "docid": "bd3e5a403cc42952932a7efbd0d57719", "text": "The acoustic echo cancellation system is very important in the communication applications that are used these days; in view of this importance we have implemented this system practically by using DSP TMS320C6713 Starter Kit (DSK). The acoustic echo cancellation system was implemented based on 8 subbands techniques using Least Mean Square (LMS) algorithm and Normalized Least Mean Square (NLMS) algorithm. The system was evaluated by measuring the performance according to Echo Return Loss Enhancement (ERLE) factor and Mean Square Error (MSE) factor. Keywords—Acoustic echo canceller; Least Mean Square (LMS); Normalized Least Mean Square (NLMS); TMS320C6713; 8 subbands adaptive filter", "title": "" }, { "docid": "b5d88aba6371d90531fd6d557c1cd46d", "text": "Ontologies offer significant benefits to multi-agent systems: interoperability, reusability, support for multi-agent system (MAS) development activities (such as system analysis and agent knowledge modeling) and support for MAS operation (such as agent communication and reasoning). 
This paper presents an ontology-based methodology, MOBMAS, for the analysis and design of multi-agent systems. MOBMAS is the first methodology that explicitly identifies and implements the various ways in which ontologies can be used in the MAS development process and integrated into the MAS model definitions. In this paper, we present comprehensive documentation and validation of MOBMAS. 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "a604527951768b088fe2e40104fa78bb", "text": "In this study, the Multi-Layer Perceptron (MLP)with Back-Propagation learning algorithm are used to classify to effective diagnosis Parkinsons disease(PD).It’s a challenging problem for medical community.Typically characterized by tremor, PD occurs due to the loss of dopamine in the brains thalamic region that results in involuntary or oscillatory movement in the body. A feature selection algorithm along with biomedical test values to diagnose Parkinson disease.Clinical diagnosis is done mostly by doctor’s expertise and experience.But still cases are reported of wrong diagnosis and treatment.Patients are asked to take number of tests for diagnosis.In many cases,not all the tests contribute towards effective diagnosis of a disease.Our work is to classify the presence of Parkinson disease with reduced number of attributes.Original,22 attributes are involved in classify.We use Information Gain to determine the attributes which reduced the number of attributes which is need to be taken from patients.The Artificial neural networks is used to classify the diagnosis of patients.Twenty-Two attributes are reduced to sixteen attributes.The accuracy is in training data set is 82.051% and in the validation data set is 83.333%. Keywords—Data mining , classification , Parkinson disease , Artificial neural networks , Feature Selection , Information Gain", "title": "" }, { "docid": "49ee2dafe659cfb82c623a3e3e093f12", "text": "This study examined the naturally occurring dimensions of the dentogingival junction in 10 adult human cadaver jaws. The connective tissue attachment, epithelial attachment, loss of attachment, and sulcus depth were measured histomorphometrically for 171 tooth surfaces. Mean measurements were 1.34 +/- 0.84 mm for sulcus depth; 1.14 +/- 0.49 mm for epithelial attachment; 0.77 +/- 0.32 mm for connective tissue attachment; and 2.92 +/- 1.69 mm for loss of attachment. These dimensions, as measured in this study, support the concept that the connective tissue attachment is a variable width within a more narrow distribution and range than the epithelial attachment, sulcus depth, or loss of attachment. The level of the loss of attachment was not predictive of the connective tissue attachment length.", "title": "" }, { "docid": "3fa30df910c964bb2bf27a885aa59495", "text": "In an Intelligent Environment, he user and the environment work together in a unique manner; the user expresses what he wishes to do, and the environment recognizes his intentions and helps out however it can. If well-implemented, such an environment allows the user to interact with it in the manner that is most natural for him personally. He should need virtually no time to learn to use it and should be more productive once he has. But to implement a useful and natural Intelligent Environment, he designers are faced with a daunting task: they must design a software system that senses what its users do, understands their intentions, and then responds appropriately. 
In this paper we argue that, in order to function reasonably in any of these ways, an Intelligent Environment must make use of declarative representations of what the user might do. We present our evidence in the context of the Intelligent Classroom, a facility that aids a speaker in this way and uses its understanding to produce a video of his presentation.", "title": "" }, { "docid": "a208e4f4e6092a731d4ec662c1cea1bc", "text": "The CDMA channel with randomly and independently chosen spreading sequences accurately models the situation where pseudonoise sequences span many symbol periods. Furthermore, its analysis provides a comparison baseline for CDMA channels with deterministic signature waveforms spanning one symbol period. We analyze the spectral efficiency (total capacity per chip) as a function of the number of users, spreading gain, and signal-to-noise ratio, and we quantify the loss in efficiency relative to an optimally chosen set of signature sequences and relative to multiaccess with no spreading. White Gaussian background noise and equal-power synchronous users are assumed. The following receivers are analyzed: a) optimal joint processing, b) single-user matched filtering, c) decorrelation, and d) MMSE linear processing.", "title": "" }, { "docid": "77c8a86fba0183e2b9183ba823e9d9cf", "text": "The study of neural circuit reconstruction, i.e., connectomics, is a challenging problem in neuroscience. Automated and semi-automated electron microscopy (EM) image analysis can be tremendously helpful for connectomics research. In this paper, we propose a fully automatic approach for intra-section segmentation and inter-section reconstruction of neurons using EM images. A hierarchical merge tree structure is built to represent multiple region hypotheses and supervised classification techniques are used to evaluate their potentials, based on which we resolve the merge tree with consistency constraints to acquire final intra-section segmentation. Then, we use a supervised learning based linking procedure for the inter-section neuron reconstruction. Also, we develop a semi-automatic method that utilizes the intermediate outputs of our automatic algorithm and achieves intra-segmentation with minimal user intervention. The experimental results show that our automatic method can achieve close-to-human intra-segmentation accuracy and state-of-the-art inter-section reconstruction accuracy. We also show that our semi-automatic method can further improve the intra-segmentation accuracy.", "title": "" }, { "docid": "332afbff53a2552d392bb6e21ab50ab9", "text": "Twitter is an important and influential social media platform, but much research into its uses remains centred around isolated cases – e.g. of events in political communication, crisis communication, or popular culture, often coordinated by shared hashtags (brief keywords, prefixed with the symbol ‘#’). In particular, a lack of standard metrics for comparing communicative patterns across cases prevents researchers from developing a more comprehensive perspective on the diverse, sometimes crucial roles which hashtags play in Twitter-based communication. We address this problem by outlining a catalogue of widely applicable, standardised metrics for analysing Twitter-based communication, with particular focus on hashtagged exchanges. We also point to potential uses for such metrics, presenting an indication of what broader comparisons of diverse cases can", "title": "" } ]
scidocsrr